Formerly known as Trevor1

Not to be confused with the Trevor who works at Open Phil.

Did I write something confusing? It might not look like it, but my posts and comments leave a lot unsaid. DMs open.



I think that rallying the AI safety movement behind consistent strategies is like herding cats, in ways that EA vegan advocacy largely isn't, because AI safety is largely a meritocracy whereas EA vegan advocacy is much more open source. EA vegan advocacy has no geopolitics and no infohazards; its most galaxy-brained plot is meat substitute technology, and its greatest adversary is food industry PR.

Furthermore, the world is changing around AI safety, especially between 2020 and 2022, and possibly in a rather hostile way. This makes open-source participation even less feasible than it was during the 2010s.

My thinking about this (3 minute read) is that EA will be deliberately hijacked by an external organization or force, rather than gradually eroded by weakening epistemic norms, which is the focus of this post. I generally focus on high-tech cognitive hacks (e.g. combining social media news feed algorithms and user data with multi-armed bandit algorithms, data scientists, and psychology researchers, to steer people's thinking in measurable directions, primarily towards mindless news feed scrolling/zombie mode, which is instrumental for a wide variety of other goals), not the old-fashioned human-written persuasion that this post focuses on (using human intelligence to find galaxy-brained combinations of words that maximize for effect).

But I think that an internal conflict between animal welfare and the rest of EA is at risk of being exploited by outsiders, particularly vested interests that EA steps on, such as those connected to the AI race (e.g. Facebook or intelligence agencies).

What are the odds that e/acc is just futurism, culturally steered by gradient descent towards whatever memes most effectively correlate with corporate AI engineers enthusiastically choosing AI capabilities research over AI safety? I can name at least two corporations that could and would do something like that (I'm not willing to make specific accusations at this time).

I'm really glad that this is being evaluated. I don't think people realize just how much is downstream of EA's community building: if EA grows at a rate of 1.5x per year, then EA is less than 6 years away from dectupling (10x) in size. No matter who you are or what you are doing, EA dectupling in size will inescapably saturate your environment with people, so if that's going to happen, it should at least be done right instead of wrong. That shouldn't be a big ask.
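The growth arithmetic above can be checked directly. A minimal sketch, assuming the hypothetical 1.5x/year growth rate stated in the comment (not a measured figure):

```python
import math

# Years until a quantity growing at `growth_rate` per year reaches
# `target_multiple` of its current size: solve growth_rate**t = target_multiple.
growth_rate = 1.5      # assumed annual growth factor from the comment
target_multiple = 10   # "dectupling"

years_to_10x = math.log(target_multiple) / math.log(growth_rate)
print(round(years_to_10x, 2))  # prints 5.68, i.e. less than 6 years
```

So at a sustained 1.5x/year, a 10x increase arrives in roughly 5.7 years, consistent with the "less than 6 years" claim.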

The real challenge is creating an epistemic immune system that can fight threats we can’t even detect yet.

Hard agree on this. If slow takeoff happens soon, this will inescapably become an even more serious problem than it already is. There are so many awful and complicated things contained within "threats we can't even detect yet" when you're dealing with historically unprecedented information environments.

I think that more independent AI safety orgs introduce more liabilities and points of failure, such as infohazard leaks, the unilateralist's curse, accidental capabilities research, mental health spirals, and inter-org conflict. Rather, there should be sub-orgs underneath main orgs, both de jure and de facto, with full leadership subordination and limited access to infohazardous information.

Afaik sugar has a withdrawal period: it peaks somewhere between 2 days and a week and mostly tapers off by 2 weeks. I have no idea how non-processed sugar plays into it, only that dehydration thirst amplifies the symptoms and that food and water alleviate them after a time delay (possibly sugar withdrawal reduces the propensity to drink, but that would be very hard to test, and that one's probably just me anyway). The high from a sugar rush, and the perceived good-tastingness of processed desserts, depend almost entirely on how many days it's been since the last dose; I think 2-5 days had the strongest effect. I strongly suspect that this is what makes people attracted to sugar, especially children, since they don't count the days since their last dose.

I had a hard time finding good info about this, but a quick Google search explicitly for "sugar withdrawal" gave me results that all seemed to be pointing in the same direction. If Google were to make this easier to find than just for people who type in "sugar withdrawal", maybe that would be interpreted as declaring war on a large chunk of the food industry in a very severe and visible way. Or maybe tons of people are writing articles on a foundation of a few crappy papers. I verified the dehydration effect, the timelines, and the ultimate outcome with myself and my (biological) mother, but genetic diversity will likely require larger sample sizes for better estimates.

But it really does seem like people can just quit, though there could be potentially harsh consequences for flip-flopping and not quitting all the way. Without good data on the effect on the brain (some kind of swelling?), I don't see people choosing to quit, even if the lion's share of the reinforcement were proven to come from rare occasions where people eat sugar during a specific phase of withdrawal and get surprised by massive hedons.

I was thinking more along the lines of SIGINT than HUMINT, and more along the lines of external threats than internal threats. I suppose that, along HUMINT lines, if Facebook AI Labs hires private investigators to follow Yudkowsky around to find dirt on him, or the NSA starts contaminating people's mail/packages/food deliveries with IQ-reducing neurotoxins, then yes, I'm definitely saying there should be someone to immediately take strategic action. We're the people who predicted, decades in advance, that slow takeoff would make the entire world (and, by extension, the world's intelligence agencies, whose various operations will all be interrupted by the revelation that all things revolve around AI alignment) come apart at the seams; and we also don't have the option of giving up on being extraordinary and significant, so we should expect things to get really serious at some point or another. On issues more along traditional community health lines, that's not my division.

I think that Habryka podcast has a lot of potential for projects; it just needs a wide variety of people to build off of it.

I think that this idea came from a thought process that generally generates good ideas (e.g. an LLM API that predicts whether someone has read the Sequences), but this time, due to bad luck, it ended up outputting an extremely bad idea. (I didn't downvote.)
