There's a kind of game here on Less Wrong.
It's the kind of game that's a little rude to point out. Part of how it works is by not being named.
Or rather, attempts to name it get dissected so everyone can agree to continue ignoring the fact that it's a game.
So I'm going to do the rude thing. But I mean to do so gently. It's not my intention to end the game. I really do respect folks' right to keep playing it if they want.
Instead I want to offer an exit to those who would really, really like one.
I know I really super would have liked that back in 2015 & 2016. That was the peak of my hell in rationalist circles.
I'm watching the game intensify this year. Folk have been talking about this a lot. How there's a ton more talk of AI here, and a stronger tone of doom.
I bet this is just too intense for some folk. It was for me when I was playing. I just didn't know how to stop. I kind of had to break down in order to stop. All the way to a brush with severe depression and suicide.
And it also ate parts of my life I dearly, dearly wish I could get back.
So, in case this is audible and precious to some of you, I'd like to point a way to ease.
The Apocalypse Game
The upshot is this:
You have to live in a kind of mental illusion to be in terror of the end of the world.
Illusions don't look on the inside like illusions. They look like how things really are.
Part of how this one does the "daughter's arm" thing is by redirecting attention to facts and arguments.
- "Here's why the argument about AI makes sense."
- "Do you have some alternative view of what will happen? How do you address XYZ?"
- "What makes it an 'illusion'? I challenge that framing because it dismisses our ability to analyze and understand yada yada."
None of this is relevant.
I'm pointing at something that comes before these thoughts. The thing that fuels the fixation on the worldview.
I also bet this is the thing that occasionally drives some people in this space psychotic, depressed, or into burnout.
The basic engine is:
- There's a kind of underlying body-level pain. I would tag this as "emotional pain" but it's important to understand that I really am pointing at physical sensations.
- The pain is kind of stored and ignored. Often it arose from a very young age but was too overwhelming, so child-you found methods of distraction.
- This is the basic core of addiction. Addictions are when there's an intolerable sensation but you find a way to bear its presence without addressing its cause. The more that distraction becomes a habit, the more that's the thing you automatically turn to when the sensation arises. This dynamic becomes desperate and life-destroying to the extent that it triggers a Red Queen race.
- A major unifying flavor of the LW attractor is intense thought as an addictive distraction. And the underlying flavor of pain that fuels this addiction is usually some variation of fear.
- In not-so-coincidental analogy to uFAI, these distracting thoughts can come to form autonomous programs that memetically evolve to have something like survival and reproductive instincts — especially in the space between people as they share and discuss these thoughts with each other.
- The rationalist memeplex focuses on AI Ragnarok in part because it's a way for the intense thought to pull fuel from the underlying fear.
In this case, the search for truth isn't in service to seeing reality clearly. The logic of economic races to the bottom, orthogonality, etc. might very well be perfectly correct.
But these thoughts are also (and in some cases, mostly) in service to the doomsday meme's survival.
Now, I know that thinking of memes as living beings is something of an ontological leap in these parts. It's totally compatible with the LW memeplex, but it seems to be too woo-adjacent and triggers an unhelpful allergic response.
So I suggested a reframe at the beginning, which I'll reiterate here:
Your body's fight-or-flight system is being used as a power source to run a game, called "OMG AI risk is real!!!"
And part of how that game works is by shoving you into a frame where it seems absolutely fucking real. That this is the truth. This is how reality just is.
And this can be fun!
And who knows, maybe you can play this game and "win". Maybe you'll have some kind of real positive impact that matters outside of the game.
But… well, for what it's worth, as someone who turned off the game and has reworked his body's use of power quite a lot, it's pretty obvious to me that this isn't how it works. If playing this game has any real effect on the true world situation, it's to make the thing you're fearing worse.
(…which is exactly what's incentivized by the game's design, if you'll notice.)
I want to emphasize — again — that I am not saying that AI risk isn't real.
I'm saying that really, truly orienting to that issue isn't what LW is actually about.
That's not the game being played here. Not collectively.
But the game that is being played here absolutely must seem on the inside like that is what you're doing.
Ramping Up Intensity
When Eliezer rang the doom bell, my immediate thought was:
"Ah, look! The gamesmaster has upped the intensity. Like preparing for a climax!"
I mean this with respect and admiration. It's very skillful. Eliezer has incredible mastery in how he weaves terror and insight together.
And I don't mean this at all to dismiss what he's saying. Though I do disagree with him about overall strategy. But it's a sincere disagreement, not an "Oh look, what a fool" kind of thing.
What I mean is, it's a masterful move of making the game even more awesome.
(…although I doubt he consciously intended it that way!)
I remember when I was in the thick of this AI apocalypse story, everything felt so… epic. Even questions of how CFAR dealt with garbage at its workshops seemed directly related to whether humanity would survive the coming decades. The whole experience was often thrilling.
And on the flipside, sometimes I'd collapse. Despair. "It's too much" or "Am I even relevant?" or "I think maybe we're just doomed."
These are the two sort of built-in physiological responses to fight-or-flight energy: activation, or collapse.
(There's a third, which is a kind of self-holding. But it has to be built. Infants aren't born with it. I'll point in that direction a bit later.)
In the spirit of feeling rationally, I'd like to point out something about this use of fight-or-flight energy:
If your body's emergency mobilization systems are running in response to an issue, but your survival doesn't actually depend on actions on a timescale of minutes, then you are not perceiving reality accurately.
Which is to say: If you're freaked out but rushing around won't solve the problem, then you're living in a mental hallucination. And it's that hallucination that's scaring your body.
Again, this isn't to say that your thoughts are incorrectly perceiving a future problem.
But if it raises your blood pressure or quickens your breath, then you haven't integrated what you're seeing with the reality of your physical environment. Where you physically are now. Sitting here (or whatever) reading this text.
So… folk who are wringing their hands and feeling stressed about the looming end of the world via AI?
Y'all are hallucinating.
If you don't know what to do, and you're using anxiety to power your minds to figure out what to do…
…well, that's the game.
The real thing doesn't work that way.
But hey, this sure is thrilling, isn't it?
As long as you don't get stuck in that awful collapse space, or go psychotic, and join the fallen.
But the risk of that is part of the fun, isn't it?
(Interlude)
A brief interlude before I name the exit.
I want to emphasize again that I'm not trying to argue anyone out of doing this intense thing.
The issue is that this game is way, way out of range for lots of people. But some of those people keep playing it because they don't know how to stop.
And they often don't even know that there's something on this level to stop.
You're welcome to object to my framing, insist I'm missing some key point, etc.
Frankly I don't care.
I'm not writing this to engage with the whole space in some kind of debate about AI strategy or landscape or whatever.
I'm trying to offer a path to relief to those who need it.
That no, this doesn't have to be the end of the world.
And no, you don't have to grapple with AI to sort out this awful dread.
That's not where the problem really is.
I'm not interested in debating that. Not here right now.
I'm just pointing out something for those who can, and want to, hear it.
Land on Earth and Get Sober
So, if you're done cooking your nervous system and want out…
…but this AI thing gosh darn sure does look too real to ignore…
…what do you do?
My basic advice here is to land on Earth and get sober.
The thing driving this is a pain. You feel that pain when you look out at the threat and doom of AI, but you cover it up with thoughts. You pretend it's about this external thing.
I promise, it isn't.
I know. I really do understand. It really truly looks like it's about the external thing.
But… well, you know how when something awful happens and gets broadcast (like the recent shooting), some people look at it with a sense of "Oh, that's really sad" and are clearly impacted, while others utterly flip their shit?
Obviously the difference there isn't in the event, or in how they heard about it. Maybe sometimes, but not mostly.
The difference is in how the event lands for the listener. What they make it mean. What bits of hidden pain are ready to be activated.
You cannot orient in a reasonable way to something that activates and overwhelms you this way. Not without tremendous grounding work.
So rather than believing the distracting thoughts that you can somehow alleviate your terror and dread with external action…
…you've got to stop avoiding the internal sensation.
When I talked earlier about addiction, I didn't mean that just as an analogy. There's a serious withdrawal experience that happens here. Withdrawal from an addiction is basically a heightening of the intolerable sensation (along with having to fight mechanical habits of seeking relief via the addictive "substance").
So in this case, I'm talking about all this strategizing, and mental fixation, and trying to model the AI situation.
I'm not saying it's bad to do these things.
I'm saying that if you're doing them as a distraction from inner pain, you're basically drunk.
You have to be willing to face the awful experience of feeling, in your body, in an inescapable way, that you are terrified.
I sort of want to underline that "in your body" part a bazillion times. This is a spot I keep seeing rationalists miss — because the preferred recreational drug here is disembodiment via intense thinking. You've got to be willing to come back, again and again, to just feeling your body without story. Notice how you're looking at a screen, and can feel your feet if you try, and are breathing. Again and again.
It's also really, really important that you do this kindly. It's not a matter of forcing yourself to feel what's present all at once. You might not even be able to find the true underlying fear! Part of the effect of this particular "drug" is letting the mind lead. Making decisions based on mental computations. And kind of like minds can get entrained to porn, minds entrained to distraction via apocalypse fixation will often hide their power source from their host.
(In case that was too opaque for you just yet, I basically just said "Your thoughts will do what they can to distract you from your true underlying fear." People often suddenly go blank inside when they look inward this way.)
So instead of trying to force it all at once, it's a matter of titrating your exposure. Noticing that AI thoughts are coming up again, and pausing, and feeling what's going on in your body. Taking a breath for a few seconds. And then carrying on with whatever.
This is slow work. Unfortunately your "drug" supply is internal, so getting sober is quite a trick.
But this really is the exit. As your mind clears up… well, it's very much like coming out of the fog of a bender and realizing that no, really, those "great ideas" you had just… weren't great. And now you're paying the price on your body (and maybe your credit card too!).
There are tons of resources for this kind of direction. It gets semi-independently reinvented a lot, so there are lots of different names and frameworks for this. One example that I expect to be helpful for at least some LWers who want to land on Earth & get sober is Irene Lyon, who approaches this through a "trauma processing" framework. She offers plenty of free material on YouTube. Her angle is in the same vein as Gabor Maté and Peter Levine.
But hey, if you can feel the thread of truth in what I'm saying and want to pursue this direction, but you find you can't engage with Irene Lyon's approach, feel free to reach out to me. I might be able to find a different angle for you. I want anyone who wants freedom to find it.
But… but Val… what about the real AI problem?!
Okay, sure. I'll say a few words here.
…although I want to point out something: The need to have this answered is coming from the addiction to the game. It's not coming from the sobriety of your deepest clarity.
That's actually a complete answer, but I know it doesn't sound like one, so I'll say a little more.
Yes, there's a real thing.
And yes, there's something to do about it.
But you're almost certainly not in a position to see the real thing clearly or to know what to do about it.
And in fact, attempts to figure the real thing out and take action from this drunk gamer position will make things worse.
(I hesitate to use the word "worse" here. That's not how I see it. But I think that's how it translates to the in-game frame.)
This is what Buddhists should have meant (and maybe did/do?) when they talk about "karma". How deeply entangled in this game is your nervous system? Well, when you let that drive how you interact with others, their bodies get alarmed in similar ways, and they get more entangled too.
Memetic evolution drives how that entangling process happens on large scales. When that becomes a defining force, you end up with self-generating pockets of Hell on Earth.
This recent thing with FTX is totally an example. Totally. Threads of karma/trauma/whatever getting deeply entangled and knotted up and tight enough that large-scale flows of collective behavior create an intensely awful situation.
You do not solve this by trying harder. Tugging the threads harder.
In fact, that's how you make it worse.
This is what I meant when I said that actually dealing with AI isn't the true game in LW-type spaces, even though it sure seems like it on the inside.
It's actually helpful to the game for the situation to constantly seem barely maybe solvable but to have major setbacks.
And this really can arise from having a sincere desire to deal with the real problem!
But that sincere desire, when channeled into the Matrix of the game, doesn't have any power to do the real thing. There's no leverage.
The real thing isn't thrilling this way. It's not epic.
At least, not any more epic than holding someone you love, or taking a stroll through a park.
To oversimplify a bit: You cannot meaningfully help with the real thing until you're sober.
Now, if you want to get sober and then you roll up your sleeves and help…
…well, fuck yeah! Please. Your service would be a blessing to all of us. Truly. We need you.
But it's gotta come from a different place. Tortured mortals need not apply.
And frankly, the reason AI in particular looks like such a threat is because you're fucking smart. You're projecting your inner hell onto the external world. Your brilliant mind can create internal structures that might damn well take over and literally kill you if you don't take responsibility for this process. You're looking at your own internal AI risk.
I hesitate to point that out because I imagine it creating even more body alarm.
But it's the truth. Most people wringing their hands about AI seem to let their minds possess them more and more, and pour more & more energy into their minds, in a kind of runaway process that's stunningly analogous to uFAI.
The difference is, you don't have to make the entire world change in order to address this one.
You can take coherent internal action.
You can land on Earth and get sober.
That's the internal antidote.
It's what offers relief — eventually.
And from my vantage point, it's what leads to real hope for the world.
There's a class of things that could be described as losing trust in yourself and in your ability to reason.
For a mild example, a friend of mine who tutors people in math recounts that many people have low trust in their ability to reason mathematically. He often asks his students to speak out loud while solving a problem, to find out how they are approaching it. And some of them will say something along the lines of, "well, at this point it would make the most sense to me to [apply some simple technique], but I remember that when our teacher was demonstrating how to solve this, he used [some more advanced technique], so maybe I should do that instead".
The student who does that isn't trusting that the algorithm of "do what makes the most sense to me" will eventually lead to the correct outcome. Instead, they're trying to replace it with "do what I recall an authority figure doing, even if I don't understand why".
Now it could be that the simple technique is wrong to apply here, and the more advanced one is needed. But if the student had more self-trust and tried the thing that made the most sense to them, then their attempt to solve the problem using the simple approach might help them understand why that approach doesn't work and why they need another approach. Or maybe it's actually the case that the simple approach does work just as well - possibly the teacher did something needlessly complex, or maybe the student just misremembers what the teacher did. In which case they would have learned a simpler way of solving the problem.
Whereas if the student always just tries to copy what they remember the teacher doing - guessing the teacher's password, essentially - even if they do get it right, they won't develop a proper understanding of why it went right. The algorithm that they're running isn't "consider what you know of math and what makes the most sense in light of that", it's "try to recall instances of authority figures solving similar problems and do what they did". Which only works to the extent that you can recall instances of authority figures solving problems highly similar to the one you're dealing with.
Why doesn't the student want to try their own approach first? After all, the worst that could happen is that it wouldn't work and they would have to try something else, right?
But if you have math trauma - if you've had difficulties with math and been humiliated for it - then trying an approach and failing at it isn't something that you could necessarily just shrug at. Instead, it will feel like another painful reminder that You Are Bad At Math and that You Will Never Figure This Out and that You Shouldn't Even Try. It might make you feel lost and disoriented and make you hope that someone would just tell you what to do. (It doesn't necessarily need to feel this extreme - it's enough if the thought of trying and failing just produces a mild flinch away from it.)
In this case, you need to find some reassurance that trying and failing is actually safe. To build trust in the notion that even if you do fail once, or twice, or thrice, or however many times it takes, you'll still be able to learn from each failure and figure out the right answer eventually. That's what enables you to do the thing that's required to actually learn. (Of course, some problems are just too hard and then you'll need to ask someone for guidance - but only after you've exhausted every approach that seemed promising to you.)
Now that's how it looks in the case of math. It's also possible to lose trust in yourself in other domains; e.g. Anna mentions here how learning about AI risk sometimes destabilizes self-trust when it comes to your career decisions.
And besides domain-specific self-trust, there also seems to be some relatively domain-general component of "how much do you trust your own ability to figure stuff out eventually". In all cases, I suspect that the self-mistrust has to do with feeling that it's not safe to trust yourself - because you've been punished for doing so in the past, or because you feel that AI risk is so important that there isn't any room to make mistakes.
But you still need to have trust in yourself. Knowing that yes, it's possible that trusting yourself will mean that you do make the wrong call and nothing catches you and then you die, but that's just the way it goes. Even if you decided to outsource your decisions to someone else, not only would that be unlikely to work, but you'd still need to trust your own ability in choosing who to outsource them to.
Scott Alexander has also speculated that depression involves a global lack of self-trust - predictive processing suggests that various neural predictions come with associated confidence levels. And "low global self-trust" may mean that the confidence your brain assigns even to predictions like "it's worth getting up from bed today" falls low enough that it's no longer strongly motivating.
To go back to Elizabeth's original sentence... looked at from a certain angle, Val's post can be read as saying "those thoughts that you have about AI risk? Don't believe in them; believe in what I am saying instead". Read that way, that's a move that undermines self-trust. Stop thinking in terms of what makes sense to you, and replace that with what you think Val would approve of.
And while Val's post is not exactly talking about a lack of self-trust, it's talking about something in a related space. It's talking about how some experiences have been so painful that the body is in a constant low-grade anxiety/vigilance response, and the person isn't able to stop and be with those unpleasant sensations - similar to how the math student isn't able to stop and try out uncertain approaches, as it's too painful for the student to be with the unpleasant sensations of shame and humiliation of being Bad At Math.
Both "I'm feeling too anxious to trust myself" and "I'm feeling too anxious to stop thinking about AI" are problems that orient one's attention away from bodily sensations. "You can't fight fire with fire" - you can't solve anxiety about AI by with a move that creates more unpleasant bodily sensations and makes it harder to orient your attention to your body.