I think there's a bunch of useful stuff here. In particular, I think that decisions driven by deep-rooted fear are often very counterproductive, and that many rationalists often have "emergency mobilization systems" running in ways which aren't conducive to good long-term decision-making. I also think that paying attention to bodily responses is a great tool for helping fix this (and in fact was helpful for me in defusing annoyance when reading this post). But I want to push back on the way in which it's framed in various places as all-or-nothing: exit the game, or keep playing. Get sober, or stay drunk. Hallucination, not real fear.
In fact, you can do good and important work while also gradually coming to terms with your emotions, trying to get more grounded, and noticing when you're making decisions driven by visceral fear and taking steps to fix that. Indeed, I expect that almost all good and important work throughout history has been done by people who are at various stages throughout that process, rather than people who first dealt with their traumas and only then turned to the work. (EDIT: in a later comment, Valentine says he doesn't endorse the claim that people should deal...
it seems that "you need to do the trauma processing first and only then do useful work" is a harmful self-propagating meme in much the same way as "you need to track and control every variable in order for AI to go well"
This. Trauma processing is just as prone to ouroboros-ing as x-risk work, if not more so.
Wouldn't it be relevant in that someone could recognize unproductive, toxic dynamics in their concerns about AI risk as per your point (if I understand you correctly), decide to process trauma first and then get stuck in the same sorts of traps? While "I'm traumatized and need to fix it before I can do anything" may not sound as flashy as "My light cone is in danger from unaligned, high-powered AI and I need to fix that before I can do anything", it's just as capable of paralyzing a person, and I speak both from my own past mistakes and from those of multiple friends.
Of course that's possible. I didn't mean to dismiss that part.
But… well, as I just wrote to Richard_Ngo:
If you just go around healing traumas willy-nilly, then you might not ever see through any particular illusion like this one if it's running in you.
Kind of like how generically working on trauma processing might or might not help an alcoholic quit drinking. There's some reason for hope, but it's possible to get lost in loops of navel-gazing, especially if they never even admit to themselves that they have a problem.
But if it's targeted, the addiction basically doesn't stand a chance.
I'm not trying to say "Just work on traumas and be Fully Healed™ before working on AI risk."
I'm saying something much, much more precise.
I do in fact think there's basically no point in someone working on AI risk if they don't dissolve this specific trauma structure.
Well, or at least make it fully conscious and build their nervous system's holding capacity enough that (a) they can watch it trying to run in real time and (b) they can also reliably stop it from grabbing their inner steering wheel, so to speak.
But frankly, for most people it'd be easier just to fully integrate the pain than it would be to develop that level of general nervous system capacity without integrating said pain.
I don't understand how specifically you think the process of recognizing the illusion is related to the process of healing traumas. But I also object to ideas like "you need to orient towards your fear as an illusion first and only then do useful work", for roughly the same reasons (in particular, the way it's all-or-nothing). So I'll edit my original comment to clarify that this is a more central/less strawmanny objection.
If this post had just said "I think some people may feel strongly about AI x-risk for reasons that ultimately come down to some sort of emotional/physical pain whose origins have nothing to do with AI; here is why I think this, and here are some things you can do that might help find out whether you're one of them and to address the underlying problem if so", then I would consider it very valuable and deserving of attention and upvotes and whatnot. I think it's very plausible that this sort of thing is driving at least some AI-terror. I think it's very plausible that a lot of people on LW (and elsewhere) would benefit from paying more attention to their bodies.
... But that's not what this post does. It says you have to be "living in a[...] illusion" to be terrified by apocalyptic prospects. It says that if you are "feeling stressed" about AI risks then you are "hallucinating". It says that "what LW is actually about" is not actual AI risk and what to do about it (but, by implication, this alleged "game" of which Eliezer Yudkowsky is the "gamesmaster" that works by engaging everyone's fight-or-flight reactions to induce terror). It says that, for reasons beyond my understanding, it ...
I think this post is emblematic of the problem I have with most of Val's writing: there are useful nuggets of insight here and there, but you're meant to swallow them along with a metric ton of typical mind fallacy, projection, confirmation bias, and manipulative narrativemancy.
Elsewhere, Val has written words approximated by ~"I tried for years to fit my words into the shape the rationalists wanted me to, and now I've given up and I'm just going to speak my mind."
This is what it sounds like when you are blind to an important distinction. Trying to hedge magic things that you do not grok, engaging in cargo culting. If it feels like tediously shuffling around words and phrases that all mean exactly the same thing, you're missing the vast distances on the axis that you aren't perceiving.
The core message of "hey, you might well be caught up in a false narrative that is doing emotional work for you via providing some sense of meaning or purpose and yanking you around by your panic systems, and recognizing that fact can allow you to do anything else" is a good one, and indeed it's one that many LessWrongers need. It's even the sort of message that needs some kind of shock along wi...
I'm sure there are many people whose inner experience is like this. But, negative data point: Mine isn't. Not even a little. And yet, I still believe AGI is likely to wipe out humanity.
Seconded: mine also isn't.
Also, for what it's worth, I also don't think of myself as the kind of person to naturally gravitate towards the apocalypse/"saving the world" trope. From a purely narrative-aesthetic perspective, I much prefer the idea of building novel things, pioneering new frontiers, realizing the potential of humanity, etc, as opposed to trying to prevent disaster, reduce risk, etc. I am quite disappointed at reality for not conforming to my literary preferences.
It's interesting how people's responses can be so different here. I'm someone who gets pretty extreme anxiety from the x-risk stuff, at least when I'm not repressing those feelings.
Yep. That just means this wasn't written for you! I expect this wasn't written for a lot of (most?) people here.
I really wish that the post had been written in a way that let me figure out it wasn't for me sooner...
I think it would have saved a lot of time if the paragraph in bold had been at the top.
I came here to say something roughly like Jim's comment, but... I think what I actually want is grounding? Like, sure, you were playing the addictive fear game and now think you're out of it. But do you think I was? If you think there's something that differentiates people who are and aren't, what is it?
[Like, "your heart rate increases when you think about AI" isn't a definitive factor one way or another, but probably you could come up with a list of a dozen such indicators, and people could see which are true for them, and we could end up with population statistics.]
I think that at least the kinds of "Singularity-disrupted" people that Anna describes in "Reality-Revealing and Reality-Masking Puzzles" are in the fear game.
...Over the last 12 years, I’ve chatted with small hundreds of people who were somewhere “in process” along the path toward “okay I guess I should take Singularity scenarios seriously.” From watching them, my guess is that the process of coming to take Singularity scenarios seriously is often even more disruptive than is losing a childhood religion. Among many other things, I have seen it sometimes disrupt:
- People's belief that they should have rest, free time, some money/time/energy to spend on objects of their choosing, abundant sleep, etc.
- “It used to be okay to buy myself hot cocoa from time to time, because there used to be nothing important I could do with money. But now—should I never buy hot cocoa? Should I agonize freshly each time? If I do buy a hot cocoa does that mean I don’t care?”
- People's in-practice ability to “hang out”—to enjoy their friends, or the beach, in a “just being in the moment” kind of way.
- “Here I am at the beach like my to-do list told me to be, since I’m a good EA who is planning not to burn out. I’ve g
AFAICT from skimming, the object level of this post has a lot of overlap with my own algorithm. I limit engagement with x-risk to an amount that's healthy and sustainable for me. I keep non-x-risk clients in part to ground me in the real world. I'm into trauma processing and somatics. I think the fact that the people most scared of AGI risk are also the ones most scared of not developing AGI should raise some eyebrows. I treat "this feels bad" as a reason to stop without waiting for a legible justification.
And right now I'm using that last skill to not read this post. I wouldn't have even skimmed if I didn't think it was important to make this comment and have it not be totally uninformed. When I read this I feel awful, highly activated, and pushed into a helpless/freeze response. It instills the same "you can't trust yourself, follow this rigidity" that it's trying to argue against.
You can't fight fire with fire; getting out of a tightly wound x-risk trauma spiral involves grounding and building trust in yourself, not being scared into applying the same rigidity in the opposite direction.
I agree, but how sure are we that it's actually a fact?
[EDITED to add:] One not-particularly-sinister-or-embarrassing possible explanation, if it is true, is that both are driven by a single underlying issue: how capable does any given person expect AGI to be? Imagine someone in WW2 thinking about whether to develop nuclear weapons. It seems plausible that { people who think it's super-vital to do it because whoever does it will win the war for sure } and { people who think it's super-dangerous to do it because these weapons could do catastrophic damage } might be roughly the same set of people.
There's a class of things that could be described as losing trust in yourself and in your ability to reason.
For a mild example, a friend of mine who tutors people in math recounts that many people have low trust in their own mathematical reasoning. He often asks his students to speak out loud while solving a problem, to find out how they are approaching it. And some of them will say something along the lines of, "well, at this point it would make the most sense to me to [apply some simple technique], but I remember that when our teacher was demonstrating how to solve this, he used [some more advanced technique], so maybe I should instead do that".
The student who does that isn't trusting that the algorithm of "do what makes the most sense to me" will eventually lead to the correct outcome. Instead, they're trying to replace it with "do what I recall an authority figure doing, even if I don't understand why".
Now it could be that the simple technique is wrong to apply here, and the more advanced one is needed. But if the student had more self-trust and tried the thing that made the most sense to them, then their attempt to solve the problem using the simple approach might help ...
…it's not reasonable to ask [Val] to step out of a conversation on his own post…
If it's understood that I'm not replying because otherwise the contribution won't happen at all rather than because I have nothing to say about it, then I'm fine stepping back and letting you clarify what you mean. If that helps.
Neither up- nor down-voted; seems good for many people to hear, but also is typical mind fallacying / overgeneralizing. There's multiple things happening on LW, some of which involve people actually thinking meaningfully about AI risk without harming anyone. Also, by the law of equal and opposite advice: you don't necessarily have to work out your personal mindset so that you're not stressed out, before contributing to whatever great project you want to contribute to without causing harm.
This is beautifully written and points at what I believe to be deep truths. In particular:
Your brilliant mind can create internal structures that might damn well take over and literally kill you if you don't take responsibility for this process. You're looking at your own internal AI risk.
...
Most people wringing their hands about AI seem to let their minds possess them more and more, and pour more & more energy into their minds, in a kind of runaway process that's stunningly analogous to uFAI.
But I won't say more about this right now, mostly because I don't think I can do it justice with the amount of time and effort I'm prepared to invest writing this comment. On that note, I commend your courage in writing and posting this. It's a delicate needle to thread between many possible expressions that could rub people the wrong way or be majorly misinterpreted.
Instead I'll say something critical and/or address a potential misinterpretation of your point:
What is this sobriety you advocate for?
I'm concerned that sobriety might be equivocated with giving in to the cognitive bias toward naive/consensus reality. In a sense of the word, that is what "sobriety" is: a balance of cognitive h...
What is this sobriety you advocate for?
Ah, I'm really glad you asked. I tried to define it implicitly in the post but I was maybe too subtle.
There's this specific engine of addiction. It's the thing that distracts without addressing the cause, and becomes your habitual go-to for dealing with the Bad Thing. That creates a feedback loop.
Sobriety is with respect to an addiction. It means dropping the distraction and facing & addressing the thing you'd been previously distracting yourself from, until the temptation to distract yourself extinguishes.
Alcohol being a seed example (hence "sobriety"). The engine of alcoholism is complex, but ultimately there's an underlying thing (sometimes biochemical, but very often emotional) that's a sensation the alcoholic's mind/body system has identified as "intolerable — to be avoided". Alcohol is a great numbing agent and can create a lot of unrelated sensations (like dizziness), but it doesn't address (say) feelings of inadequacy.
So getting sober isn't just a matter of "don't drink alcohol", but of facing the things that drive the impulse to reach for the bottle. When you extinguish the cause, the effect evaporates on its own — modulo habits.
I...
As a personal datapoint: I think the OP's descriptions have a lot in common with how I used to be operating, and that this would have been tremendously good advice for me personally, both in terms of its impact on my personal wellness and in terms of its impact on whether I did good-for-the-world things or harmful things.
(If it matters, I still think AI risk is a decent pointer at a thingy in the world that may kill everyone, and that this matters. The "get sober" thing is a good idea both in relation to that and broadly AFAICT.)
Interesting take. I haven't seen this happen on AI, but I do know two people who have an environmentalism fear spiral thing. My diagnosis was very different: I think the people I know actually have anxiety, or panic attacks or similar for mental health reasons. The environmentalism serves as camouflage. Thought 1: "Why am I depressed/anxious/whatever when things in my life are pretty good?" Then, instead of Thought 2 being "Maybe I should talk to a friend/do something that might cheer me up/see a doctor", they instead get Thought 2: "Oh, it's because humanity is going to destroy the world and everything will be awful. Man, it's great that I am such a well-adjusted, big-picture, caring person that giant planet-scale forces that barely affect me personally have more impact on my emotional state than the actual day-to-day of my own life." Not only does the camo prevent them from addressing the real problem (OK, the environment is a real problem, but it's not the only problem, and it's not the problem they are suffering from at the moment), but it also weaponizes all kinds of media against them.
Upvoted because it's pointing at a real source of pain, and it's very good to talk about. But I suspect there's a lot of typical mind fallacy in the parts that sound more universal and less "here's what happened to and worked for me".
For me, I went through my doomsday worries in my teens and twenties, long before AI was anything to take seriously. Nuclear war or environmental collapse (or one causing the other) were assumed to be the forms of destruction to expect. Over the course of a decade or two, I was able to accept that, for me, "memento mori" was the root of the anxiety. I don't want to die, and I probably will anyway. There may be no actual outside meaning to my life, or by extension to anyone else's. And that doesn't prevent me from caring about other people (both individuals and groups, though not by any means equally), nor about my own experiences. These are important, even if they're only important to me (and, I hope, to some other humans).
I lived in "nihilistic materialist hell" between the ages of 5 (when it hit me what death meant) and ~10. It -- belief in the inevitable doom of myself and everyone I cared for and ultimately the entire universe to heat death -- was at times directly apprehended and completely incapacitating, and otherwise a looming unendurable awareness which for years I could only fend off using distraction. There was no gamemaster. I realized it all myself. The few adults I confided in tried to reassure me with religious and non-religious rationalizations of death, and I tried to be convinced but couldn't. It was not fun and did not feel epic in the least, though maybe if I'd discovered transhumanism in this period it would've been a different story.
I ended up getting out of hell mostly just by developing sufficient executive function to choose not to think of these things, and eventually to think of them abstractly without processing them as real on an emotional level.
Years later, I started actually trying to do something about it. (Trying to do something about it was my first instinct as well, but as a 5 yo I couldn't think of anything to do that bought any hope.)
But I think the machinery I...
I think this essay is blatantly manipulative bullshit written in a deliberately hypnotic style, that could be modified to target any topic anyone cares about.
It does strike me as a rather fully general counterargument, written in a deliberately obfuscatory/"woo" style. The focus on "listening to your body" seems like an obfuscation, it's an appeal to something deliberately put beyond measurement. This does seem like it could apply to anything anyone cares about (you're a Red Sox fan? You're addicted to the suffering, your body is telling you to stop, land on Earth and get sober!). If you have any reasons to disagree, that's coming from a place of addiction and you need to stop caring and presumably follow a similar life-path to OP because that is the only thing that works, everything else is a death-cult.
I don't buy it, to say the least, and I think it's only the social connections that people have to the OP that make anyone treat it charitably. People have been saying this since the earliest days of the discussion of this topic on the Internet; this fully general counterargument predates Eliezer Yudkowsky being appropriately pessimistic about AI.
I also think that the characterization that all rationalism comes from "disembodiment" is essentially an ableist slur. Using ableist slurs and appealing to t...
A helpful tool on the way to landing and getting sober is exercise. Exercise is essentially a displacement, like any of the other addictions, but it has the unique and useful feature that it processes out your chemicals, leaving you with fewer stress chemicals in circulation, and a refractory period before your body can make more.
Almost no matter your physical capabilities, there is something you can go do that makes you sweat and tires you out... and breaks the stress-focus-stress-focus cycle.
Edit: btw, this is great stuff, very good for this community to name it and offer a path away.
Related, but addressing a very different side of the AI risk mindset: https://idlewords.com/talks/superintelligence.htm
Poll: Does your personal experience resonate with what you take Val to be pointing at in this post?
Options are sub-comments of this parent.
Please vote by agree-voting (not upvoting) the answer that feels right to you. Please don't click the disagree button for options you disagree with, so that we can easily tabulate numbers by checking how many people have voted.
(Open to suggestions for better ways to set up polls, for the future.)
Personally, I sometimes have the opposite metacognitive concern: that I'm not freaking out enough about AI risk. The argument goes: if I don't have a strong emotional response, doesn't it mean I'm lying to myself about believing that AI risk is real? I even did a few exercises in which I tried to visualize either the doom or some symbolic representation of the doom in order to see whether it triggers emotion or, conversely, exposes some self-deception, something that rings fake. The mental state this triggered was interesting, more like a feeling of calm meditative sadness than panic. Ultimately, I think you're right when you say that if something doesn't threaten me on the timescale of minutes, it shouldn't send me into fight-or-flight. And, it doesn't.
I also tentatively agree that it feels like there's something unhealthy in the panicky response to Yudkowsky's recent proclamation of doom, and it might lead to muddled thinking. For example, it seems like everyone around here is becoming convinced of shorter and shorter timelines, without sufficient evidence IMO. But, I don't know whether your diagnosis is correct. Most of the discourse about AI risk around here doesn't produce any real progress on the problem. But, occasionally, it does. And I'm not sure whether the root of the problem is psychological/memetic (as you claim) or just that it's a difficult problem that only a few can meaningfully contribute to.
I am very conflicted about this post.
On the one hand it deeply resonates with my own observations. Many of my friends from the community seem to be stuck on the addictive loop of proclaiming the end of the world every time a new model comes out. I think it's even more dangerous, as it becomes a social activity: "I am more worried than you about the end of the world, because I am smarter/more agentic than you, and I am better at recognizing the risk that this represents for our tribe." gets implicitly tossed around in a cycle where the members keep trying to one-up each other. This only ends when their claims get so absurd as to say the world will end next month, but even this absurdity seems to keep getting eroded over time.
Like someone else said here in the comments, if I were reading about this issue in some unrelated doomsday cult in a book, I would immediately dismiss them as a bunch of lunatics. "How many doomsday cults have existed in history? Even if yours is based on at least some solid theoretical foundations, what happened to the previous thousands of doomsday cults that also thought they were, and were wrong?"
On the other hand I have to admit that the arguments in your pos...
Thank you so much for writing this. I wish I had this in 2018 when I was spiraling really badly. I feel like I only managed to escape from the game by sheer luck, and it easily could have killed me; hell, it HAS killed people. Not everyone manages to break in a way that breaks them out of the game rather than just obliterating them.
I wrote a story earlier this year about my attempts to process through a lot of this:
https://voidgoddess.org/2022/11/15/halokilled/
While some people might be doing intense thinking / writing, others like myself are distracting themselves via intense listening/perceiving/reading --- covering up their own thoughts and cares by taking in lots of information and sedating/overwhelming their emotions.
This seems... testable? Like, it's kind of the opposite message of Yudkowsky's "try harder" posts.
Have two groups work on a research problem. One is in doom mode, one is in sober mode. See which group makes more progress.
Uh, no.
Maybe I just genuinely care about not having terrible things happen to me and everyone else in the world? There's no game there, no broken addiction mechanisms inside.
I strong-downvoted this. [edit: I removed a statement about my feelings in reaction to this, that I feel was a little too much]
I just want to do what I can to keep the people I love from dying.
I think your diagnosis of the problem is right on the money, and I'm glad you wrote it.
As for your advice on what a person should do about this, it has a strong flavor of: quit doing what you're doing and go in the opposite direction. I think this is going to be good for some people but not others. Sometimes it's best to start where you are. Like, one can keep thinking about AI risk while also trying to become more aware of the distortions that are being introduced by these personal and collective fear patterns.
That's the individual level though, and...
I stand by what I said here: this post has a good goal but the implementation embodies exactly the issue it's trying to fight.
Valentine wrote an important message in a metaphorical language that will rub some people the wrong way (that includes me), but it seems like the benefit for those who need to hear it may exceed the annoyance of those who don't. Please let's accept it this way, and not nitpick the metaphors.
As a boring person, I would prefer to have a boring summary at the top, or maybe something like this:
If X is freaking you out, it is a fact about you, not about X. Read how this applies to the topic "AI will kill you"...
The longer boring version is the following: Human ...
I'm very conflicted about this post. On the one hand, many of its parts are necessary things for LWers to hear, and I'm getting concerned about the doom loop that seems to be forming a cult-like mentality around AI.
On the other hand, it also has serious issues in its framing, and I'm worried that the post is coming out of a mentality that isn't great as well.
Great post, thanks for writing it.
I try to reward posts I like with thoughtful commentary/disagreement, but there's a sense in which this post doesn't want to continue an existing spiraling thought pattern; it wants me to go out and do whatever I want to after putting that down.
After reading the other comments, I'll at least add in the datapoint that I have experienced a ton of "ruminating-about-AI-risk-strategy-as-escapism" in my life, and being able to not do that has been a pretty key step in actually making progress on the problem.
When I remember back to those times when I was trapped in it (not saying I don't still indulge from time to time), I think I would have found this post quite scary to engage with, because a lot of my social security was wrapped up in being the sort of person who would do that. I would be socially scared to put it down.
My solution was very rarely to introspect on it and fight the fight directly, as I feel like is a likely takeaway from this post; that's something I could only do when the force was weak and rival forces were strong. I think a basic element involved me becoming more socially stable in other ways. I think another basic element was noticing that my overall life strategy wasn't working and was instead hurting me. I took some more hardline strategies to deal with that, (more like Odysseus tying himself to the mast than Odysseus coming to internal peace with his struggle), and then I practiced other modes of b...
So on net, globally, I think it's actually worthwhile to let some potential Olympic athletes fail to realize their potential if it means we collectively have more psychic breathing room.
And AFAICT, getting more shared breathing room is the main hope we have for addressing the real thing.
I think this is your most general and surprising claim, and I'll hereby encourage you to write a post presenting arguments for it (ideally in a different style to the mildly psychoactive post above, but not necessarily). I'm not sure to what extent I agree with your claim (I currently veer from 20% to 80% as I think about it) and I have some hope that if you wrote out some of the reasons that led to you believing it, it would help me make up my own mind a bit better.
A summary of the above in a form that's easier to evaluate would be helpful. Richard's comment does this in part, but there may be more in the post not covered by the comment.
I would usually assume a post written like this has little value to be mined, but others in comments and in upvote/downvote counts seem to disagree.
There seems to be some real wisdom in this post but given the length and title of the post, you haven't offered much of an exit -- you've just offered a single link to a youtube channel for a trauma healer. If what you say here is true, then this is a bit like offering an alcoholic friend the sum total of one text message containing a single link to the homepage of alcoholics anonymous -- better than nothing, but not worthy of the bombastic title of this post.
The truly interesting thing here is that I would agree unequivocally with you if you were talking about any other kind of 'cult of the apocalypse'.
These cults don't have to be based on religious belief in the old-fashioned sense, in fact, most cults of this kind that really took off in the 20th and 21st century are secular.
Since around the late 1800s, there has been a certain type of student that externalizes their (mostly his) unbearable pain and dread, their lack of perspective and meaning in life, into 'the system', and throws themselves into the noble ca...
Thank you for writing this, Valentine. It is an important message and I am really glad someone is saying it.
I first got engaged with the community when I was in vulnerable life circumstances, and suffered major clinical distress fixated around many of the ideas encountered here.
To be clear I am not saying rationalist culture was the cause of my distress, it was not. I am sharing my subjective experience that when you are silently screaming in internal agony, some of the ideas in this community can serve as a catalyst for a psychotic breakdown.
I'm a little surprised that doomerism could take off like this, dominate one's thoughts, and yet fail to create resentment and anger toward its apparent cause. Is that something that was absent for you or was it not relevant to discuss here?
I wonder:
How bad are things, really? I'm not part of EA/Rat/AI risk IRL, so I don't have first-hand experience. Are people actually having mental breakdowns over the control problem? Some of the comments here seem to imply that people are actually experiencing depersonalization and anxiety so bad it's affecting their work performance specifically because of AI concerns. And not just them, but multiple people they work with. Is the culture at AI alignment orgs really that bad?
I think this is maybe implied by Aiyen's comment, but to highlight an element here:
This way of thinking doesn't have you trade sanity for slight xrisk decrease.
It has you trading sanity for perceived slight xrisk decreases.
If you start making those trades, over time your ability to perceive what's real decays.
If there's anything that has anything whatsoever like agency and also benefits from you feeding your sanity into a system like this, it'll exploit you right into a Hell of your own making.
Any time you're caught in a situation that asks for you to make a trade like this, it's worth asking if you've somehow slipped into the thrall of an influence like this.
It's way more common than you might think at first.
I agree with everything, though this is just a very long way to say the https://en.wikipedia.org/wiki/Serenity_Prayer
I think you are pointing at an important referent. There are probably a lot of people who will benefit from reading this post and thus I'm glad that you wrote it. That said, you appear to have written it in a deliberately confusing manner. You probably have your reasons. Maybe you believe that this way would be better for the people you are trying to help. I'm not an expert in Lacanianism, but I think this is wrong, both ethically and epistemologically. There are also a lot of people who will misunderstand this post and for whom reading it will c...
Strongly upvoted, I think that the point about emotionally charged memeplexes distorting your view of the world is very valuable.
Ideally reviews would be done by people who read the posts last year, so they could reflect on how their thinking and actions changed. Unfortunately, I only discovered this post today, so I lack that perspective.
Posts relating to the psychology and mental well being of LessWrongers are welcome and I feel like I take a nugget of wisdom from each one (but always fail to import the entirety of the wisdom the author is trying to convey.)
The nugget from "Here's the exit" that I wish I had read a year ago is "If your body's emergency mobilization sys...
I immediately recognize the pattern that's playing out in this post and in the comments. I've seen it so many times, in so many forms.
Some people know the "game" and the "not-game", because they learned the lesson the hard way. They nod along, because to them it's obvious.
Some people only know the "game". They think the argument is about "game" vs "game-but-with-some-quirks", and object because those quirks don't seem important.
Some people only know the "not-game". They think the argument is about "not-game" vs "not-game-but-with-some-quirks", and ob...
Thank you for writing this post, I think this is a useful framing of this problem. For me personally, the doom game is fun; imho I have more motivation to do things and I become more self-confident. (If it ends, what worse could happen?) But that's for me, with my socially isolated Math/ComSci/CosHo background.
For others, I don't think it's a good game. I kinda noticed the tons of psychotic breakdowns around the field and, like, that's bad, but I could not have articulated why it was bad.
And even for me, I might kinda overshoot with the whole information hazard share-or-not thinking. It's better if you're in charge of the game and not let the doom game play you.
Strong agree with TekhneMakre's comment.
Purely on Valentine's own professed standards: Of all the ways to "snap someone out of it", why pick one that seems the most like brainwashing? If the FBI needs to un-brainwash a dangerous cult member, do they gaslight them? Do they do a paternalistic "if you feel angry, that means I'm right" maneuver? Do they say "well I'm not too concerned if you think I'm right" to the patient?
(Also... FWIW, the most doomsday-cultish emotionally-fraught posts I've seen in the rationality community are, by percentage of posts, most...
I very much agree that actions motivated by fear tend to have bad outcomes. Fear has a subtle influence (especially if unconscious) on what types of thoughts we have and, as a consequence, on what kinds of solutions we eventually arrive at.
And I second the observation that many people working on AI risk seem to me motivated by fear. I also see many AI risk researchers who are grounded, playful, and work on AI safety not because they think they have to, but because they simply believe it's the best thing they can do. I wish there were more of the latter, bec...
I think this post is a good one, even though personally I'm not hung up on AI doom; I think this area of research is cool and interesting, which is a rather different emotion from fear.
My immediate thought is that Cognitive Behavioral Therapy concepts might be relevant here, as it sounds like a member of the family of anxiety disorders that CBT is designed to treat.
And also, given this is a group phenomenon rather than a purely individual one, there's something of the apocalyptic religious cult dynamic going on.
one thing that can be kind of irritating about CBT ...
If your body's emergency mobilization systems are running in response to an issue, but your survival doesn't actually depend on actions on a timescale of minutes, then you are not perceiving reality accurately.
You are locked in a room. You are going to die of thirst in a few days. The door has a combination lock. You know the password is 5 digits (0 to 9). If it takes you one second to try each combination, it's going to take you about 27.8 hours to try all the combinations (so half that on average to find the right one). Your survival doesn't depend on your action...
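(For reference, a quick check of that figure, assuming exactly one attempt per second with no breaks:)

$$10^5 \text{ combinations} \times 1\,\tfrac{\text{s}}{\text{combination}} = 100{,}000\ \text{s} \approx 27.8\ \text{hours}$$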
Unless I'm very much mistaken, emergency mobilization systems refers to autonomic responses like a pounding heartbeat, heightened subjective senses, and other types of physical arousal; i.e. the things your body does when you believe someone or something is coming to kill you with spear or claw. Literal fight or flight stuff.
In both examples you give there is true danger, but your felt bodily sense doesn't meaningfully correspond to it; you can't escape or find the bomb by being ready for an immediate physical threat. This is the error being referred to. In both cases the preferred state of mind is resolute problem-solving, and an inability to register a felt sense of panic will likely reduce your ability to get to such a state.
This short post is astounding because it succinctly describes, and prescribes, how to pay attention, to become grounded when a smart and sensitive human could end up engulfed in doom. The post is insightful and helpful to any of us in search of clarity and coping.
(…although I doubt he consciously intended it that way!)
I'm pretty sure Eliezer's "Death With Dignity" post was an April Fool's joke.
This is the basic core of addiction. Addictions are when there's an intolerable sensation but you find a way to bear its presence without addressing its cause. The more that distraction becomes a habit, the more that's the thing you automatically turn to when the sensation arises. This dynamic becomes desperate and life-destroying to the extent that it triggers a red queen race.
I doubt that addiction requires some intolerable sensation that you need to drown out. I'm pretty confident it's mostly habits/feedback loops and sometimes physical dependence.
I think calling things a 'game' makes sense to lesswrongers, but just seems unserious to non lesswrongers.
Look, I can go into mania like anyone else here probably can. My theories say that you can't be genius-level without it and that it comes with emotional sensitivity as well. Of course if you don't believe you have empathy, you won't, but you still have it.
I am not an AI doom-and-gloomist. I adhere to Gödel, to Heisenberg, and to Georgeff. And since we haven't solved the emotional/experience part of AI, there is no way it can compete with humans creatively, period. Faster, yes. Better, no. Objectively better? Not at all.
However, if my theory of t...
There's a kind of game here on Less Wrong.
It's the kind of game that's a little rude to point out. Part of how it works is by not being named.
Or rather, attempts to name it get dissected so everyone can agree to continue ignoring the fact that it's a game.
So I'm going to do the rude thing. But I mean to do so gently. It's not my intention to end the game. I really do respect the right for folk to keep playing it if they want.
Instead I want to offer an exit to those who would really, really like one.
I know I really super would have liked that back in 2015 & 2016. That was the peak of my hell in rationalist circles.
I'm watching the game intensify this year. Folk have been talking about this a lot. How there's a ton more talk of AI here, and a stronger tone of doom.
I bet this is just too intense for some folk. It was for me when I was playing. I just didn't know how to stop. I kind of had to break down in order to stop. All the way to a brush with severe depression and suicide.
And it also ate parts of my life I dearly, dearly wish I could get back.
So, in case this is audible and precious to some of you, I'd like to point a way to ease.
The Apocalypse Game
The upshot is this:
You have to live in a kind of mental illusion to be in terror of the end of the world.
Illusions don't look on the inside like illusions. They look like how things really are.
Part of how this one does the "daughter's arm" thing is by redirecting attention to facts and arguments.
None of this is relevant.
I'm pointing at something that comes before these thoughts. The thing that fuels the fixation on the worldview.
I also bet this is the thing that occasionally drives some people in this space psychotic, depressed, or into burnout.
The basic engine is:
In this case, the search for truth isn't in service to seeing reality clearly. The logic of economic races to the bottom, orthogonality, etc. might very well be perfectly correct.
But these thoughts are also (and in some cases, mostly) in service to the doomsday meme's survival.
But I know that thinking of memes as living beings is something of an ontological leap in these parts. It's totally compatible with the LW memeplex, but it seems to be too woo-adjacent and triggers an unhelpful allergic response.
So I suggested a reframe at the beginning, which I'll reiterate here:
Your body's fight-or-flight system is being used as a power source to run a game, called "OMG AI risk is real!!!"
And part of how that game works is by shoving you into a frame where it seems absolutely fucking real. That this is the truth. This is how reality just is.
And this can be fun!
And who knows, maybe you can play this game and "win". Maybe you'll have some kind of real positive impact that matters outside of the game.
But… well, for what it's worth, as someone who turned off the game and has reworked his body's use of power quite a lot, it's pretty obvious to me that this isn't how it works. If playing this game has any real effect on the true world situation, it's to make the thing you're fearing worse.
(…which is exactly what's incentivized by the game's design, if you'll notice.)
I want to emphasize — again — that I am not saying that AI risk isn't real.
I'm saying that really, truly orienting to that issue isn't what LW is actually about.
That's not the game being played here. Not collectively.
But the game that is being played here absolutely must seem on the inside like that is what you're doing.
Ramping Up Intensity
When Eliezer rang the doom bell, my immediate thought was:
I mean this with respect and admiration. It's very skillful. Eliezer has incredible mastery in how he weaves terror and insight together.
And I don't mean this at all to dismiss what he's saying. Though I do disagree with him about overall strategy. But it's a sincere disagreement, not an "Oh look, what a fool" kind of thing.
What I mean is, it's a masterful move of making the game even more awesome.
(…although I doubt he consciously intended it that way!)
I remember when I was in the thick of this AI apocalypse story, everything felt so… epic. Even questions of how CFAR dealt with garbage at its workshops seemed directly related to whether humanity would survive the coming decades. The whole experience was often thrilling.
And on the flipside, sometimes I'd collapse. Despair. "It's too much" or "Am I even relevant?" or "I think maybe we're just doomed."
These are the two sort of built-in physiological responses to fight-or-flight energy: activation, or collapse.
(There's a third, which is a kind of self-holding. But it has to be built. Infants aren't born with it. I'll point in that direction a bit later.)
In the spirit of feeling rationally, I'd like to point out something about this use of fight-or-flight energy:
If your body's emergency mobilization systems are running in response to an issue, but your survival doesn't actually depend on actions on a timescale of minutes, then you are not perceiving reality accurately.
Which is to say: If you're freaked out but rushing around won't solve the problem, then you're living in a mental hallucination. And it's that hallucination that's scaring your body.
Again, this isn't to say that your thoughts are incorrectly perceiving a future problem.
But if it raises your blood pressure or quickens your breath, then you haven't integrated what you're seeing with the reality of your physical environment. Where you physically are now. Sitting here (or whatever) reading this text.
So… folk who are wringing their hands and feeling stressed about the looming end of the world via AI?
Y'all are hallucinating.
If you don't know what to do, and you're using anxiety to power your minds to figure out what to do…
…well, that's the game.
The real thing doesn't work that way.
But hey, this sure is thrilling, isn't it?
As long as you don't get stuck in that awful collapse space, or go psychotic, and join the fallen.
But the risk of that is part of the fun, isn't it?
(Interlude)
A brief interlude before I name the exit.
I want to emphasize again that I'm not trying to argue anyone out of doing this intense thing.
The issue is that this game is way, way out of range for lots of people. But some of those people keep playing it because they don't know how to stop.
And they often don't even know that there's something on this level to stop.
You're welcome to object to my framing, insist I'm missing some key point, etc.
Frankly I don't care.
I'm not writing this to engage with the whole space in some kind of debate about AI strategy or landscape or whatever.
I'm trying to offer a path to relief to those who need it.
That no, this doesn't have to be the end of the world.
And no, you don't have to grapple with AI to sort out this awful dread.
That's not where the problem really is.
I'm not interested in debating that. Not here right now.
I'm just pointing out something for those who can, and want to, hear it.
Land on Earth and Get Sober
So, if you're done cooking your nervous system and want out…
…but this AI thing gosh darn sure does look too real to ignore…
…what do you do?
My basic advice here is to land on Earth and get sober.
The thing driving this is a pain. You feel that pain when you look out at the threat and doom of AI, but you cover it up with thoughts. You pretend it's about this external thing.
I promise, it isn't.
I know. I really do understand. It really truly looks like it's about the external thing.
But… well, you know how when something awful happens and gets broadcast (like the recent shooting), some people look at it with a sense of "Oh, that's really sad" and are clearly impacted, while others utterly flip their shit?
Obviously the difference there isn't in the event, or in how they heard about it. Maybe sometimes, but not mostly.
The difference is in how the event lands for the listener. What they make it mean. What bits of hidden pain are ready to be activated.
You cannot orient in a reasonable way to something that activates and overwhelms you this way. Not without tremendous grounding work.
So rather than believing the distracting thoughts that you can somehow alleviate your terror and dread with external action…
…you've got to stop avoiding the internal sensation.
When I talked earlier about addiction, I didn't mean that just as an analogy. There's a serious withdrawal experience that happens here. Withdrawal from an addiction is basically a heightening of the intolerable sensation (along with having to fight mechanical habits of seeking relief via the addictive "substance").
So in this case, I'm talking about all this strategizing, and mental fixation, and trying to model the AI situation.
I'm not saying it's bad to do these things.
I'm saying that if you're doing them as a distraction from inner pain, you're basically drunk.
You have to be willing to face the awful experience of feeling, in your body, in an inescapable way, that you are terrified.
I sort of want to underline that "in your body" part a bazillion times. This is a spot I keep seeing rationalists miss — because the preferred recreational drug here is disembodiment via intense thinking. You've got to be willing to come back, again and again, to just feeling your body without story. Notice how you're looking at a screen, and can feel your feet if you try, and are breathing. Again and again.
It's also really, really important that you do this kindly. It's not a matter of forcing yourself to feel what's present all at once. You might not even be able to find the true underlying fear! Part of the effect of this particular "drug" is letting the mind lead. Making decisions based on mental computations. And kind of like minds can get entrained to porn, minds entrained to distraction via apocalypse fixation will often hide their power source from their host.
(In case that was too opaque for you just yet, I basically just said "Your thoughts will do what they can to distract you from your true underlying fear." People often suddenly go blank inside when they look inward this way.)
So instead of trying to force it all at once, it's a matter of titrating your exposure. Noticing that AI thoughts are coming up again, and pausing, and feeling what's going on in your body. Taking a breath for a few seconds. And then carrying on with whatever.
This is slow work. Unfortunately your "drug" supply is internal, so getting sober is quite a trick.
But this really is the exit. As your mind clears up… well, it's very much like coming out of the fog of a bender and realizing that no, really, those "great ideas" you had just… weren't great. And now you're paying the price on your body (and maybe your credit card too!).
There are tons of resources for this kind of direction. It gets semi-independently reinvented a lot, so there are lots of different names and frameworks for this. One example that I expect to be helpful for at least some LWers who want to land on Earth & get sober is Irene Lyon, who approaches this through a "trauma processing" framework. She offers plenty of free material on YouTube. Her angle is in the same vein as Gabor Maté and Peter Levine.
But hey, if you can feel the thread of truth in what I'm saying and want to pursue this direction, but you find you can't engage with Irene Lyon's approach, feel free to reach out to me. I might be able to find a different angle for you. I want anyone who wants freedom to find it.
But… but Val… what about the real AI problem?!
Okay, sure. I'll say a few words here.
…although I want to point out something: The need to have this answered is coming from the addiction to the game. It's not coming from the sobriety of your deepest clarity.
That's actually a complete answer, but I know it doesn't sound like one, so I'll say a little more.
Yes, there's a real thing.
And yes, there's something to do about it.
But you're almost certainly not in a position to see the real thing clearly or to know what to do about it.
And in fact, attempts to figure the real thing out and take action from this drunk gamer position will make things worse.
(I hesitate to use the word "worse" here. That's not how I see it. But I think that's how it translates to the in-game frame.)
This is what Buddhists should have meant (and maybe did/do?) when they talk about "karma". How deeply entangled in this game is your nervous system? Well, when you let that drive how you interact with others, their bodies get alarmed in similar ways, and they get more entangled too.
Memetic evolution drives how that entangling process happens on large scales. When that becomes a defining force, you end up with self-generating pockets of Hell on Earth.
This recent thing with FTX is totally an example. Totally. Threads of karma/trauma/whatever getting deeply entangled and knotted up and tight enough that large-scale flows of collective behavior create an intensely awful situation.
You do not solve this by trying harder. Tugging the threads harder.
In fact, that's how you make it worse.
This is what I meant when I said that actually dealing with AI isn't the true game in LW-type spaces, even though it sure seems like it on the inside.
It's actually helpful to the game for the situation to constantly seem barely maybe solvable but to have major setbacks.
And this really can arise from having a sincere desire to deal with the real problem!
But that sincere desire, when channeled into the Matrix of the game, doesn't have any power to do the real thing. There's no leverage.
The real thing isn't thrilling this way. It's not epic.
At least, not any more epic than holding someone you love, or taking a stroll through a park.
To oversimplify a bit: You cannot meaningfully help with the real thing until you're sober.
Now, if you want to get sober and then you roll up your sleeves and help…
…well, fuck yeah! Please. Your service would be a blessing to all of us. Truly. We need you.
But it's gotta come from a different place. Tortured mortals need not apply.
And frankly, the reason AI in particular looks like such a threat is because you're fucking smart. You're projecting your inner hell onto the external world. Your brilliant mind can create internal structures that might damn well take over and literally kill you if you don't take responsibility for this process. You're looking at your own internal AI risk.
I hesitate to point that out because I imagine it creating even more body alarm.
But it's the truth. Most people wringing their hands about AI seem to let their minds possess them more and more, and pour more & more energy into their minds, in a kind of runaway process that's stunningly analogous to uFAI.
The difference is, you don't have to make the entire world change in order to address this one.
You can take coherent internal action.
You can land on Earth and get sober.
That's the internal antidote.
It's what offers relief — eventually.
And from my vantage point, it's what leads to real hope for the world.