I wonder whether “Person P is freaking out” might have mixed effects: Maybe for people who were previously inclined to like P and/or agree with P, it would move them in the right direction, and for people who were previously inclined to dislike P and/or disagree with P, it would move them in the wrong direction.
Like, I think that I feel disinclined to listen to (person who I think is unreasonable), but that I feel much more disinclined to listen to (person who I think is unreasonable and is being very rant-y and emotional about it).
It also depends on your target audience. (Which is basically what you said, just in slightly different words.) If you want to get Serious Researchers to listen to you and they aren't already within the sub-sub-culture that is the rationality community and its immediate neighbors, then in many (most?) cases ranting and freaking out is probably going to be actively counterproductive to your cause. Same if you're trying to build a reputation as a Serious Researcher, with a chance that decision makers who listen to Serious Researchers might listen to you. On the other hand, if your target audience is people who already trust you or who are already in your immediate sub-sub-tribe, and you don't mind risking being labeled a crackpot by the wider world, then I can see why visibly freaking out could be helpful.
[Also, it goes without saying that not everybody agrees with Eliezer's probability-of-doom estimates. Depending on your relative probabilities it might make perfect sense to work in a random startup, have a 401k, not visibly freak out, etc.]
If you flail because it convinces people how serious the problem is, then in the long term, you're Goodharting flailing; flailing ceases to be an indicator of how serious the problem is if you do it for strategic reasons.
Yeah, I think this is a large part of the problem with flailing. Also, the issue where you flail so much that you don't spend enough time seriously strategizing and acting on the object-level.
That said, I think the OP is a good argument that some versions of flailing can be good. In particular, I already thought that candor, blurting, and venting are good, and these can overlap with flailing.
Can confirm, Eliezer's recent posts (especially "AGI Ruin") have kinda "woken me up" to the urgency of the problem, and how far behind we are relative to where we should (could?) be.
CFAR has a notion of "flailing". Alone on a desert island, if you injure yourself, you'll probably think fast about how to solve the problem. Whereas injuring yourself around friends, you're more likely to "flail": you'll lean into things that demonstrate your pain/trouble to others.
Flailing can in fact be counterproductive. I will admit that if you're a MIRI engineer, and you start to flail in front of the other MIRI engineers, that is probably in poor taste. Everyone at MIRI already knows you are in trouble and is doing their best. Many of them are as scared as you are, and are trying to suppress their own instinct to flail.
But, just like crying on the floor, flailing does in fact serve legitimate personal and social purposes. People evolved shared abilities to signal distress for a reason. Before March of this year, on the rare occasion I saw the Utterly Horrifying Situation explicitly acknowledged, the guy on my laptop screen devoting his life to the problem would always follow by saying: "Don't Panic. Do not alert the other people next to you that you might be panicking if you are. Don't honestly convey how worried you are about the problem. Deliver your argument in reasoned, unemotional tones, like you are considering a hypothetical, not like you are explaining an asteroid is headed towards earth. If you must break down, do it quietly and in a separate room, because otherwise people might not think you're Rational and by extension your community is not Rational."
We are not on a desert island. A large fraction of the basic coordination problem stems from the complete absence of public and institutional understanding of how serious the situation has gotten. It clearly wouldn't solve everything, but a world in which most people understand that sans serious coordination it's going to end seems closer to organizing a solution than one where almost nobody does. And if you're a concerned citizen trying to alert others, you should realize there's a distinct problem with suppressing your emotions.
That problem is: people hear philosophical-sounding arguments as to why we might be doing something very bad all the time. Normies take a brief look at these arguments, and then check out the emotional state and behavior of the person delivering them. The way you act and speak telegraphs to the other person how to react to the problem and the degree to which you yourself care about it.
If you are a programmer at FaceGoog who really does sound like they read something very scary on the internet, but puts 40% of their income into their 401k, bystanders who know these facts about you will probably not do anything about your pet doomsday scenario either, no matter how solid your reasoning is, because you are not behaving how they'd expect someone concerned about doomsday to behave. If you're an AI safety researcher who has a job working on the problem but you explain it to others like you're reading a weather forecast, they will probably be even less inclined to believe you, because people in the Real World will assume you should have an emotional attachment to the significance of your pet cause and have had lots of time to justify it to yourself. "This just sounds like a LARP; why aren't you doing anything about it, then?" is by far the most common unmanageable reaction I get after an hour-long conversation trying to raise the alarm with people I know, and I have little to no good explanation except "I dunno, maybe I'm a terrible/illogical person".
In fact, since the "MIRI announces new "Death With Dignity" strategy" post, I've had a bit of time to reflect on why I've been working on a miscellaneous startup for years, even though I would've told anybody who asked me that the most likely outcome for myself is death before 30 years old, and why I suddenly now feel like I have to wake the fuck up. The reason I've had these misgivings is probably that Eliezer started flailing. I don't think "Death with Dignity" is actually a helpful way, psychologically, of looking at the problem, but the post highlighted Eliezer's internal world model to me in a second-order way that flipped a switch. It was the way he communicated, how his tone was consistent with what my hindbrain expects from the alarmed, not the content, that started to get me to think more seriously about what on Earth I was trying to do with my life. Same thing with the AGI Ruin post. As Zvi puts it: "One could also propose making it not full of rants, but I don't think that would be an improvement. The rants are important. The rants contain data. They reveal Eliezer's cognitive state and his assessment of the state of play. Not ranting would leave important bits out and give a meaningfully misleading impression."
There's probably a line beyond which panicking or showing too much emotion about how the world is ending makes you look crazy, and where that line sits certainly varies depending on how well you personally know whoever you're talking to. However, on this particular issue it seems very rare, in practice, for people to cross that line. On the contrary, it seems like most people pretend to be calm or evade the actual subject of pending doom to a point that completely deflates their attempt at mobilizing or convincing others. Eliezer@pre-2022 will give a completely dry explanation of the problem or the factors surrounding the problem, and no matter how safely inside the Overton window he tries to be, anybody who dislikes him will just make up some psychoanalytic nonsense about how he's doing it because he wants to validate the importance of his IQ. So unless you sound incoherent to the people who were going to believe you anyway, I think you should be honestly expressing how you feel and the degree to which the problem concerns you. If you believe with 90% probability that everyone is going to die in the next fifteen years, and no one seems to understand that after talking with you about the problem, and it's not your deliberate intention to hide your beliefs, then you're not being explicit enough.