Luke Muehlhauser writes:
Over the years, my colleagues and I have spoken to many machine learning researchers who, perhaps after some discussion and argument, claim to think there’s a moderate chance — a 5%, or 15%, or even a 40% chance — that AI systems will destroy human civilization in the next few decades. However, I often detect what Bryan Caplan has called a “missing mood”; a mood they would predictably exhibit if they really thought such a dire future was plausible, but which they don’t seem to exhibit. In many cases, the researcher who claims to think that medium-term existential catastrophe from AI is plausible doesn’t seem too upset or worried or sad about it, and doesn’t seem to be taking any specific actions as a result.
Not so with Elon Musk. Consider his reaction (here and here) when podcaster Joe Rogan asks about his AI doomsaying. Musk stares at the table, and takes a deep breath. He looks sad. Dejected. Fatalistic. Then he says:
Following the link to Bryan Caplan's article, the idea of a missing mood seems to amount to two ideas:

1. If someone supports a policy that carries serious costs (Caplan's example: hawks urging military action), they should visibly grieve those costs; if that mood is missing, their stated position is suspect.
2. If someone opposes a policy that carries large benefits (Caplan's example: restricting immigration), they should visibly regret forgoing those benefits; if that mood is missing, likewise.
These are, of course, two sides of the same coin, and they share the same problem: you're assuming that the first half of your position (the costs in case 1, the benefits in case 2) is not only correct, but so obviously correct that nobody can reasonably disagree with it, so if someone acts as if they don't believe it, there must be some other explanation. This is better than assuming your entire position is correct, but it's still poor epistemic hygiene. For instance, both the military hawks example (case 1) and the immigration example (case 2) fail if your opponent simply doesn't value non-Americans very much: in their eyes the costs, or the benefits, are correspondingly small.
Beware of starting with disagreement and concluding insincerity.