Luke Muehlhauser writes:
Over the years, my colleagues and I have spoken to many machine learning researchers who, perhaps after some discussion and argument, claim to think there’s a moderate chance — a 5%, or 15%, or even a 40% chance — that AI systems will destroy human civilization in the next few decades. However, I often detect what Bryan Caplan has called a “missing mood”; a mood they would predictably exhibit if they really thought such a dire future was plausible, but which they don’t seem to exhibit. In many cases, the researcher who claims to think that medium-term existential catastrophe from AI is plausible doesn’t seem too upset or worried or sad about it, and doesn’t seem to be taking any specific actions as a result.
Not so with Elon Musk. Consider his reaction (here and here) when podcaster Joe Rogan asks about his AI doomsaying. Musk stares at the table and takes a deep breath. He looks sad. Dejected. Fatalistic. Then he says:
To an individual human, death by AI (or by climate catastrophe) is worse than a "natural" death from old age only to the extent that it comes sooner, and perhaps is more violent. To someone who cares about others, the large number of looming deaths is pretty bad. To someone who cares about the species, or about the quantity of sentient individuals, AI is likely to reduce total utility by quite a bit.
To someone who loves only abstract intelligence and quantifies it by some metric I don't quite get, AI may be just as good as (or better than) people.