Having all known life concentrated on a single planet is an existential risk. So we should try to spread out, right? As soon as possible?
Yet, if we had advanced civilizations on two planets, that would be two places for unfriendly AI to originate. If, as many people here believe, a single failed trial ruins the universe, you want to have as few places trying it as possible. So you don't want any space colonization until after AI is developed.
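To make the implicit model explicit (my own simplification, assuming independent development sites with equal per-site probability p of producing the fatal trial), the chance of ruin grows with the number of sites n:

$$ P(\text{ruin}) = 1 - (1 - p)^n \approx np \quad \text{for small } p, $$

so on this toy model, fewer sites means fewer chances for the single failure that ruins everything.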
If you apply that logic to countries, you would want as few industrialized nations as possible until AAI (After AI). So instead of trying to help Africa, India, China, and the Middle East develop, you should be trying to suppress them. In fact, if you really believed the calculations I commonly see used in these circles about the probability of unfriendly AI and its consequences, you should be trying to exterminate human life outside of your developed country of choice. Failing to do so would be immoral.
And if you apply it within the USA, you need to pick one of MIT, Stanford, and Carnegie Mellon, and burn the other two to the ground.
Of course, doing this will slow the development of AI. But that's a good thing, if UFAI is the most likely outcome and has zero utility.
In fact, if slowing development is good, probably the best thing of all is just to destroy civilization and stop development completely.
Do you agree with any of this? Is there a point where you think it goes too far? If so, say where it goes too far and explain why.
I see two main flaws in the reasoning.
- Categorizing outcomes as "FAI vs. UFAI", with no other possible outcomes recognized, no gradations within either category, and zero utility assigned to UFAI.
- Failing to consider scenarios in which multiple AIs provide a balance of power. The purpose of this balance of power may not be to keep humans in charge; it may be to place the AIs in an AI society in which human values remain worthwhile.
- ADDED, after being reminded of this by Vladimir Nesov: Re. the final point, stopping completely guarantees Earth life will eventually be eliminated; see his comment below for elaboration.
ADDED: A number of the comments so far imply that the first AI built will necessarily FOOM immediately. FOOM is an appealing argument; I've argued in favor of it myself. But it is not a theorem. I don't care who you are; you do not know enough about AI and its future development to bet the future of the universe on your intuition that non-FOOMing AI is impossible. You may even think FOOM is the default case; that does not make it the only case to consider. Here, even a 1% chance of a non-FOOM AI, multiplied by astronomical differences in utility, could justify terrible present disutility.
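As a toy calculation (my own numbers, chosen only to show the scale of the argument): suppose the probability of a non-FOOM scenario in which these interventions matter is 0.01, and the utility gap between the good and bad outcomes is astronomical, say $10^{30}$ in some fixed units. Then the expected stake is

$$ 0.01 \times 10^{30} = 10^{28}, $$

which dwarfs any present disutility on the order of, say, $10^{10}$, so the expected-utility comparison can still favor paying the present cost.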
The number of distinct locations where humans are active isn't what affects the chance of uFAI arising; what matters is the number of people programming things that could potentially become one. How those people are spread out isn't very relevant if fooming (regardless of the exact definition of foom) is a serious worry.
Different municipalities have different regulatory regimes and different attitudes. If the US develops a cautious approach to AI, and China has a "build it before the damn Yankees do" approach, that's significant.