Having all known life on Earth concentrated on a single planet is an existential risk. So we should try to spread out, right? As soon as possible?
Yet, if we had advanced civilizations on two planets, that would be two places for unfriendly AI to originate. If, as many people here believe, a single failed trial ruins the universe, you want to have as few places trying it as possible. So you don't want any space colonization until after AI is developed.
If you apply that logic to countries, you would want as few industrialized nations as possible until AAI (After AI). So instead of trying to help Africa, India, China, and the Middle East develop, you should be trying to suppress them. In fact, if you really believed the calculations I commonly see used in these circles about the probability of unfriendly AI and its consequences, you should be trying to exterminate human life outside your developed country of choice. Failing to do so would be immoral.
And if you apply it within the USA, you need to pick one of MIT, Stanford, and Carnegie Mellon, and burn the other two to the ground.
Of course, doing this will slow the development of AI. But that's a good thing, if UFAI is the most likely outcome and has zero utility.
In fact, if slowing development is good, probably the best thing of all is just to destroy civilization and stop development completely.
Do you agree with any of this? Is there a point where you think it goes too far? If so, say where it goes too far and explain why.
I see two main flaws in the reasoning.
- Categorizing outcomes as "FAI vs. UFAI", with no other possible outcomes recognized, no gradations within either category, and zero utility assigned to UFAI.
- Failing to consider scenarios in which multiple AIs provide a balance of power. The purpose of such a balance may not be to keep humans in charge; it may be to place the AIs in an AI society in which human values will be worthwhile.
- ADDED, after being reminded of this by Vladimir Nesov: Regarding the final point, stopping development completely guarantees that Earth life will eventually be eliminated; see his comment below for elaboration.
ADDED: A number of the comments so far imply that the first AI built will necessarily FOOM immediately. FOOM is an appealing argument; I've argued in favor of it myself. But it is not a theorem. I don't care who you are; you do not know enough about AI and its future development to bet the future of the universe on your intuition that a non-FOOMing AI is impossible. You may even think FOOM is the default case; that does not make it the only case to consider. In that case, even a 1% chance of a non-FOOMing AI, multiplied by astronomical differences in utility, could justify terrible present disutility.
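To make the expected-value step at the end of that paragraph explicit, here is a rough sketch; the symbols are mine, not the post's:

```latex
% Illustrative symbols (not from the original post):
%   C_now   = present disutility of the drastic measures contemplated above
%   Delta_U = astronomical utility difference that hinges on how the non-FOOM branch turns out
% Even at only a 1% probability for the non-FOOM branch, its expected stakes dominate whenever
\[
  0.01 \cdot \Delta U > C_{\text{now}},
  \quad\text{i.e.,}\quad
  \Delta U > 100 \cdot C_{\text{now}}.
\]
```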
Upvoted for making me think.
I agree with JoshuaZ's post in that the probability of UFAI creation increases with the number of people trying to create AGI without concern for Friendliness, and that this is a much better measure of the risk than the number of locations at which such research takes place.
The world would probably stand a better chance without certain AGI projects, but I don't think effort put into dismantling them is nearly as efficient as effort put toward FAI (considering that a FOOM-ing FAI will probably be able to stop future UFAI), especially given current laws, etc. By the way, I don't see why you're talking about eliminating countries and such. People who are not working on AGI have a very low likelihood of creating UFAI, so I think you would just want to target the projects.
You seem to be using "zero utility" the way I would use "infinite negative utility." To me, zero utility means that I don't care in the slightest whether something happens or not. That said, I don't assign infinite negative utility to anything (primarily because it causes my brain to bug out), so the probability of something happening still has a significant effect on the expected utility.
Would you say China has a probability of less than 10^-20 of developing UFAI? Or would you assign the utility of the entire future of the roughly 10^23 stars in the universe over the next 10^10 years to be less than 10^20 times the utility of life in China today? You must pick one (modulo time discounting) if you're working within the generic LW existential-risk, long-future, big-universe scenario.
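A minimal sketch of the comparison being forced here, with symbols of my own choosing (the comment itself only gives the orders of magnitude):

```latex
% Illustrative symbols (not from the original comment):
%   p        = probability that China develops UFAI
%   U_future = utility of the whole future (~10^23 stars over ~10^10 years)
%   U_China  = utility of life in China today, with U_future taken as ~10^20 * U_China
% On the post's logic, leaving China alone only comes out ahead if the expected loss
% is smaller than what suppression would destroy:
\[
  p \cdot U_{\text{future}} < U_{\text{China}}
  \quad\Longleftrightarrow\quad
  p < \frac{U_{\text{China}}}{U_{\text{future}}} \approx 10^{-20}.
\]
```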