> My point was that there would be no need to kill, say, the guy working in a textile factory. I know that probabilities of zero and one are not allowed, but I feel that I can safely round the chance that he will be directly involved in creating a UFAI to zero. I assume you agree that (negative utility produced by killing all people not working on FAI) > (negative utility produced by killing all people pursuing AGI that are not paying attention to Friendliness), so I think that you would want to take the latter option.
>
> I did not claim that if I had the ability to eliminate all non-Friendly AGI projects I would not do so. (To remove the negatives, I believe that I would do so, subject to a large amount of further deliberation.)

> I feel that I can safely round the chance that he will be directly involved in creating a UFAI to zero.
I would explain why I disagree with this, but my ultimate goal is not to motivate people to nuke China. My goal is more nearly the opposite: to get people to realize that the usual LW approach has cast the problem in terms that logically justify killing most people. Once people realize that, they'll be more open to alternative ways of looking at the problem.
Having all known life concentrated on a single planet, Earth, is an existential risk. So we should try to spread out, right? As soon as possible?
Yet, if we had advanced civilizations on two planets, that would be two places for unfriendly AI to originate. If, as many people here believe, a single failed trial ruins the universe, you want to have as few places trying it as possible. So you don't want any space colonization until after AI is developed.
If you apply that logic to countries, you would want as few industrialized nations as possible until AAI (After AI). So instead of trying to help Africa, India, China, and the Middle East develop, you should be trying to suppress them. In fact, if you really believed the calculations I commonly see used in these circles about the probability of unfriendly AI and its consequences, you should be trying to exterminate human life outside of your developed country of choice. Failing to do so would be immoral.
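To spell out the arithmetic this line of reasoning implies (the numbers here are purely illustrative, not estimates anyone has actually endorsed): suppose each independent industrialized region has some probability p of producing a ruinous UFAI, and a ruined universe has utility zero. With n such regions, expected utility is roughly

$$E[U] \approx (1 - p)^n \cdot U_{\text{good}}$$

Each additional region multiplies expected utility by (1 - p). With an illustrative p = 0.1, cutting n from five regions to one raises the survival factor from 0.9^5 ≈ 0.59 to 0.9, and if U_good is measured in galactic civilizations, that gain swamps any finite humanitarian cost of suppression. That is the structure of the calculation whose conclusions I am laying out.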
And if you apply it within the USA, you need to pick one of MIT, Stanford, and Carnegie Mellon, and burn the other two to the ground.
Of course, doing this will slow the development of AI. But that's a good thing, if UFAI is the most likely outcome and a UFAI future has zero utility.
In fact, if slowing development is good, probably the best thing of all is just to destroy civilization and stop development completely.
Do you agree with any of this? Is there a point where you think it goes too far? If so, say where it goes too far and explain why.
I see two main flaws in the reasoning.
ADDED: A number of the comments so far imply that the first AI built will necessarily FOOM immediately. FOOM is an appealing argument. I've argued in favor of it myself. But it is not a theorem. I don't care who you are; you do not know enough about AI and its future development to bet the future of the universe on your intuition that non-FOOMing AI is impossible. You may even think FOOM is the default case; that does not make it the only case to consider. In this case, even a 1% chance of a non-FOOM AI, multiplied by astronomical differences in utility, could justify terrible present disutility.
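To make the shape of that last calculation explicit (the magnitudes below are hypothetical, chosen only for illustration): let D be the disutility of some terrible present intervention, and let ΔU be the utility difference, within the non-FOOM branch, between worlds with and without the intervention. Naive expected-utility maximization endorses the intervention whenever

$$0.01 \cdot \Delta U > D$$

If ΔU is astronomical, say on the order of 10^50 future lives, then even a D on the order of 10^10 lives lost now satisfies the inequality by nearly forty orders of magnitude. Assigning any nontrivial probability to the non-FOOM case is enough to force this conclusion.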