In fact, if slowing development is good, probably the best thing of all is just to destroy civilization and stop development completely.
Possibly a good idea (when you frame it as a trolley problem, with the whole of humanity's future potential on the other side), but too difficult to implement in a way that gives an advantage to the future development of FAI; otherwise you just increase existential risk if civilization never recovers, or replay the same race we face now.
Also, depending on temporal discounting, even a perfect plan that trades current humanity for a future FAI with certainty could be the wrong choice: with enough discounting, we'd prefer to keep present humanity and reject the future FAI. If there's no discounting, then the FAI is the better choice; but we don't really know which way of valuing the future is right.
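To make the discounting trade-off concrete, here is a minimal sketch; all symbols are illustrative assumptions of mine, not part of the original argument. Let $u$ be the utility of present humanity, $U$ the utility of a future FAI arriving $T$ years from now, and $\delta \in (0, 1]$ a per-year discount factor. Trading the present for the future wins exactly when

$$\delta^T U > u \quad\Longleftrightarrow\quad T < \frac{\ln(U/u)}{\ln(1/\delta)} \qquad (\delta < 1).$$

With no discounting ($\delta = 1$), any $U > u$ favors the FAI; with any discounting at all, a sufficiently distant FAI loses to present humanity. That is the dependence described above.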
Upvoted and I mostly agree, but there's one point I don't get. I thought temporal discounting was considered a bias. Is it not necessarily one?
Having all known life concentrated on a single planet is an existential risk. So we should try to spread out, right? As soon as possible?
Yet if we had advanced civilizations on two planets, that would be two places for an unfriendly AI to originate. If, as many people here believe, a single failed trial ruins the universe, you want as few places trying it as possible. So you don't want any space colonization until after AI is developed.
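A hedged formalization of that logic (the independence assumption is mine, not stated above): if there are $n$ sites, each independently producing UFAI with probability $q$, the chance of at least one catastrophe is

$$1 - (1 - q)^n,$$

which is strictly increasing in $n$. So if a single failure ruins everything and $q$ is fixed, fewer sites are strictly safer.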
If you apply that logic to countries, you would want as few industrialized nations as possible until AAI (After AI). So instead of trying to help Africa, India, China, and the Middle East develop, you should be trying to suppress them. In fact, if you really believed the calculations I commonly see used in these circles about the probability of unfriendly AI and its consequences, you should be trying to exterminate human life outside your developed country of choice. Failing to do so would be immoral.
And if you apply it within the USA, you need to pick one of MIT, Stanford, and Carnegie Mellon, and burn the other two to the ground.
Of course, doing this will slow the development of AI. But that's a good thing, if UFAI is most likely and has zero utility.
In fact, if slowing development is good, probably the best thing of all is just to destroy civilization and stop development completely.
Do you agree with any of this? Is there a point where you think it goes too far? If so, say where, and explain why.
I see two main flaws in the reasoning.
ADDED: A number of the comments so far imply that the first AI built will necessarily FOOM immediately. FOOM is an appealing scenario; I've argued in favor of it myself. But it is not a theorem. I don't care who you are; you do not know enough about AI and its future development to bet the future of the universe on your intuition that a non-FOOMing AI is impossible. You may even think FOOM is the default case; that does not make it the only case to consider. And in that case, even a 1% chance of a non-FOOM AI, multiplied by astronomical differences in utility, could justify terrible present disutility.
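A minimal sketch of that last claim, with hypothetical symbols and numbers of my own choosing: let $p$ be the probability that AI does not FOOM, $\Delta U$ the utility difference the drastic policy secures in that branch, and $D$ the present disutility the policy inflicts. Under straight expected utility, the policy is justified whenever

$$p \, \Delta U > D.$$

With $p = 0.01$ and $\Delta U$ astronomical (a stand-in for the value of the entire future), $D$ can be as large as the destruction of whole civilizations and the inequality still holds. That is the sense in which astronomical stakes can justify terrible present disutility.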