Moderator: "In our televised forum, 'Moral problems of our time, as seen by dead people', we are proud and privileged to welcome two of the most important men of the twentieth century: Adolf Hitler and Mahatma Gandhi. So, gentlemen, if you had a general autonomous superintelligence at your disposal, what would you want it to do?"
Hitler: "I'd want it to kill all the Jews... and humble France... and crush communism... and give a rebirth to the glory of all the beloved Germanic people... and cure our blond, blue-eyed (plus me) glorious Aryan nation of the corruption of lesser brown-eyed races (except for me)... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and..."
Gandhi: "I'd want it to convince the British to grant Indian independence... and overturn the caste system... and cause people of different colours to value and respect one another... and grant self-sustaining livelihoods to all the poor and oppressed of this world... and purge violence from the hearts of men... and reconcile religions... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and..."
Moderator: "And if instead you had a superintelligent Oracle, what would you want it to do?"
Hitler and Gandhi together: "Stay inside the box and answer questions accurately."
Solving other x-risks will not save us from uFAI. Solving FAI will save us from other x-risks. Solving Oracle AI might save us from other x-risks. I think we should be working on both FAI and Oracle AI.
Good point. I will have to think about it further. Just a few thoughts:
Safe nanotechnology (unsafe nanotechnology being an existential risk itself) would also save us from various existential risks. Arguably it would save us from fewer of them than a fully-fledged friendly AI would. But assume that the disutility of both scenarios is about the same.
An evil AI (as opposed to a merely unfriendly AI) is as unlikely as a friendly AI. Both kinds of risk would probably simply wipe us out, without causing extra disutility. If...