Moderator: "In our televised forum, 'Moral problems of our time, as seen by dead people', we are proud and privileged to welcome two of the most important men of the twentieth century: Adolf Hitler and Mahatma Gandhi. So, gentlemen, if you had a general autonomous superintelligence at your disposal, what would you want it to do?"
Hitler: "I'd want it to kill all the Jews... and humble France... and crush communism... and give a rebirth to the glory of all the beloved Germanic people... and cure our blond, blue-eyed (plus me) glorious Aryan nation of the corruption of lesser brown-eyed races (except for me)... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and..."
Gandhi: "I'd want it to convince the British to grant Indian independence... and overturn the caste system... and cause people of different colours to value and respect one another... and grant self-sustaining livelihoods to all the poor and oppressed of this world... and purge violence from the hearts of men... and reconcile religions... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and..."
Moderator: "And if instead you had a superintelligent Oracle, what would you want it to do?"
Hitler and Gandhi together: "Stay inside the box and answer questions accurately."
If we had infinite time, I'd agree with you. But I feel we have little chance of solving FAI before the shit indeed does hit the fan (and us). The route safe Oracle -> Oracle-assisted FAI design seems more plausible to me, especially as we are so much better at correcting errors than at preventing them, so a prediction Oracle (if safe) would play to our strengths.
If I assume a high probability of risks from AI and a short planning horizon, then I agree. But it is impossible to say. I take the same stance as Holden Karnofsky of GiveWell regarding the value of FAI research at this point: