Moderator: "In our televised forum, 'Moral problems of our time, as seen by dead people', we are proud and privileged to welcome two of the most important men of the twentieth century: Adolf Hitler and Mahatma Gandhi. So, gentlemen, if you had a general autonomous superintelligence at your disposal, what would you want it to do?"
Hitler: "I'd want it to kill all the Jews... and humble France... and crush communism... and give a rebirth to the glory of all the beloved Germanic people... and cure our blond, blue-eyed (plus me) glorious Aryan nation of the corruption of lesser brown-eyed races (except for me)... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and..."
Gandhi: "I'd want it to convince the British to grant Indian independence... and overturn the caste system... and cause people of different colours to value and respect one another... and grant self-sustaining livelihoods to all the poor and oppressed of this world... and purge violence from the hearts of men... and reconcile religions... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and..."
Moderator: "And if instead you had a superintelligent Oracle, what would you want it to do?"
Hitler and Gandhi together: "Stay inside the box and answer questions accurately."
I know and I didn't downvote your post either. I think it is good to stimulate more discussion about alternatives (or preliminary solutions) to friendly AI in case it turns out to be unsolvable in time.
The problem is that you appear to be saying that it would somehow be "safe". If you are talking about expert systems, then it would presumably not be a direct risk, but (if it is advanced enough to make real progress that humans alone can't) it would be a huge stepping stone towards fully general intelligence. That means that if you target Oracle AI instead of friendly AI, you will just increase the probability of uFAI.
Oracle AI has to be a last resort for when the shit hits the fan.
(ETA: If you mean we should also work on solutions to keep a possible Oracle AI inside a box (a light version of friendly AI), then I agree. But one should first try to figure out how likely friendly AI is to be solved before allocating resources to Oracle AI.)
If we had infinite time, I'd agree with you. But my feeling is that we have little chance of solving FAI before the shit indeed does hit the fan, and us. The route safe Oracle -> Oracle-assisted FAI design seems more plausible to me. Especially as we are so much better at correcting errors than preventing them, so a prediction Oracle (if safe) would play to our strengths.