Moderator: "In our televised forum, 'Moral problems of our time, as seen by dead people', we are proud and privileged to welcome two of the most important men of the twentieth century: Adolf Hitler and Mahatma Gandhi. So, gentlemen, if you had a general autonomous superintelligence at your disposal, what would you want it to do?"
Hitler: "I'd want it to kill all the Jews... and humble France... and crush communism... and give a rebirth to the glory of all the beloved Germanic people... and cure our blond, blue-eyed (plus me) glorious Aryan nation of the corruption of lesser brown-eyed races (except for me)... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and..."
Gandhi: "I'd want it to convince the British to grant Indian independence... and overturn the caste system... and cause people of different colours to value and respect one another... and grant self-sustaining livelihoods to all the poor and oppressed of this world... and purge violence from the hearts of men... and reconcile religions... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and..."
Moderator: "And if instead you had a superintelligent Oracle, what would you want it to do?"
Hitler and Gandhi together: "Stay inside the box and answer questions accurately."
But the title of your post talks about how a safe Oracle AI is easier than a safe general AI. Whose questions would be safe to answer?
If an Oracle AI could be used to help spawn friendly AI, then it might be a possibility to consider, but under no circumstances would I call it safe as long as it isn't already friendly.
If we rely on humans to ask the right questions, how long before someone asks a question whose answer is dangerous knowledge?
You'd be forced to ask dangerous questions anyway: once you can build an Oracle AI, you have to expect that others can build one too, and that they will ask reckless questions.
If we had a truly safe oracle, we could ask it questions about the consequences of doing certain things, and of knowing certain things.
I can see society adapting stably to a safe oracle without needing it to be friendly.