Moderator: "In our televised forum, 'Moral problems of our time, as seen by dead people', we are proud and privileged to welcome two of the most important men of the twentieth century: Adolf Hitler and Mahatma Gandhi. So, gentlemen, if you had a general autonomous superintelligence at your disposal, what would you want it to do?"
Hitler: "I'd want it to kill all the Jews... and humble France... and crush communism... and give a rebirth to the glory of all the beloved Germanic people... and cure our blond, blue-eyed (plus me) glorious Aryan nation of the corruption of lesser brown-eyed races (except for me)... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and..."
Gandhi: "I'd want it to convince the British to grant Indian independence... and overturn the caste system... and cause people of different colours to value and respect one another... and grant self-sustaining livelihoods to all the poor and oppressed of this world... and purge violence from the hearts of men... and reconcile religions... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and... and..."
Moderator: "And if instead you had a superintelligent Oracle, what would you want it to do?"
Hitler and Gandhi together: "Stay inside the box and answer questions accurately."
I don't have strong insight into the psychology of Hitler, and consider it possible that the CEV process would filter out the insanity and produce mostly the same result as the CEV of pretty much anyone else.
Even if not, a universe filled with happy "Aryans" working on "perfecting" themselves would be a lot better than a universe filled with paper clips (or a dead universe), and from a consequentialist point of view genocide isn't worse than being reprocessed into paper clips (this assumes Hitler wouldn't want to create an astronomical number of "untermenschen" just to make them suffer).
On aggregate, outcomes worse than a Hitler-CEV AGI (eventual extinction from non-AI causes, UFAI, alien AGI with values even more distasteful than Hitler's) seem quite a bit more likely than better outcomes (FAI, AI somehow never happening and humanity reaching a good outcome anyway, alien AGI with values less distasteful than Hitler's).
(Yes, CEV is most likely better than nothing but...)
This is way, way off. CEV isn't a magic tool that makes people have preferences that we consider 'sane'. People really do have drastically different preferences. Value is fragile.