It doesn't seem that scary to me. I don't see it as substituting "its own judgement" for ours. It doesn't have a judgement of its own. Rather, it believes (trivially correctly) that if we were wiser, we would be wiser than we are now. And if it can reliably figure out what a wiser version of us would say, it substitutes that person's judgement for ours.
I suppose I imagine that if someone told me I shouldn't try to solve death, I would direct them to LessWrong, try to explain the techniques of rationality, refer them to a rationalist dojo, and so on, until they're a good enough rationalist to avoid reproducing memes they don't really believe in -- then ask them again.
The AI, with massively greater resources, can of course simulate all this instead, saving a lot of time. And the benefit of the AI's method is that when the "simulation" says "I wish the AI had started preventing death right away instead of waiting for me to become a rationalist", the AI can grant this wish!
The AI doesn't inherently know what's good or bad. It doesn't even know what it should be surprised by (only transhumanists seem to realise that "let's not prevent death" shouldn't make sense). It can only find out by asking us, and of course the right answer is more likely to come from a "wise" person. So the best way for the AI to find out what is right or wrong is to make everyone as wise as possible and then ask them (or predict what would happen if it did).
"What would I do if I were wiser?" may not be a meaningful question. Your current idea of wisdom is shaped by your current limitations.
At least the usual idea of wisdom is that it's acquired through experience, and how can you know how more experience will affect you? Even your idea of wisdom formed by observing people who seem wiser than yourself is necessarily incomplete. All you can see are the effects of a process you haven't incorporated into yourself.
I know Wei Dai has criticized CEV as a construct, I believe offering the alternative of rigorously specifying volition *before* building an AI. I couldn't find these posts/comments via a search -- can anyone link me? Thanks.
There may be related top-level posts, but there is a good chance that what I am specifically thinking of was a comment-level conversation between Wei Dai and Vladimir Nesov.
Also feel free to use this thread to criticize CEV and to talk about other possible systems of volition.