That is pretty much how I understood it too, and it scares me. I would strongly prefer that it ask "Why not conquer death? I don't understand" rather than just going ahead and ignoring my stated preference. I dislike that it would substitute its judgment for mine simply because it believes it is wiser. You don't discover the volition of mankind by ignoring what mankind tells you.
It doesn't seem that scary to me. I don't see it as substituting "its own judgement" for ours; it doesn't have a judgement of its own. Rather, it believes (trivially correctly) that if we were wiser, we would be wiser than we are now. And if it can reliably figure out what a wiser version of us would say, it substitutes that wiser version's judgement for ours.
I suppose I imagine that if someone told me I shouldn't try to solve death, I would direct them to LessWrong, try to explain the techniques of rationality to them, refer them to a rationalist dojo, and so on.
I know Wei Dai has criticized CEV as a construct; I believe he offered the alternative of rigorously specifying volition *before* building an AI. I couldn't find these posts/comments via a search; can anyone link me? Thanks.
There may be related top-level posts, but there is a good chance that what I am specifically thinking of was a comment-level conversation between Wei Dai and Vladimir Nesov.
Also feel free to use this thread to criticize CEV and to talk about other possible systems of volition.