It [FAI] doesn't have a judgement of its own.
[...]
And if it [FAI] can reliably figure out what a wiser version of us would say, it substitutes that person's judgement for ours.
[...]
I would direct the person to LessWrong, [...] until they're a good enough rationalist [...] -- then ask them again.
It seems there is a flaw in your reasoning: you would direct a person to LessWrong, while someone else would direct them to church. Yet the FAI is supposed to figure out, without a judgment of its own, which direction would make the person wiser.
That's true.
According to the 2004 paper, Eliezer thinks (or thought, anyway) "what we would decide if we knew more, thought faster, were more the people we wished we were, had grown up farther together..." would do the trick. Presumably that's the part to be hard-coded in. Or you could extrapolate (using the above) what people would say "wisdom" amounts to and use that instead.
Actually, I can't imagine someone who knew and understood both the methods of rationality (having been directed to LessWrong) and all the teachings of the church ...
I know Wei Dai has criticized CEV as a construct; I believe he offered the alternative of rigorously specifying volition *before* building an AI. I couldn't find those posts/comments via search. Can anyone link me? Thanks.
There may be related top-level posts, but there is a good chance that what I am specifically thinking of was a comment-level conversation between Wei Dai and Vladimir Nesov.
Also feel free to use this thread to criticize CEV and to talk about other possible systems of volition.