Probably a good idea to unpack non-ubiquitous abbreviations at least once per post, maybe even provide links, like to this page about Coherent Extrapolated Volition as a method for choosing an AI's morals.
But yeah, sorry, 2 minutes of Google-fu didn't find it, and I don't particularly care to invest more, though I probably found enough of Wei Dai's isolated thoughts to approximate his criticisms. Good luck!
I know Wei Dai has criticized CEV as a construct; I believe he offered the alternative of rigorously specifying volition *before* building an AI. I couldn't find these posts/comments via a search — can anyone link me? Thanks.
There may be related top-level posts, but there is a good chance that what I am specifically thinking of was a comment-thread conversation between Wei Dai and Vladimir Nesov.
Also feel free to use this thread to criticize CEV and to talk about other possible systems of volition.