Interesting exchange. It would be great if Luke or someone else at SIAI could address the question raised by Alan towards the end of the discussion:
I'm often puzzled by SIAI's focus on CEV in general. Do they really think it has a chance of being implemented? Only if they design an AGI in the basement does it seem possible. Otherwise, these decisions will be muddied by power politics, as most things are.
Maybe CEV can be a playground for thinking about general AGI goal systems. But talking as though it has much chance of coming to fruition, even if AGI does come about, seems odd to me.
I've begun an online discussion with Alan Dawrst (Brian Tomasik) of utilitarian-essays.com concerning Friendly AI and utilitarianism. Interested parties may wish to follow along or participate.
The forum thread now contains many overlapping discussions. For clarity, here's an index of the narrow, core discussion between Alan and me: