
Comment author: UnclGhost 28 November 2010 09:59:56PM 3 points

Something else that humans generally value is autonomy. Why not just make an optional colony of superhappiness?

Comment author: DrRobertStadler 14 September 2011 12:49:59AM 9 points

At what point do children get to choose it?

Comment author: Jack 13 September 2011 08:36:33PM 2 points

Interesting handle.

Comment author: DrRobertStadler 13 September 2011 09:56:35PM 0 points

Thank you.

Comment author: PhilGoetz 15 May 2011 04:50:15AM 4 points

I agree entirely with both of wedrifid's comments above. Just read the CEV document and ask, "If you were tasked with implementing this, how would you do it?" I tried many times, back on Overcoming Bias, to elicit details from Eliezer on several points, without success, until I concluded he did not want to go into those details.

One obvious question is, "The expected-value calculations I make from your stated beliefs indicate that your Friendly AI should prefer killing a billion people to taking a 10% chance that one of them is developing an AI; do you agree?" (If the answer is "no", I suspect that is only due to time discounting of utility.)
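
For concreteness, here is a minimal sketch of the expected-value comparison that question gestures at. The 10% probability and the one billion deaths come from the question itself; the loss assigned to an unfriendly-AI win is an illustrative assumption, not a figure from the CEV document or from Eliezer.

    # Illustrative only: the 10% chance and the one-billion figure come from the
    # comment above; the loss assigned to an unfriendly-AI win is an assumption.
    p_ufai = 0.10                # chance one of the billion is building an unfriendly AI
    loss_if_kill = 1e9           # lives lost if the FAI pre-emptively kills the group
    loss_if_ufai_wins = 1e14     # assumed loss: extinction plus forgone future lives

    expected_loss_wait = p_ufai * loss_if_ufai_wins
    print(f"expected loss if it kills: {loss_if_kill:.1e}")
    print(f"expected loss if it waits: {expected_loss_wait:.1e}")
    # Killing only comes out "preferred" when loss_if_ufai_wins exceeds
    # loss_if_kill / p_ufai = 1e10; time-discounting future lives can reverse this.

Whether the comparison comes out in favour of killing depends entirely on the loss and discount rate assigned to an unfriendly-AI takeover, which is the point of the parenthetical about time discounting.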

Comment author: DrRobertStadler 13 September 2011 08:51:02PM 1 point

Surely, though, if the FAI is in a position to execute that action, it is already so far ahead of any AI someone could be developing that it would have little to fear from that possibility as a threat to CEV?

Comment author: DrRobertStadler 13 September 2011 08:31:56PM 4 points

Hi.