Eliezer_Yudkowsky comments on You Provably Can't Trust Yourself - Less Wrong

Post author: Eliezer_Yudkowsky 19 August 2008 08:35PM


Comment author: Eliezer_Yudkowsky 20 August 2008 01:45:49PM 18 points

If you go back and check, you will find that I never said that extrapolating human morality gives you a single outcome. Be very careful about attributing ideas to me on the basis that others attack me as having them.

The "Coherent" in "Coherent Extrapolated Volition" does not indicate the idea that an extrapolated volition is necessarily coherent.

The "Coherent" part indicates the idea that if you build an FAI and run it on an extrapolated humanity, the FAI should act only on the coherent parts. Where there are multiple attractors, the FAI should hold satisficing avenues open, not try to decide the matter itself.

The ethical dilemma arises if large parts of present-day humanity are already in different attractors.