Allan_Crossman comments on The Meaning of Right - Less Wrong

30 Post author: Eliezer_Yudkowsky 29 July 2008 01:28AM


Comment author: Allan_Crossman 30 July 2008 09:43:00PM 0 points [-]

We do not know very well how the human mind does anything at all. But it cannot be doubted that the human mind comes to have preferences it did not have initially.

I believe Eliezer is trying to create "fully recursive self-modifying agents that retain stable preferences while rewriting their source code". Like Sebastian says, getting the "stable preferences" bit right is presumably necessary for Friendly AI, as Eliezer sees it.

(The clause "as Eliezer sees it" isn't meant to indicate dissent, merely my total incompetence to judge whether this condition is strictly necessary for Friendly AI.)