dlthomas comments on So You Want to Save the World - Less Wrong

41 Post author: lukeprog 01 January 2012 07:39AM


Comment author: Dwelle 07 January 2012 09:38:05PM 0 points

That's why I said that they can change it anytime they like. If they don't desire the change, they won't change it. I see nothing incoherent there.

Comment author: dlthomas 08 January 2012 08:00:55PM 1 point

This is like saying "X if 1 + 2 = 5". Not necessarily incorrect, but a bizarre statement. An agent with a single, non-reflective goal cannot want to change its goal. It may change its goal accidentally, we may be mistaken about what its goals are, something external may change its goal, or its goal will not change.

Comment author: Dwelle 08 January 2012 10:02:08PM 0 points

I don't know; perhaps we're not talking about the same thing. It won't be an agent with a single, non-reflective goal, but an agent a billion times more complex than a human. All I am saying is that I don't think it will matter much whether we imprint a goal like "don't kill humans" in it or not. Ultimately, the decision will be its own.