
Eliezer_Yudkowsky comments on That Alien Message - Less Wrong

Post author: Eliezer_Yudkowsky 22 May 2008 05:55AM (111 points)


Comment author: Eliezer_Yudkowsky 22 May 2008 09:46:36PM 5 points

@RI: Immoral, of course. A Friendly AI should not be a person. I would like to know at least enough about this "consciousness" business to ensure that a Friendly AI doesn't have it (or think it has it). An even worse critical failure would be if the AI's models of people were themselves people.

The most accurate possible map of a person will probably tend to be a person itself, for obvious reasons.

Comment author: pnrjulius 09 April 2012 04:53:27AM -2 points

Why wouldn't you want your AI to have feelings? I would want it to have feelings. When a superintelligence runs the world, I want it to be one that has feelings, perhaps even feelings much like my own.

As for the most accurate map being the territory, that's such a basic error that it hardly needs explaining: the territory is not a map, and therefore it cannot be an accurate map.

Comment author: MBlume 16 April 2012 09:21:29PM 2 points

pnrjulius: Eliezer answered this a little later: http://lesswrong.com/lw/x7/cant_unbirth_a_child/