SforSingularity comments on That Alien Message - Less Wrong

108 Post author: Eliezer_Yudkowsky 22 May 2008 05:55AM


Comment author: Richard_Hollerith 23 May 2008 08:00:00PM 4 points

RI asks,

how moral or otherwise desirable would the story have been if half a billion years' of sentient minds had been made to think, act and otherwise be in perfect accordance to what three days of awkward-tentacled, primitive rock fans would wish if they knew more, thought faster, were more the people they wished they were...

Eliezer answers,

A Friendly AI should not be a person. I would like to know at least enough about this "consciousness" business to ensure a Friendly AI doesn't have (think it has) it. An even worse critical failure is if the AI's models of people are people.

Suppose consciousness and personhood are mistaken concepts. Since personhood is an important concept in our legal systems, there is something in reality (namely, in the legal environment) that corresponds to the term "person" -- but suppose there is no "objective" way to determine whether an intelligent agent is a person, where "objective" means without someone creating a legal definition or taking a vote or something like that. And suppose consciousness is a mistaken concept in the way that phlogiston, the aether and the immortal soul are mistaken concepts. Then would not CEV be morally unjustifiable, because there would be no way to justify the enslavement -- or "entrainment," if you want a less loaded term -- of the FAI to the (extrapolated) desires of the humans?

Comment author: SforSingularity 15 August 2009 01:22:48PM 1 point

Suppose consciousness and personhood are mistaken concepts.

This is almost certainly the case IMO.