JulianMorrison comments on That Alien Message - Less Wrong

111 Post author: Eliezer_Yudkowsky 22 May 2008 05:55AM

Comment author: JulianMorrison 24 May 2008 07:08:00PM 0 points

Hmm, you've convinced me that it would be best to start with a non-person. In fact, if becoming a person were Friendly, the FAI would self-program that way anyhow. (Assumption: an FAI is still a runaway, unstoppable super-mind; it's just one whose goals are aligned with Friendliness. So if it decides X is Friendly, bet your bippy X will happen, and fast, and it won't take no for an answer.)

What I'm still confused by is: what does person / non-person really mean? When I try to think about the idea of "person," I keep running into human assumptions.

What human / humane traits would you exclude from or build into a seed AI? Which ones do you expect to be emergent given high enough general intelligence? (I'm thinking along the lines of: "is it possible to speak, and understand, and not be a person?" Am I being parochial?)