Nymogenous comments on Q&A with Michael Littman on risks from AI

Post author: XiXiDu 19 December 2011 09:51AM, 15 points

Comment author: Nymogenous 19 December 2011 01:04:58PM, 4 points

No, I don't think it's possible. I mean, seriously, humans aren't even provably friendly to us and we have thousands of years of practice negotiating with them.

Not sure this is a fair comparison, for two reasons: 1) we don't yet have the complete source code to human consciousness, so we can't do a good analysis of it, and 2) if anything, primates are provably unfriendly to each other (at least outside their tribal group).

EDIT: Yes, I realize that a human genome is sort of a source code to our behavior, but having it without a complete theory of physics is rather like being given the source code to an AI in an unknown format.

Comment author: JoshuaZ 19 December 2011 05:20:17PM, 4 points

Yes, I realize that a human genome is sort of a source code to our behavior, but having it without a complete theory of physics is rather like being given the source code to an AI in an unknown format.

Having the exact laws of physics here probably matters less than simply having a better understanding of human development. The genome isn't all that matters: which proteins are in the egg at the start matters a lot, and there are things like epigenetics. And the computational power needed to model anything in the human body reliably is immense. The fundamental laws of physics probably don't matter much for human behavior.