John_Maxwell_IV comments on AI Risk and Opportunity: A Strategic Analysis - Less Wrong Discussion

Post author: lukeprog 04 March 2012 06:06AM

Comment author: John_Maxwell_IV 04 March 2012 11:51:25PM 2 points

As you note, humans aren't human-friendly intelligences, or we wouldn't have internal existential risk.

It's possible that particular humans might approximate human-friendly intelligences.

Comment author: David_Gerard 05 March 2012 08:02:55AM -1 points

Assuming it's not impossible, how would you know? What constitutes a human-friendly intelligence, in anything other than negative terms?