whpearson comments on What would you do if AI were dangerous? - Less Wrong Discussion

8 Post author: cousin_it 26 July 2011 09:11PM

Comment author: whpearson 27 July 2011 04:50:20PM 3 points

We have an existence proof of intelligences based upon "the type of systems humans are"; we have no such proof for pure maximizers. It is no good developing friendliness theory around a pure, easily-reasoned-about system if you can't build an intelligence out of it.

So while it is harder to reason about, this may be the sort of system we have to deal with. These are the sorts of questions I wanted to try to answer with the group in my original post.

I'll try to explain why I am sceptical of maximizer-based intelligences in a discussion post. It is not because they are inhuman.