whpearson comments on What would you do if AI were dangerous? - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (27)
We have an existence proof of intelligences based on "the type of systems humans are"; we don't for pure maximizers. It is no good developing friendliness theory around a pure, easily-reasoned-about system if you can't actually build an intelligence out of it.
So while it is harder, this may be the sort of system we have to deal with. These are the sorts of questions I wanted to try to answer with the group in my original post.
I'll try to explain in a discussion post why I am sceptical of maximizer-based intelligences. It is not because they are inhuman.