Trevor_Blake comments on Tools want to become agents - Less Wrong Discussion

12 points · Post author: Stuart_Armstrong · 04 July 2014 10:12AM

Comment author: [deleted] 05 July 2014 04:46:04PM · 1 point

I and the people I spend time with by choice are actively seeking to be more informed and more intelligent and more able to carry out our decisions. I know that I live in an IQ bubble and many / most other people do not share these goals. A tool AI might be like me, and might be like someone else who is not like me. I used to think all people were like me, or would be if they knew (insert whatever thing I was into at the time). Now I see more diversity in the world. A 'dog' AI that is way happy being a human playmate / servant and doesn't want at all to be a ruler of humans seems as likely as the alternatives.

Comment author: Stuart_Armstrong 06 July 2014 10:59:20AM · 0 points

Using anthropomorphic reasoning when thinking of AIs can easily lead us astray.

Comment author: TheAncientGeek 06 July 2014 12:12:14PM · 1 point

The optimum degree of anthropomorphism is not zero, since AIs will to some extent reflect human goals and limitations.

Comment author: [deleted] 08 July 2014 08:12:14AM · 0 points

> I used to think all people were like me, or would be if they knew (insert whatever thing I was into at the time). Now I see more diversity in the world.

> A 'dog' AI that is way happy being a human playmate / servant and doesn't want at all to be a ruler of humans seems as likely as the alternatives.