private_messaging comments on Tool for maximizing paperclips vs a paperclip maximizer - Less Wrong

3 Post author: private_messaging 12 May 2012 07:38AM


Comments (23)


Comment author: private_messaging 14 May 2012 11:33:12PM *  1 point

I don't think humans do have such real-world volition, regardless of whether the simulation hypothesis is true or false. Humans seem to have a blacklist of solutions that are deemed wrong, and that's it. The blacklist gets selected by the world (those using bad blacklists don't reproduce a whole lot), but it isn't really a product of reasoning; the effective approach to reproduction relies on entirely fake ultimate goals (religion), and seems to work only for a low part of the intelligence range.

"Agents" includes humans by definition, but that doesn't mean humans will have the attributes you think agents should have.

Comment author: [deleted] 15 May 2012 12:42:34AM 1 point

If not even humans satisfy your definition of an agent (which was, at least a couple of comments ago, a tool possessing "real world intentionality"), then why is your version of the tool/agent distinction worthwhile?

Comment author: asr 15 May 2012 01:22:10AM 0 points

My impression is that the tool/agent distinction is really about whether we use the social-modeling parts of our brain. It's less a question about the world than about which outlook is fruitful. Modeling humans as humans works well -- we are wired for this. Anthropomorphizing the desires of software or robots is only sometimes useful.