David_Gerard comments on Tools want to become agents - Less Wrong Discussion

Post author: Stuart_Armstrong 04 July 2014 10:12AM 12 points

Comment author: David_Gerard 04 July 2014 01:38:16PM 0 points

And also: Question-answerer->tool->agent is a natural progression just in process automation. (And this is why they're called "daemons".)
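A minimal sketch of that progression, assuming a hypothetical answer() function and an invented disk-usage scenario (nothing here comes from the original comment):

```python
import time

def answer(question):
    """Hypothetical question-answerer: maps a query to a canned response."""
    return {"disk usage?": "73%"}.get(question, "unknown")

# Stage 1: question-answerer -- a human asks once and reads the answer.
print(answer("disk usage?"))  # -> 73%

# Stage 2: tool -- wrapped so other code can invoke it on demand.
def disk_nearly_full(threshold=80):
    usage = int(answer("disk usage?").rstrip("%"))
    return usage > threshold

# Stage 3: agent -- a daemon loop that polls and acts with no human in the loop.
def daemon():
    while True:
        if disk_nearly_full():
            print("alert: disk nearly full")  # initiates action unprompted
        time.sleep(60)
```

Each stage reuses the previous one unchanged; what shifts is who initiates the call: the human, other code, or the process itself.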

I suspect "tool" versus "agent" is a magical category whose use really says more about the person using it.

Comment author: Stuart_Armstrong 04 July 2014 01:44:29PM 2 points

Thanks, that's another good point!

> I suspect "tool" versus "agent" is a magical category whose use really says more about the person using it.

I think the concepts are clear at the extremes, but they tend to get muddled in the middle.

Comment author: XiXiDu 04 July 2014 02:52:23PM 0 points

> > I suspect "tool" versus "agent" is a magical category whose use really says more about the person using it.

> I think the concepts are clear at the extremes, but they tend to get muddled in the middle.

Do you believe that humans are agents? If so, what would you have to do to a human brain in order to turn a human into the other extreme, a clear tool?

I could ask the same about C. elegans. If C. elegans is not an agent, why not? If it is, then what would have to change in order for it to become a tool?

And if these distinctions don't make sense for humans or C. elegans, then why do you expect them to make sense for future AI systems?

Comment author: Stuart_Armstrong 04 July 2014 03:11:00PM 0 points

Both your examples are agents currently. A calculator is a tool.

Anyway, I've still got a lot more work to do before I seriously discuss this issue.

Comment author: XiXiDu 04 July 2014 03:51:52PM 3 points

I'd be especially interested in edge cases. Is e.g. Google's driverless car closer to being an agent than a calculator? If so, and if intelligence is something independent of goals and agency, would adding a "general intelligence module" make Google's driverless car dangerous? Would it make your calculator dangerous? If so, why would it suddenly care to e.g. take over the world, if intelligence is indeed independent of goals and agency?

Comment author: TheAncientGeek 05 July 2014 12:53:36PM 0 points

A driverless car is firmly on the agent side of the fence, by my definitions. Feel free to state your own, anybody.

Comment author: David_Gerard 04 July 2014 04:26:13PM 0 points

A cat's an agent. It has goals it works towards. I've seen cats manifest creativity that surprised me.

Comment author: TheAncientGeek 05 July 2014 01:32:26PM 0 points

Why is that surprising? Does anyone think that "agent" implies human-level intelligence?