Stuart_Armstrong comments on Tools want to become agents - Less Wrong Discussion
Thanks, that's another good point!
I think the concepts are clear at the extremes, but they tend to get muddled in the middle.
Do you believe that humans are agents? If so, what would you have to do to a human brain in order to turn a human into the other extreme, a clear tool?
I could ask the same about C. elegans. If C. elegans is not an agent, why not? If it is, then what would have to change in order for it to become a tool?
And if these distinctions don't make sense for humans or C. elegans, then why do you expect them to make sense for future AI systems?
Both your examples are agents currently. A calculator is a tool.
Anyway, I've still got a lot more work to do before I seriously discuss this issue.
I'd be especially interested in edge cases. Is e.g. Google's driverless car closer to being an agent than a calculator? If so, and if intelligence is something independent of goals and agency, would adding a "general intelligence module" make Google's driverless car dangerous? Would it make your calculator dangerous? If so, why would it suddenly care to e.g. take over the world, if intelligence is indeed independent of goals and agency?
A driverless car is firmly on the agent side of the fence, by my definitions. Feel free to state your own, anybody.
A cat's an agent. It has goals it works towards. I've seen cats manifest creativity that surprised me.
Why is that surprising? Does anyone think that "agent" implies human level intelligence?