Nebu comments on Thoughts on the Singularity Institute (SI) - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
One definition of intelligence I've seen thrown around on LessWrong is the ability to figure out how to steer reality in specific directions, given the resources available.
Both the tool and the agent are intelligent in the sense that, given some sort of goal, they can formulate a plan for achieving it; the difference is that the agent will execute the plan, while the tool will report the plan.
I'm assuming, for the sake of isolating the key difference, that both the tool-AI and the agent-AI are "passively" waiting for instructions from a human before they spring into action. To an agent-AI, I might say "Take me to my house", whereas to a tool-AI, I would say "What's the quickest route to my house?", and as soon as I utter these words, the AI suddenly has a new utility function to use in evaluating any possible plan it comes up with.
Assuming it's always possible to decouple "ability to come up with a plan" from both "execute the plan" and "display the plan", any "tool" can be converted into an "agent" by replacing every instance of "display the plan" with "execute the plan", and vice versa for converting an agent into a tool.
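To make the decoupling concrete, here's a minimal sketch (all names hypothetical, with a canned stand-in for the actual planning capability): a single shared planner, where the tool and the agent differ only in what happens to the plan after it's produced.

```python
# Hypothetical sketch of the tool/agent decoupling described above.
# A shared planner produces a plan; the only difference between a
# "tool" and an "agent" is what is done with that plan afterwards.

from typing import Callable, List

def make_plan(goal: str) -> List[str]:
    # Stand-in for the shared "ability to come up with a plan";
    # here it just returns canned steps rather than real planning.
    return [f"step 1 toward {goal!r}", f"step 2 toward {goal!r}"]

def tool_ai(goal: str) -> List[str]:
    # Tool: report the plan to the human and do nothing else.
    return make_plan(goal)  # "display the plan"

def agent_ai(goal: str, execute: Callable[[str], None]) -> None:
    # Agent: same planner, but each step is carried out instead of shown.
    for step in make_plan(goal):
        execute(step)  # "execute the plan"
```

Converting the tool into the agent is then literally a matter of swapping the final "display" call for an "execute" call, which is the substitution described above.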