TheAncientGeek comments on Tools want to become agents - Less Wrong Discussion

Post author: Stuart_Armstrong 04 July 2014 10:12AM 12 points


Comment author: TheAncientGeek 04 July 2014 02:24:57PM 0 points

AFAICT, tool AIs are passive, and agents are active. That is, the default state of a tool AI is to do nothing. If one gives a tool AI the instruction "do (some finite) x and stop", one would not expect the AI to create subagents with goal x, because that would disobey the "and stop".
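
A minimal sketch of the passive/active distinction being drawn here, in Python; perform_step, plan_next_step and goal_satisfied are invented stand-ins, not anyone's proposed design:

    def perform_step(step):
        print("executing:", step)

    def tool_ai(task_steps):
        """Passive: runs a finite, human-stipulated task, then halts."""
        for step in task_steps:
            perform_step(step)
        # "and stop": nothing further happens; control returns to the human

    def agent_ai(goal_satisfied, plan_next_step):
        """Active: keeps generating and executing steps of its own accord."""
        while not goal_satisfied():
            perform_step(plan_next_step())

    tool_ai(["fetch data", "summarise", "report"])  # does x, then nothing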

Comment author: mwengler 04 July 2014 03:55:02PM 1 point

> with goal x, because that would disobey the "and stop".

I think you are pointing out that it is possible to create tools with a simple-enough, finite-enough, not-self-coding-enough program that they will reliably not become agents.

And indeed, we have plenty of experience with tools that do not become agents (hammers, digital watches, repair manuals, contact management software, compilers).

The question really is: is there a level of complexity at which something that on its face does not appear to be AI would wind up seeming agenty? Could you write a medical diagnostic tool that was adaptive, and find one day that it was systematically installing sewage-treatment systems in areas with water-borne diseases, or, even agentier, building libraries and schools?

If consciousness is an emergent phenomenon, and if consciousness and agentiness are closely related (I think they are at least similar, and probably related), then it seems at least plausible that AI could arise from more and more complex tools with more and more recursive self-coding.

It would be helpful in understanding this if we had the first idea how consciousness or agentiness arose in life.

Comment author: TheAncientGeek 04 July 2014 05:50:05PM * 0 points

I'm pointing out that tool AI, as I have defined it, will not turn itself into agentive AI except by malfunction, i.e. it's relatively safe.

Comment author: Stuart_Armstrong 04 July 2014 02:38:51PM * 1 point

"and stop your current algorithm" is not the same as "and ensure your hardware and software have minimised impact in the future".

Comment author: TheAncientGeek 04 July 2014 03:11:53PM 0 points

What does the latter mean? Self-destruct in case anyone misuses you?

Comment author: Stuart_Armstrong 04 July 2014 03:13:39PM 1 point

I'm pointing out that "suggest a plan and stop" does not prevent the tool from suggesting a plan that turns itself into an agent.

Comment author: TheAncientGeek 04 July 2014 03:24:48PM 0 points

My intention was that the x is stipulated by a human.

If you instruct a tool AI to make a million paperclips and stop, it won't turn itself into an agent with a stable goal of paperclipping, because such an agent would not stop.

Comment author: Stuart_Armstrong 04 July 2014 03:35:45PM 1 point

Yes, if the reduced impact problem is solved, then a reduced impact AI will have a reduced impact. That's not all that helpful, though.

Comment author: TheAncientGeek 04 July 2014 04:42:20PM * -1 points

I don't see what needs solving. If you ask Google Maps the way to Tunbridge Wells, it doesn't give you the route to Timbuktu.