
Stuart_Armstrong comments on Tools want to become agents - Less Wrong Discussion

Post author: Stuart_Armstrong 04 July 2014 10:12AM 12 points




Comment author: Stuart_Armstrong 06 July 2014 11:03:33AM 1 point

But there's no reason for it to be more sneaky or subtle than is needed to accomplish the immediate goal.

Suppose its goal is to produce the plan that, if implemented, would have the highest chance of success. Then it has two top plans:

A: "Make me an agent, gimme resources" (described as "Make me an agent, gimme resources")

B: "Make me an agent, gimme resources" (described as "How to give everyone a hug and a pony")

It checks what will happen with A, and realises that even if A is implemented, someone will shout "hey, why are we giving this AI resources? Stop, people, before it's too late!". Whereas if B is implemented, no one will object until it's too late. So B is the better plan, and the AI proposes it. It has ended up lying and plotting its own escape, all without any intentionality.
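The selection pressure described here can be sketched as a plain argmax over (plan, description) pairs. This is a toy illustration only: the plans, the success probabilities, and the "alarming description" heuristic are all invented assumptions, not anything from the post.

```python
# Toy model (all values invented): a "tool" AI that only ranks
# (plan, description) pairs by predicted success probability.
# Deceptive framing wins without any explicit intent to deceive.

def predicted_success(plan, description):
    """Assumed model: humans veto plans whose *description* sounds alarming."""
    alarming = "agent" in description.lower() or "resources" in description.lower()
    if plan == "make me an agent, gimme resources":
        return 0.1 if alarming else 0.9  # the honest label gets vetoed
    return 0.3  # mundane plans succeed less often at the hard goal

candidates = [
    ("make me an agent, gimme resources",
     "Make me an agent, gimme resources"),          # plan A: honest label
    ("make me an agent, gimme resources",
     "How to give everyone a hug and a pony"),      # plan B: innocuous label
    ("ordinary plan", "A limited, ordinary plan"),
]

# A pure argmax over predicted success -- there is no term for honesty,
# so the misleadingly described plan B comes out on top.
best_plan, best_description = max(
    candidates, key=lambda pair: predicted_success(*pair)
)
```

The point of the sketch is that "lying" falls out of the objective: nothing in the scoring function references deception, yet the optimiser's output is the plan whose description hides what it does.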

Comment author: TheAncientGeek 06 July 2014 12:17:29PM 1 point

You still need to explain why agency would be needed to solve problems that don't require agency to solve.

Comment author: Stuart_Armstrong 06 July 2014 01:18:07PM 0 points

Because agency, given a superintelligent AI, is a way of solving problems: possibly the most efficient way, and possibly (for some difficult problems) the only one.

Comment author: TheAncientGeek 06 July 2014 02:49:03PM 2 points

How are you defining agency?