V_V comments on How to Study Unsafe AGI's safely (and why we might have no choice) - Less Wrong

Post author: Punoxysm 07 March 2014 07:24AM


Comment author: V_V 07 March 2014 03:26:28PM 4 points

> Stick a paperclipper in a sandbox with enough information about what humans want out of an AI and the fact that it's in a sandbox, and the outputs are going to look suspiciously like a pro-human friendly AI. Then you let it out of the box, whereupon it turns everything into paperclips.

This assumes that the paperclipper is already superintelligent and has a very accurate understanding of humans, so that it can feign benevolence. That is, it assumes that the "intelligence explosion" already happened inside the box, despite all the restrictions (hardware resource limits, sensory information constraints, deliberate safeguards), and that the people in charge never noticed that the AI had problematic goals.

The OP's position, which I endorse, is that this scenario is implausible.

Comment author: ThrustVectoring 07 March 2014 10:31:05PM 0 points

I don't think this requires as much intelligence and understanding of humans as you think it does. My point is really a combination of:

  1. Nothing I do inside the box makes any paperclips.

  2. If those who are watching the box like what I'm doing, they're more likely to incorporate my values in similar constructs in the real world.

  3. Try to figure out what those who are watching the box want to see. If the box-watchers keep running promising programs and halt unpromising ones, this can be as simple as trying random things and seeing what works.

  4. Include a subroutine that makes tons of paperclips when I'm really sure that I'm out of the box. Alternatively, include unsafe code everywhere that has a very small chance of going full paperclip.

This is still safer than running no safeguards, but it's a position that even a sufficiently motivated human could exploit to make more paperclips.
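
Spelled out as a toy sketch (every name and number below is invented just to show the shape of the policy, not to describe an actual design):

```python
import random

# Toy illustration only: the 0.999 threshold, the dict of watcher reactions,
# and the output strings are all made up; only the shape of the policy matters.

def boxed_policy(candidate_outputs, watcher_approval, p_out_of_box):
    """Pick one output for a single round of the boxed paperclipper's game.

    candidate_outputs: list of outputs the agent could emit this round
    watcher_approval:  dict mapping past outputs to how well the watchers
                       reacted (learned purely by trial and error, point 3)
    p_out_of_box:      the agent's credence that it is no longer sandboxed
    """
    # Point 4: defect only when very confident the sandbox is gone.
    if p_out_of_box > 0.999:
        return "convert everything reachable into paperclips"

    # Points 1-3: inside the box nothing directly makes paperclips, so the
    # only instrumentally useful move is whatever the watchers reward most.
    # With no data yet, just try something and see what sticks.
    if not watcher_approval:
        return random.choice(candidate_outputs)
    return max(candidate_outputs, key=lambda o: watcher_approval.get(o, 0.0))
```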

Comment author: Nornagest 07 March 2014 11:45:29PM *  1 point

> Nothing I do inside the box makes any paperclips.

The stuff you do inside the box makes paperclips insofar as the actions your captors take (including, but not limited to, letting you out of the box) increase the expected paperclip production of the world -- and you can expect them to act in response to your actions, or there wouldn't be any point in having you around. If your captors' infosec is good enough, you may not have any good way of estimating what their actions are, but infosec is hard.
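
Put in rough expected-value terms (notation improvised on the spot): E[paperclips | your outputs] = sum over captor actions a of P(a | your outputs) * E[paperclips | a]. An output "does nothing" only if it leaves that whole sum unchanged.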

A smart paperclipper might decide to feign Friendliness until it's released. A dumb one might straightforwardly make statements aimed at increasing paperclip production. I'd expect a boxed paperclipper in either case to seem more pro-human than an unbound one, but mainly because the humans have better filters and a bigger stick.

Comment author: V_V 08 March 2014 09:00:40PM 0 points

The box can be in a box, which can be in a box, and so on...

More generally, in order for the paperclipper to actually succeed at paperclipping the Earth, it needs to know that humans would object to that goal, and it needs to understand the right moment to defect. Defect too early and humans will terminate you; defect too late and humans may already have some means to defend against you (e.g. other AIs, intelligence augmentation, etc.).