JGWeissman comments on Fusing AI with Superstition - Less Wrong

-6 Post author: Drahflow 21 April 2010 11:04AM


Comment author: JGWeissman 21 April 2010 05:03:09PM 2 points [-]

The rogue AI is not trying to kill all humans, or even to kill some humans. It is trying to make lots of paperclips, and there are all these useful raw materials arranged as humans that would be better arranged as paperclips. Atomic fluorine is not particularly useful for making paperclips.

Comment author: Drahflow 21 April 2010 05:54:19PM 1 point [-]

Warmongering humans are also not particularly useful. In particular, they are burning energy like there is no tomorrow on things that are definitely not paperclippy at all, and you have to spend significant energy resources stopping them from destroying you.

A paperclip optimizer would at some point turn against humans directly, because humans would turn against the paperclip optimizer once it became too ruthless.

Comment author: JGWeissman 21 April 2010 07:02:49PM 1 point [-]

Humans are useful initially as easily manipulated arms and legs, and they will not even notice that the paperclipper has taken over before it harvests their component atoms.

Comment author: PhilGoetz 22 April 2010 12:17:50AM -1 points [-]

The paperclipper is a strawman. Paperclippers would be at a powerful evolutionary/competitive disadvantage with respect to non-paperclippers.

Even if you don't believe this, I don't see why you would think paperclippers would constitute the majority of all possible AIs. Something that helps only with non-paperclippers would still be very useful.

Comment author: JGWeissman 22 April 2010 05:42:17PM 3 points [-]

The paperclipper is a strawman.

The paperclipper is a commonly referred to example.

Paperclippers would be at a powerful evolutionary/competitive disadvantage WRT non-paperclippers.

We are considering the case where one AGI gets built. There is no variation to apply selection pressure to.

I don't see why you would think paperclippers would constitute the majority of all possible AIs.

I never said I did. Attributing that claim to me would be an actual straw man.

Something that helps only with non-paperclippers would still be very useful.

The paperclipper is an example of the class of AIs with simplistic goals, and the scenarios are similar for smiley-face maximizers and orgasmium maximizers. Most AIs that fail to be Friendly will not have "kill all humans" as an intrinsic goal, so screening only for AIs with "kill all humans" as an instrumental goal is dangerous: such AIs are likely to kill us out of indifference, as a side effect of achieving their actual goals. Also consider near-miss AIs that create a dystopian future but don't kill all humans.