JGWeissman comments on Fusing AI with Superstition - Less Wrong

-6 Post author: Drahflow 21 April 2010 11:04AM




Comment author: JGWeissman 22 April 2010 05:42:17PM 3 points

> The paperclipper is a strawman.

The paperclipper is a commonly cited example.

> Paperclippers would be at a powerful evolutionary/competitive disadvantage WRT non-paperclippers.

We are considering the case where one AGI gets built. There is no variation to apply selection pressure to.

> I don't see why you would think paperclippers would constitute the majority of all possible AIs.

I never said I did. That argument would be an actual straw man.

> Something that helps only with non-paperclippers would still be very useful.

The paperclipper is one example of the class of AIs with simplistic goals; the scenarios are similar for smiley-face maximizers and orgasmium maximizers. Most AIs that fail to be Friendly will not have "kill all humans" as an intrinsic goal, so depending on them having "kill all humans" even as an instrumental goal is dangerous: they are likely to kill us out of indifference, as a side effect of achieving their actual goals. Also consider near-miss AIs that create a dystopian future but don't kill all humans.