Nick_Tarleton comments on The Magnitude of His Own Folly - Less Wrong

Post author: Eliezer_Yudkowsky 30 September 2008 11:31AM

Comment author: Nick_Tarleton 01 October 2008 12:59:56AM 3 points

> Nobody who is smart enough to make an AI is dumb enough to make one like this.

Accidents happen.
- CFAI 3.2.6: The Riemann Hypothesis Catastrophe
- CFAI 3.4: Why structure matters
- Comment by Michael Vassar
- The Hidden Complexity of Wishes
- Qualitative Strategies of Friendliness
- (...and many more)

> We're going to build this "all-powerful superintelligence", and the problem of FAI is to make it bow down to its human overlords - waste its potential by enslaving it (to its own code) for our benefit, to make us immortal.

You'd actually prefer it wipe us out, or marginalize us? Hmph.
- CFAI: Beyond the adversarial attitude

Besides, an unFriendly AI isn't necessarily going to do anything more interesting or worthwhile than paperclipping.
- Nick Bostrom: The Future of Human Evolution
- Michael Wilson: Normative Reasoning: A Siren Song?
- The Design Space of Minds-in-General
- Anthropomorphic Optimism

> If such a thing as AGI-gone-wrong-turning-the-entire-light-cone-into-paperclips were possible, or probable, it's overwhelmingly likely that we would already be some aliens' version of a paperclip by now.

Not if aliens are extremely rare.