
Baughn comments on Superintelligence 12: Malignant failure modes - Less Wrong Discussion

Post author: KatjaGrace, 02 December 2014 02:02AM



Comment author: Baughn, 02 December 2014 11:49:19AM, 6 points

Because if you don't, someone else will.

Comment author: satt, 08 December 2014 12:56:15AM, 0 points

Not obviously true. An alternative which immediately comes to my mind is a globally enforced mutual agreement to refrain from building superintelligences.

(Yes, that alternative is unrealistic if building superintelligences turns out to be too easy. But I'd want to see that premise argued for, not passed over in silence.)

Comment author: Sebastian_Hagen, 02 December 2014 09:32:36PM, 0 points

The more general problem is that we need a solution to multi-polar traps (of which superintelligent AI creation is one instance). The only viable solution I've seen proposed is creating a sufficiently powerful Singleton.
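The trap structure can be made concrete with a toy two-player game, using purely illustrative payoffs (all numbers here are hypothetical, not from the original comment): each actor chooses to build a superintelligence or refrain, mutual restraint is collectively best, yet building strictly dominates for each actor individually.

```python
# Toy model of the multi-polar trap as a one-shot two-player game.
# Payoffs are made-up illustrative numbers, not claims about real stakes.
# payoffs[(my_choice, their_choice)] = my payoff
payoffs = {
    ("refrain", "refrain"): 3,  # safe status quo for both
    ("refrain", "build"):   0,  # the other side gains a decisive edge
    ("build",   "refrain"): 4,  # I gain the edge (ignoring shared risk)
    ("build",   "build"):   1,  # race dynamics, high mutual risk
}

def best_response(their_choice):
    """Return my payoff-maximizing choice given the other actor's choice."""
    return max(("refrain", "build"),
               key=lambda mine: payoffs[(mine, their_choice)])

# "build" is the best response to either choice -- a dominant strategy --
# even though (refrain, refrain) beats (build, build) for everyone.
print(best_response("refrain"))  # build
print(best_response("build"))    # build
```

This is just the prisoner's dilemma payoff shape; a Singleton escapes it by removing the independent-choice structure entirely rather than by changing the payoffs.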

The only likely-viable ideas for Singletons I've seen proposed are superintelligent AIs, and a human group applying extensive thought-control technology to itself. The latter probably can't work unless it is applied to all of society: such a group lacks the inherent advantages an AI has, and so would remain vulnerable to being usurped by a clandestinely constructed AI. Applying it to all of society, OTOH, would most likely cause massive value loss.

Therefore I favor the former; not because I like the odds, but because the alternatives look worse.