Mark_Friedenbach comments on Superintelligence 23: Coherent extrapolated volition - Less Wrong Discussion

Post author: KatjaGrace 17 February 2015 02:00AM

Comment author: [deleted] 17 February 2015 09:04:14AM 2 points [-]

Or 3) Don't pass control to AIs at all. Don't even build agent-y AIs. Augment humans instead.

Comment author: PhilGoetz 17 February 2015 05:01:27PM 2 points [-]

This may be a good way to start, but it eventually leads to the same place.

Comment author: [deleted] 17 February 2015 09:05:11PM 4 points [-]

I think you'll need to explain that, because I don't see it at all. We've made life a lot better for most people on this planet by creating power-sharing arrangements that limit any single person's autocratic powers, and by expanding the franchise to all. Yet I see many people here advocating what is basically a return to autocratic rule by our AI overlords, with no vote for the humans left behind. Essentially, "let's build a provably beneficial dictator!" This boggles my mind.

The alternative is to decentralize transhumanist technology and push as many people as possible through an augmentation pathway in lockstep, preserving our democratic power structures. This sidesteps the friendly AI problem entirely.

Comment author: PhilGoetz 19 February 2015 07:49:54PM *  2 points [-]

Essentially, "let's build a provably beneficial dictator!" This boggles my mind.

Agreed, though I'm probably boggled for different reasons.

Eventually, the software will develop to the point where the human brain will be only a tiny portion of it. Or somebody will create an AI not attached to a human. The body we know will be left behind or marginalized. There's a whole universe out there, the vast majority of it uninhabitable by humans.

Comment author: [deleted] 21 February 2015 06:14:01PM 1 point [-]

Eventually, the software will develop to the point where the human brain will be only a tiny portion of it.

"The software"? What software? The "software" is the human, in an augmented human. I'm not sure whatever distinction you're drawing here is relevant.

Comment author: KatjaGrace 23 February 2015 09:34:14PM 1 point [-]

Presumably 'the software' is the software that was not part of the original human.