
PhilGoetz comments on Superintelligence 23: Coherent extrapolated volition - Less Wrong Discussion

Post author: KatjaGrace, 17 February 2015 02:00AM




Comment author: PhilGoetz, 19 February 2015 07:49:54PM

> Essentially, "let's build a provably beneficial dictator!" This boggles my mind.

Agreed, though I'm probably boggled for different reasons.

Eventually, the software will develop to the point where the human brain will be only a tiny portion of it. Or somebody will create an AI not attached to a human. The body we know will be left behind or marginalized. There's a whole universe out there, the vast majority of it uninhabitable by humans.

Comment author: [deleted], 21 February 2015 06:14:01PM

> Eventually, the software will develop to the point where the human brain will be only a tiny portion of it.

"The software"? What software? In an augmented human, the "software" is the human. I'm not sure the distinction you're drawing here is relevant.

Comment author: KatjaGrace, 23 February 2015 09:34:14PM

Presumably 'the software' is the software that was not part of the original human.