Caledonian2 comments on Guardians of the Gene Pool - Less Wrong

Post author: Eliezer_Yudkowsky 16 December 2007 08:08PM

You are viewing a single comment's thread.

Comment author: Caledonian2 18 December 2007 01:53:30PM -1 points

Caledonian - I'd say that one of the key concepts in my current understanding of the Singularity is that it's the polar opposite of a hard-wired goal. Surely the very idea is that we don't know what happens inside/beyond a singularity, hence the name?

The whole point of attempting a "Friendly AI" is that its proponents believe it IS possible to exclude entire branches of possibility from an AI's courses of action, so that a superhuman intelligence can be made safe: not merely 'friendly' in the human sense, but favorable to human interests and not 'evil'.

Of course, they cannot provide an objective and rigorous description of what "being in human interests" actually entails, nor can they explain clearly what 'evil' is. But they know it when they see it, apparently. And since many of them seem to believe that 'values' are arbitrary, they've never bothered subjecting what they value to analysis.

Perhaps it has never occurred to them that an entity's being utterly good could be a consequence of its being utterly unsafe, or rather that being utterly good might make it utterly unsafe. And perhaps it has never occurred to them that a superhuman general intelligence might analyze their values and find them lacking. That would explain a lot.

Comment author: pnrjulius 09 June 2012 02:40:08AM -1 points

Why would being good make you unsafe?

Comment author: Ben_Welchner 09 June 2012 03:16:02AM 0 points

If you said that in hopes of him responding, Caledonian hasn't posted anything since 2009.