DefectiveAlgorithm comments on Superintelligence 19: Post-transition formation of a singleton - Less Wrong

Post author: KatjaGrace 20 January 2015 02:00AM


Comment author: timeholmes 21 January 2015 09:52:40PM 0 points

I keep returning to one gnawing question that haunts the whole idea of Friendly AI: how do you program a machine to "care"? I can understand how a machine can appear to "want" something, favoring one outcome over another. But to talk about a machine "caring" ignores a crucial point about life: as clever as intelligence is, it cannot create care. We tend to love our own kid more than someone else's. So you could program a machine to prefer another machine in which it recognizes a piece of its own code. That may LOOK like care, but it's really just an outcome. How could you replicate, for example, the love a parent shows for a kid they didn't produce? What if that kid were humanity? So too with Coherent Extrapolated Volition: you can keep refining the resolution of an outcome, but I don't see how any mechanism can actually care about anything but an outcome.

While "want" and "prefer" may be useful terms, terms such as "care", "desire", and "value" constitute an enormous and dangerous anthropomorphizing. We cannot imagine outside our own frame, and this is one place where that gets us into real trouble. Once someone creates code that can recognize something truly metaphysical, I would be convinced that FAI is possible. Even whole brain emulation assumes both that our thoughts are nothing but code and that a brain with or without a body is the same thing. Am I missing something?

Comment author: DefectiveAlgorithm 22 January 2015 04:52:54AM 3 points

Leaving aside other matters, what does it matter if an FAI 'cares' in the sense that humans do so long as its actions bring about high utility from a human perspective?

Comment author: timeholmes 23 January 2015 01:21:46AM 3 points

Because what any human wants is a moving target. As soon as someone delivers exactly what you asked for, you will be disappointed unless you suddenly stop changing. Think of the dilemma of eating something you know you shouldn't: whatever you decide, as soon as anyone (AI or human) takes away your freedom to change your mind, you will likely rebel furiously. Human freedom is a value so central that no FAI of any description will be able to deliver it until we are no longer free agents.

Comment author: DefectiveAlgorithm 24 January 2015 12:15:14PM 3 points

What would an AI that 'cares' in the sense you spoke of be able to do to address this problem that a non-'caring' one wouldn't?