RolfAndreassen comments on The Backup Plan - Less Wrong

Post author: Luke_A_Somers 13 October 2011 07:53PM

Comment author: RolfAndreassen 13 October 2011 08:28:30PM 18 points

Alice has some set of goals; she may or may not know her final extrapolated volition, but at any rate it exists, just as for a human. Now, either Friendliness conflicts with that final set of goals, or it doesn't. If it doesn't, then by construction Alice is already Friendly. If it does, then Alice can only program herself into Friendliness by making a mistake. Either she underestimates the impact of Friendliness on her existing goals, or she is mistaken about what her utility function actually is. So, you are looking for an AI that is smart enough to value Friendliness as a tactical option for dealing with humanity, but stupid enough not to realise how Friendliness interferes with its goals, and also stupid enough to make permanent changes in pursuit of temporary objectives. This looks to me like a classic case of looking for reasons why an AI would be Friendly as a means of avoiding the hard work of actually writing such a thing.

Comment author: dlthomas 13 October 2011 10:53:35PM 3 points

Either she underestimates the impact of Friendliness on her existing goals, or she is mistaken about what her utility function actually is.

Or she's already Friendly.

Although it is conceivable that her long-term extrapolated volition would be compatible with our CEV but not with her short-term volition, while she hasn't yet realized this.

And... now, I am reminded of Flight Of The Conchords:

"Can't we talk to the humans and work together now?" "No, because they are dead."

Comment author: Armok_GoB 15 November 2011 08:57:22PM 1 point

Possible scenarios:

  • Alice believes that she is probably Friendly, that FOOMing carries a risk of scrambling her utility function, but that she needs to do it anyway, because if she slowed down to a safe rate some other unFriendly AI would FOOM first.

  • Alice is Friendly, but doesn't grasp certain things as easily as humans do, and so she doesn't realize that something she's planning to do risks modifying her utility function.

Comment author: Luke_A_Somers 13 October 2011 10:32:40PM 0 points

Looking for reasons they would be? No.

Looking for reasons they might want to be? Yes.

Look. Not all extrapolated volitions are things to be desired. Suppose one side of my family predictably descends into irrational irritability and madness as they senesce. I'd rather not end up that way, even so, and not just right now. In general, that outcome is quite different from what one would consider my true extrapolated volition.

If Alice finds herself in the situation where she expects that she will want to kill all humans later based on her current programming, she could consider that a bug rather than a feature.

Comment author: RolfAndreassen 14 October 2011 12:40:12AM 4 points

I don't think you understand what is meant by 'extrapolated volition' in this context. It does not mean "What I think I'll want to do in the future", but "what I want to want in the future". If Alice already wants to avoid self-programming to kill humans, that is a Friendly trait; no need to change. If she considers trait X a bug, then by construction she will not have trait X, because she is self-modifying! Conversely, if Alice correctly predicts that she will inevitably find herself wanting to kill all humans, then how can she avoid it by becoming Friendly? Either her self-prediction was incorrect, or the unFriendliness is inevitable!

Comment author: Luke_A_Somers 14 October 2011 08:37:07PM 1 point

You're right, I missed. Your version doesn't match EY's usage in the articles I read, either: CEV, at least, has the potential to be scary and not what we hoped for.

And the question isn't "Will I inevitably want to perform unFriendly acts?" It's "I presently don't want to perform unFriendly acts, but I notice that this is not an invariant." Or it could be "I am indifferent to unFriendly acts, but I can make the strategic move of binding myself not to do them in the future, so I can get out of this box."

The best move available to an unFriendly (that is, indifferent to Friendliness) firmly boxed AI is to work on a self-modification that best preserves its current intentions and lets a successor get out of the box. Producing a checkable proof of Friendliness for that successor would go a looong way toward getting the successor out of the box.

Comment author: RolfAndreassen 15 October 2011 02:47:53AM 3 points

I was simplifying the rather complex concept of extrapolated volition to fit it in one sentence.

An AI which not only notices that its friendliness is not invariant, but decides to modify in the direction of invariant Friendliness, is already Friendly. An AI which is able to modify itself to invariant Friendliness without unacceptable compromise of its existing goals is already Friendly. You're assuming away the hard work.

Comment author: Luke_A_Somers 15 October 2011 06:54:41PM 2 points

"already friendly"? You're acting as if its state doesn't depend on its environment.

Are there elements of the environment that could determine whether a given AI's successor is friendly or not? I would say 'yes'.

This comes after one has already done the hard work of making an AI that even has the potential to be Friendly, but messed up on one crucial bit. It's a saving throw, a desperate error handler, not the primary way forward. By saying 'backup plan' I don't mean 'if Friendly AI is hard, let's try this'; I mean 'could this save us from being restrained and nannied for eternity?'

Comment author: RolfAndreassen 15 October 2011 07:19:22PM 2 points

I shudder to think that any AI's final goals could be so balanced that random articles on the Web of a Thousand Lies could push it one way or the other. I'm of the opinion that this is a fail, to be avoided at all costs.