AlexMennen comments on Proper value learning through indifference - Less Wrong

16 Post author: Stuart_Armstrong 19 June 2014 09:39AM

Comment author: AlexMennen 19 June 2014 09:59:14PM 4 points [-]

Problem: not only will such an AI not resist its utility function being altered by you, it will also not resist its utility function being altered by a saboteur or by accident. I don't think I'd want to call this proposal a form of value learning, since it does not involve the AI trying to learn values, and instead just makes the AI hold still while values are force-fed to it.

Comment author: Stuart_Armstrong 20 June 2014 09:38:23AM *  3 points [-]

The AI will not resist its values being changed in the particular way that is specified to trigger a U transition. It will resist other changes of value.

Comment author: AlexMennen 20 June 2014 09:09:29PM 1 point [-]

That's true; it will resist changes to its "outer" utility function U. But it won't resist changes to its "inner" utility function v, which still leaves a lot of flexibility, even though that isn't its true utility function in the VNM sense. That restriction isn't strong enough to avoid the problem I pointed out above.

Comment author: Stuart_Armstrong 21 June 2014 05:41:23AM 2 points [-]

I will only allow v to change if that change triggers the "U adaptation" (the adding and subtracting of constants). You have to specify what processes count as U adaptations (certain types of conversations with certain people, e.g.) and what don't.
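The "adding and subtracting of constants" can be illustrated with a minimal sketch. This is my own toy formalisation, not code from the post: the class and method names (`IndifferentAgent`, `sanctioned_update`) are invented, and the correction term here is a simple point-estimate difference rather than the full expected-utility construction.

```python
class IndifferentAgent:
    """Toy agent whose outer utility U = v + correction, where the
    correction constant is adjusted at each sanctioned update so that
    the agent is indifferent to the update happening."""

    def __init__(self, inner_utility):
        self.v = inner_utility      # current "inner" utility v
        self.correction = 0.0       # running sum of compensating constants

    def U(self, outcome):
        # The "outer" utility the agent actually maximises.
        return self.v(outcome) + self.correction

    def sanctioned_update(self, new_v, expected_outcome):
        # Only changes arriving through this channel count as a
        # "U adaptation"; any other change to v is a change to U,
        # which a U maximiser will resist.
        # Add the constant that leaves U unchanged on the expected
        # outcome, so the agent gains and loses nothing from the swap.
        self.correction += self.v(expected_outcome) - new_v(expected_outcome)
        self.v = new_v
```

On the expected outcome, U before and after the update agree exactly, so the agent has no incentive to block or force the transition; elsewhere the new v takes over.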

Comment author: AlexMennen 21 June 2014 04:05:42PM 1 point [-]

Oh, I see. So the AI simply losing the memory that v was stored in and replacing it with random noise shouldn't count as something it will be indifferent about? How would you formalize this such that arbitrary changes to v don't trigger the indifference?

Comment author: Stuart_Armstrong 22 June 2014 08:47:51PM 1 point [-]

By specifying what counts as an allowed change in U, and making the agent into a U maximiser. Then, just as standard maximisers defend their utilities, it should defend U (including the update, and only that update).

Comment author: [deleted] 23 June 2014 01:43:45PM 1 point [-]

You can't always solve human problems with AI design.

Comment author: AlexMennen 23 June 2014 09:38:23PM 1 point [-]

I'm not sure what you mean. The problem I was complaining about is an AI design problem, not a human problem.

Comment author: [deleted] 24 June 2014 05:44:19AM 2 points [-]

No, I would say that if you start entering false utility data into the AI and it believes you, because after all it was programmed to be indifferent to new utility data, that's your problem.

Comment author: AlexMennen 24 June 2014 06:00:48AM 2 points [-]

If the AI's utility function changes randomly for no apparent reason because the AI has literally zero incentive to make sure that doesn't happen, then you have an AI design problem.

Comment author: [deleted] 24 June 2014 08:10:55AM 2 points [-]

It didn't change for no reason. It changed because someone fed new data into the AI's utility-learning algorithm which made it change. Don't give people root access if you don't want them using it!

Comment author: AlexMennen 24 June 2014 05:06:59PM *  1 point [-]

Being changed by an attacker is only one of the scenarios I was suggesting. And even then, presumably you would want the AI to help prevent them from hacking its utility function if they aren't supposed to have root access, but it won't.

Anyway, that problem is just a little bit stupid. But you can also get really stupid problems, like the AI wants more memory, so it replaces its utility function with something more compressible so that it can scavenge from the memory where its utility function was stored.