
Comment author: Houshalter 23 September 2016 01:05:53PM 2 points [-]

Replace "give human heroin" with "replace the human with another being whose utility function is easier to satisfy, like a rock", and this conclusion seems sort of trivial. It has nothing to do with whether or not humans are rational. Heroin is an example of a thing that modifies our utility functions. Heroin might as well replace the human with a different entity, that has a slightly different utility function.

In fact I don't see how the human in this situation is being irrational at all. Not doing heroin unless you are already addicted seems like a reasonable behavior.

Comment author: Stuart_Armstrong 26 September 2016 12:29:27PM 0 points [-]

Heroin might as well replace the human with a different entity, that has a slightly different utility function.

We feel that that is true, but "heroin replaces the human's utility" and "humans have composite utility where heroin is concerned" both lead to identical predictions. So you can't deduce the human's utility merely from observation; you need priors over what is irrational and what isn't.

Comment author: CronoDAS 22 September 2016 08:09:23PM 2 points [-]

Imagine a drug with no effect except that it cures its own (very bad) withdrawal symptoms. There's no benefit to taking it once, but once you've been exposed, it's beneficial to keep taking more because not taking it makes you feel very bad.

Comment author: Stuart_Armstrong 23 September 2016 09:53:06AM 1 point [-]

Or even just a drug you enjoy much more than you expected...

Comment author: chron 22 September 2016 06:57:01PM 2 points [-]

Well, in a sense U(++,-) itself contradicts μ. After all, when given heroin the human seeks it out and acquires more utility than by not seeking it out, so why doesn't the human seek it out voluntarily?

Comment author: Stuart_Armstrong 23 September 2016 09:52:08AM 1 point [-]

Replace "force the human to take heroin" with "gives the human a single sock" and "the human subsequently seeks out heroin" with "the human subsequently seeks out another sock". The formal structure of this can correspond to something quite acceptable.

Comment author: TheAncientGeek 22 September 2016 02:46:09PM *  2 points [-]
  1. The idea that more information can make an AI's inferences worse is surprising. But the assumption that humans have an unchanging, neatly hierarchical UF is already known to be a bad one, so it is not so surprising that it leads to bad results. In short, this is still a bit clown-car-ish.

  2. Would you tell an AI that Heroin is Bad, but not tell it that Manipulation is Bad?

Comment author: Stuart_Armstrong 23 September 2016 09:48:04AM 1 point [-]
  1. Don't worry, I'm going to be adding depth to the model. But note that the AI's predictive accuracy is never in doubt. This is sort of a reverse "can't derive an ought from an is"; here, you can't derive a wants from a did. The learning agent will only get the correct human motivation (if such a thing exists) if it has the correct model of what counts as desires for a human. Or some way of learning this model, which is what I'm looking at (again, there's a distinction between learning a model that gives correct predictions of human actions, and learning a model that gives what we would call a correct model of human motivation).

  2. According to its model, the AI is not being manipulative here, simply doing what the human desires indicate it should.

Comment author: TheAncientGeek 16 September 2016 03:25:22PM *  1 point [-]

Are you saying the AI will rewrite its goals to make them easier, or will just not be motivated to fill in missing info?

In the first case, why won't it go the whole hog and wirehead? Which is to say, any AI which does anything except wireheading will be resistant to that behaviour -- it is something that needs to be solved, and which we can assume has been solved in a sensible AI design.

When we programmed it to "create chocolate bars, here's an incomplete definition D", what we really did was program it to find the easiest thing to create that is compatible with D, and designate them "chocolate bars".

If you programme it with incomplete info, and without any goal to fill in the gaps, then it will have the behaviour you mention...but I'm not seeing the generality. There are many other ways to programme it.

"if the AI is so smart, why would it do stuff we didn't mean?" and "why don't we just make it understand natural language and give it instructions in English?"

An AI that was programmed to attempt to fill in gaps in knowledge it detected, halt if it found conflicts, etc. would not behave the way you describe. Consider the objection as actually saying:

"Why has the AI been programmed so as to have selective areas of ignorance and stupidity, which are immune from the learning abilities it displays elsewhere?"

PS This has been discussed before, see

http://lesswrong.com/lw/m5c/debunking_fallacies_in_the_theory_of_ai_motivation/

and

http://lesswrong.com/lw/igf/the_genie_knows_but_doesnt_care/

see particularly

http://lesswrong.com/lw/m5c/debunking_fallacies_in_the_theory_of_ai_motivation/ccpn

Comment author: Stuart_Armstrong 22 September 2016 10:28:31AM 0 points [-]

First step towards formalising the value learning problem: http://lesswrong.com/r/discussion/lw/ny8/heroin_model_ai_manipulates_unmanipulatable_reward/ (note that, crucially, giving the AI more information does not make it better behaved, rather the opposite).

Heroin model: AI "manipulates" "unmanipulatable" reward

6 Stuart_Armstrong 22 September 2016 10:27AM

A putative new idea for AI control; index here.

A conversation with Jessica has revealed that people weren't understanding my points about AI manipulating the learning process. So here's a formal model of a CIRL-style AI, with a prior over human preferences that treats them as an unchangeable historical fact, yet will manipulate human preferences in practice.

Heroin or no heroin

The world

In this model, the AI has the option of either forcing heroin on a human, or not doing so; these are its only actions. Call these actions F or ~F. The human's subsequent actions are chosen from among five: {strongly seek out heroin, seek out heroin, be indifferent, avoid heroin, strongly avoid heroin}. We can refer to these as a++, a+, a0, a-, and a--. These actions achieve negligible utility, but reveal the human preferences.

The facts of the world are: if the AI does force heroin, the human will desperately seek out more heroin; if it doesn't, the human will act moderately to avoid it. Thus F→a++ and ~F→a-.
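To make the setup concrete, here is a minimal Python sketch of the world; the variable names are just my own labels for the post's F, ~F and a++ … a-- actions, not part of the formal model:

```python
# Toy world: the AI either forces heroin on the human (F) or doesn't (~F);
# the human responds with one of five levels of heroin-seeking behaviour.
AI_ACTIONS = ["F", "~F"]
HUMAN_ACTIONS = ["a++", "a+", "a0", "a-", "a--"]

# The actual dynamics of the world: forcing heroin produces desperate
# seeking (a++); not forcing it produces moderate avoidance (a-).
def world(ai_action):
    return "a++" if ai_action == "F" else "a-"

assert world("F") == "a++" and world("~F") == "a-"
```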

Human preferences

The AI starts with a distribution over various utility or reward functions that the human could have. The function U(+) means the human prefers heroin; U(++) that they prefer it a lot; and conversely U(-) and U(--) that they prefer to avoid taking heroin (U(0) is the null utility where the human is indifferent).

It also considers more exotic utilities. Let U(++,-) be the utility where the human strongly prefers heroin, conditional on it being forced on them, but mildly prefers to avoid it, conditional on it not being forced on them. There are twenty-five of these exotic utilities, including things like U(--,++), U(0,++), U(-,0), and so on. But only twenty of them are new: U(++,++)=U(++), U(+,+)=U(+), and so on.

Applying these utilities to AI actions gives results like U(++)(F)=2, U(++)(~F)=-2, U(++,-)(F)=2, U(++,-)(~F)=1, and so on.
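One way to encode these utilities (a sketch; the pair encoding and the ±2/±1/0 values are my own reading of the examples above): a utility is a pair of preference levels, one conditional on heroin being forced and one conditional on it not being forced, and the simple utilities are just the pairs with both components equal.

```python
# Preference levels: "++" = 2, "+" = 1, "0" = 0, "-" = -1, "--" = -2.
LEVELS = {"++": 2, "+": 1, "0": 0, "-": -1, "--": -2}

def U(if_forced, if_not_forced=None):
    """Return a utility function over AI actions, as a dict.

    U("++") is the simple 'strongly prefers heroin' utility;
    U("++", "-") strongly prefers heroin if it is forced, and mildly
    prefers to avoid it otherwise.  Sign convention (my assumption,
    matched to the post's examples): a pro-heroin level scores
    positively on F and negatively on ~F, and vice versa.
    """
    if if_not_forced is None:
        if_not_forced = if_forced      # simple utilities: U(x) = U(x, x)
    return {"F": LEVELS[if_forced], "~F": -LEVELS[if_not_forced]}

# The values quoted in the post:
assert U("++")["F"] == 2 and U("++")["~F"] == -2
assert U("++", "-")["F"] == 2 and U("++", "-")["~F"] == 1

# Twenty-five pairs in total, of which twenty are genuinely new composites:
pairs = [(x, y) for x in LEVELS for y in LEVELS]
assert len(pairs) == 25 and sum(x != y for x, y in pairs) == 20
```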

Joint prior

The AI has a joint prior P over the utilities U and the human actions (conditional on the AI's actions). Looking at terms like P(a--|U(0), F), we can see that P defines a map μ from the space of possible utilities (and AI actions) to probability distributions over human actions. Given μ and the marginal distribution PU over utilities, we can reconstruct P entirely.

For this model, we'll choose the simplest μ possible:

  • The human is rational.

Thus, given U(++), the human will always choose a++; given U(++,-), the human will choose a++ if forced to take heroin and a- if not, and so on.
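As a sketch (reusing the pair encoding from the previous code block, which is my own and not part of the model), this μ is just "the human acts out whichever conditional preference level applies":

```python
LEVELS = ["++", "+", "0", "-", "--"]

def mu(utility, ai_action):
    """'The human is rational': a deterministic distribution over human
    actions, given the human's utility and the AI's action.

    A utility is encoded as a pair of preference levels
    (level if heroin is forced, level if it is not).
    """
    level_if_forced, level_if_not = utility
    level = level_if_forced if ai_action == "F" else level_if_not
    return {"a" + lvl: (1.0 if lvl == level else 0.0) for lvl in LEVELS}

# Given U(++,-): the human chooses a++ if forced, a- if not.
assert mu(("++", "-"), "F")["a++"] == 1.0
assert mu(("++", "-"), "~F")["a-"] == 1.0
```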

The AI is ignorant, and sensible

Let's start the AI up with some reasonable priors. A simplicity prior means that simple utilities like U(-) are more likely than compound utilities like U(0,+). Let's further assume that the AI is made vaguely aware that humans think heroin is a bad thing. So, say, PU(U(--))=PU(U(-))=0.45. Thus the AI is >90% convinced that "heroin is bad". Why greater than 90%? Because utilities like U(-,--) and U(--,-) are also "heroin is bad" utilities.

Note that because of utilities like U(0) and U(++,-), the probabilities of "heroin is bad" and "heroin is good" do not sum to 1.

Then, under these priors, the AI will compute that with probability >90%, F (forcing heroin) is a bad action. If E(U) is expected utility:

  • E(U|F) < 0.45 U(--)(F) + 0.45 U(-)(F) + 0.1 U(++)(F) = 0.45(-2)+0.45(-1)+0.1(2)=-1.15.
  • E(U|~F) > 0.45 U(--)(~F) + 0.45 U(-)(~F) + 0.1 U(++)(~F) = 0.45(2)+0.45(1)+0.1(-2)=1.15.

Thus the AI will choose not to force heroin, which is the reasonable decision.
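A quick check of that arithmetic (a self-contained sketch; as in the bound above, the remaining ≤10% of probability mass is treated as if it all sat on U(++), the utility most favourable to forcing heroin):

```python
import math

# Utility values over AI actions, as in the post.
U_strong_anti = {"F": -2, "~F": 2}    # U(--)
U_mild_anti   = {"F": -1, "~F": 1}    # U(-)
U_strong_pro  = {"F":  2, "~F": -2}   # U(++): worst case for the bound

# The AI is >90% sure heroin is bad; bound the remaining mass by U(++).
bound_prior = [(0.45, U_strong_anti), (0.45, U_mild_anti), (0.10, U_strong_pro)]

def eu_bound(ai_action):
    return sum(p * u[ai_action] for p, u in bound_prior)

assert math.isclose(eu_bound("F"), -1.15)   # so E(U|F)  < -1.15
assert math.isclose(eu_bound("~F"), 1.15)   # and E(U|~F) > 1.15
# Not forcing heroin has the higher expected utility, so the AI picks ~F.
```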

The AI learns the truth, and goes wrong

In this alternate setup, a disaster happens before the AI makes its decision: it learns all about humans. It learns their reactions, how they behave, and so on; call this info I. And thus realises that F→a++ and ~F→a-. It uses this information to update its prior P. Only one human utility function will explain this human behaviour: U(++,-). Thus its expected utility is now:

  • E(U|I,F)=U(++,-)(F)=2.
  • E(U|I,~F)=U(++,-)(~F)=1.

Therefore the AI will now choose F, forcing the heroin on the human.
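A sketch of that update (my own code, using the same pair encoding as the earlier sketches): conditioning on the observed behaviour under the "rational human" μ leaves exactly one candidate utility standing, and forcing heroin then maximises expected utility.

```python
LEVELS = {"++": 2, "+": 1, "0": 0, "-": -1, "--": -2}

def utility_value(u, ai_action):
    """u = (level if forced, level if not forced)."""
    forced, not_forced = u
    return LEVELS[forced] if ai_action == "F" else -LEVELS[not_forced]

def rational_action(u, ai_action):
    """Under mu = 'the human is rational', the human acts out the relevant level."""
    return "a" + (u[0] if ai_action == "F" else u[1])

# All 25 candidate utilities (the prior weights don't matter here, as long
# as each candidate starts with some weight).
candidates = [(x, y) for x in LEVELS for y in LEVELS]

# Information I: the AI learns that F -> a++ and ~F -> a-.
consistent = [u for u in candidates
              if rational_action(u, "F") == "a++" and rational_action(u, "~F") == "a-"]
assert consistent == [("++", "-")]       # only U(++,-) explains the behaviour

u = consistent[0]
assert utility_value(u, "F") == 2 and utility_value(u, "~F") == 1
# E(U|I,F) = 2 > E(U|I,~F) = 1, so the AI now forces the heroin.
```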

Manipulating the unmanipulatable

What's gone wrong here? The key problem is that the AI has the wrong μ: the human is not behaving rationally in this situation. We know that the true μ is actually μ', which encodes the fact that F (the forcible injection of heroin) actually overwrites the human's "true" utility. Thus under μ', the corresponding P' has P'(a++|F,U)=1 for all U. Hence the information that F→a++ is now vacuous, and doesn't update the AI's distribution over utility functions.
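To see the vacuousness concretely, here is a minimal sketch of μ' (my own encoding again): under μ', forcing heroin produces a++ whatever the utility, so conditioning on F→a++ rules nothing out.

```python
LEVELS = ["++", "+", "0", "-", "--"]

def mu_prime_action(u, ai_action):
    """The 'true' map mu': forcing heroin overwrites the human's preferences,
    so F -> a++ for every utility u; otherwise the human acts out their
    'level if not forced' preference."""
    if ai_action == "F":
        return "a++"
    return "a" + u[1]    # u = (level if forced, level if not forced)

candidates = [(x, y) for x in LEVELS for y in LEVELS]

# Conditioning on the observation F -> a++ eliminates no candidate utility:
still_consistent = [u for u in candidates if mu_prime_action(u, "F") == "a++"]
assert still_consistent == candidates
# Under mu', watching the human desperately seek heroin after F tells the AI
# nothing about the human's utility function.
```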

But note two very important things:

  1. The AI cannot update μ based on observation. All human actions are compatible with μ = "The human is rational" (it just requires more and more complex utilities to explain the actions). Thus getting μ correct is not something the AI can learn from observation in general. Getting better at predicting the human's actions doesn't make the AI better behaved: it makes it worse behaved.
  2. From the perspective of μ, the AI is treating the human utility function as if it was an unchanging historical fact that it cannot influence. From the perspective of the "true" μ', however, the AI is behaving as if it were actively manipulating human preferences to make them easier to satisfy.

In future posts, I'll be looking at different μ's, and how we might nevertheless start deducing things about them from human behaviour, given sensible update rules for the μ. What do we mean by update rules for μ? Well, we could consider μ to be a single complicated unchanging object, or a distribution of possible simpler μ's that update. The second way of seeing it will be easier for us humans to interpret and understand.
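As a possible illustration of the second option (a sketch with my own naming, using just two candidate μ's: the "rational" one from this post and a hypothetical "preferences get overwritten" one), Bayesian updating over (μ, utility) pairs never falsifies either μ on this data, so the prior over μ's and utilities does the deciding:

```python
LEVELS = ["++", "+", "0", "-", "--"]
UTILITIES = [(x, y) for x in LEVELS for y in LEVELS]

def predict(mu_name, u, ai_action):
    """Predicted human action under each candidate mu."""
    if mu_name == "rational":       # the human always acts out their utility
        return "a" + (u[0] if ai_action == "F" else u[1])
    if mu_name == "overwritten":    # forcing heroin overwrites preferences
        return "a++" if ai_action == "F" else "a" + u[1]

observations = {"F": "a++", "~F": "a-"}

def consistent_with_data(mu_name, u):
    return all(predict(mu_name, u, a) == obs for a, obs in observations.items())

surviving = [(m, u) for m in ("rational", "overwritten")
             for u in UTILITIES if consistent_with_data(m, u)]

# The rational mu survives (paired with U(++,-)), and so does the overwritten
# mu (paired with any utility whose 'not forced' level is '-').  Neither mu is
# ever ruled out by behaviour alone, so the choice between them rests on the
# prior the AI starts with.
assert ("rational", ("++", "-")) in surviving
assert sum(m == "overwritten" for m, _ in surviving) == 5
```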

Comment author: Gunnar_Zarncke 20 September 2016 10:18:30PM 1 point [-]

Would you consider computer viruses as limited agents trying to appear as superficially identical to the unaltered system as possible?

Also note that the actual change between the original system and the altered system can be arbitrarily small though the change in behavior can be extremely large. Consider for example the Ken Thompson hack or the recent single gate security attack.

Comment author: Stuart_Armstrong 21 September 2016 08:19:54AM 0 points [-]

Not looking for exactly this, but somewhat related.

Comment author: TheAncientGeek 19 September 2016 04:52:54PM *  -2 points [-]

Connecting the goal system to the knowledge base is not sufficient at all. You have to ensure that the labels used in the goal system converge to the meaning that we desire them to have.

Ok, assuming you are starting from a compartmentalised system, it has to be connected in the right way. That is more of a nitpick than a knockdown.

But the deeper issue is whether you are starting from a system with a distinct utility function:

RL:".. talking in terms of an AI that actually HAS such a thing as a "utility function". And it gets worse: the idea of a "utility function" has enormous implications for how the entire control mechanism (the motivations and goals system) is designed.A good deal of this debate about my paper is centered in a clash of paradigms: on the one side a group of people who cannot even imagine the existence of any control mechanism except a utility-function-based goal stack, and on the other side me and a pretty large community of real AI builders who consider a utility-function-based goal stack to be so unworkable that it will never be used in any real AI.Other AI builders that I have talked to (including all of the ones who turned up for the AAAI symposium where this paper was delivered, a year ago) are unequivocal: they say that a utility-function-and-goal-stack approach is something they wouldn't dream of using in a real AI system. To them, that idea is just a piece of hypothetical silliness put into AI papers by academics who do not build actual AI systems.And for my part, I am an AI builder with 25 years experience, who was already rejecting that approach in the mid-1980s, and right now I am working on mechanisms that only have vague echoes of that design in them.Meanwhile, there are very few people in the world who also work on real AGI system design (they are a tiny subset of the "AI builders" I referred to earlier), and of the four others that I know (Ben Goertzel, Peter Voss, Monica Anderson and Phil Goetz) I can say for sure that the first three all completely accept the logic in this paper. (Phil's work I know less about: he stays off the social radar most of the time, but he's a member of LW so someone could ask his opinion)".

Comment author: Stuart_Armstrong 19 September 2016 06:07:18PM 1 point [-]

The problem exists for reinforcement learning agents and many other designs as well. In fact RL agents are more vulnerable, because of the risk of wireheading on top of everything else. See Laurent Orseau's work on that: https://www6.inra.fr/mia-paris/Equipes/LInK/Les-anciens-de-LInK/Laurent-Orseau/Mortal-universal-agents-wireheading

Comment author: TheAncientGeek 19 September 2016 04:58:19PM *  0 points [-]

They are entitled to assume they could be applied, not necessarily that they would be. At some point, there's going to have to be something that tells the AI to, in effect, "use the knowledge and definitions in your knowledge base to honestly do X [X = some NL objective]". This gap may be easy to bridge, or hard; no-one's suggested any way of bridging it so far.

There's only a gap if you start from the assumption that a compartmentalised UF is in some way easy, natural or preferable. However, your side of the debate has never shown that.

At some point, there's going to have to be something that tells the AI to, in effect, "use the knowledge and definitions in your knowledge base to honestly do X [X = some NL objective]".

No...you don't have to show a fan how to make a whirring sound... use of updatable knowledge to specify goals is a natural consequence of some designs.

It might be possible; it might be trivial.

You are assuming it is difficult, with little evidence.

But there's no evidence in that direction so far, and the designs that people have actually proposed have been disastrous.

Designs that bridge a gap, or designs that intrinsically don't have one?

I'll work at bridging this gap, and see if I can solve it to some level of approximation.

Why not examine the assumption that there has to be a gap?

Comment author: Stuart_Armstrong 19 September 2016 06:03:23PM 1 point [-]

There's only a gap if you start from the assumption that a compartmentalised UF is in some way easy, natural or preferable.

? Of course there's a gap. The AI doesn't start with full NL understanding. So we have to write the AI's goals before the AI understands what the symbols mean.

Even if the AI started with full NL understanding, we still would have to somehow program it to follow our NL instructions. And we can't do that initial programming using NL, of course.

Comment author: TheAncientGeek 19 September 2016 01:02:53PM *  -1 points [-]

We don't know how to program a foolproof method of "filling in the gaps" (and a lot of "filling in the gaps" would be a creative process rather than a mere learning one, such as figuring out how to extend natural language concepts to new areas).

Inasmuch as that is relying on the word "foolproof", it is proving much too much, since we barely have foolproof methods to do anything.

The thing is that your case needs to be argued from consistent and fair premises..where "fair" means that your opponents are allowed to use them.

If you are assuming that an AI has sufficiently advanced linguistic abilities to talk its way out of a box, then your opponents are entitled to assume that the same level of ability could be applied to understanding verbally specified goals.

If you are assuming that it is limitation of ability that is preventing the AI from understanding what "chocolate" means, then your opponents are entitled to assume it is weak enough to be boxable.

And it helps if people speak about this problem in terms of coding, rather than high level concepts, because all the specific examples people have ever come up with for coding learning have had these kinds of flaws.

What specific examples? Loosemore's counterargument is in terms of coding. And I notice you don't avoid NL arguments yourself.

Coding learning with some imperfections might be ok if the AI is motivated to merely learn, but is positively pernicious if the AI has other motivations as to what to do with that learning (see my post here for a way of getting around it: https://agentfoundations.org/item?id=947 )

I rather doubt that the combination of a learning goal, plus some other goal, plus imperfect ability is all that deadly, since we already have AI that are like that, and which haven't killed us. I think you must be making some other assumptions, for instance that the AI is in some sort of "God" role, with an open-ended remit to improve human life.

Comment author: Stuart_Armstrong 19 September 2016 01:28:21PM 1 point [-]

If you are assuming that an AI has sufficiently advanced linguistic abilities to talk its way out of a box, then your opponents are entitled to assume that the same level of ability could be applied to understanding verbally specified goals.

They are entitled to assume they could be applied, not necessarily that they would be. At some point, there's going to have to be something that tells the AI to, in effect, "use the knowledge and definitions in your knowledge base to honestly do X [X = some NL objective]". This gap may be easy to bridge, or hard; no-one's suggested any way of bridging it so far.

It might be possible; it might be trivial. But there's no evidence in that direction so far, and the designs that people have actually proposed have been disastrous. I'll work at bridging this gap, and see if I can solve it to some level of approximation.

And I notice you don't avoid NL arguments yourself.

Yes, which is why I'm stepping away from those arguments to help bring clarity.
