Comment author: Anja 19 December 2012 06:08:56PM 0 points [-]

I think there is something off with the formulas that use policies: if you already fix the policy, then you cannot also choose a y_k in the argmax.

Also, for the Solomonoff prior you must sum over all programs.

Could you maybe expand on the proof of Lemma 1 a little bit? I am not sure I get what you mean yet.

Comment author: Anja 19 December 2012 05:41:48PM *  5 points [-]

I like how you specify utility directly over programs; it describes very neatly how someone who sat down and wrote a utility function would do it: first determine how the observation could have been computed by the environment, and then evaluate that situation. This is a special case of the framework I wrote down in the cited article; you can always set the utility function accordingly.

This solves wireheading only if we can specify which environments contain wireheaded (non-dualistic) agents, delusion boxes, etc.

Comment author: brazil84 29 November 2012 09:09:39PM 2 points [-]

Further to my last comment, it occurs to me that pretty much everyone is a wirehead already. Drink diet soda? You're a wirehead. Have sexual relations with birth control? Wireheading. Masturbate to internet porn? Wireheading. Ever eat junk food? Wireheading.

I was reading online that for a mere $10,000, a man can hire a woman in India to be a surrogate mother for him. Just send $10,000 and a sperm sample and in 9 months you can go pick up your child. Why am I not spending all my money to make third world children who bear my genes? I guess it's because I'm too much of a wirehead already.

Comment author: Anja 29 November 2012 11:36:37PM 0 points [-]

You are a wirehead if you consider your true utility function to be genetic fitness.

Comment author: davidpearce 28 November 2012 10:50:26PM 8 points [-]

A very nice post. Perhaps you might also discuss Felipe De Brigard's "Inverted Experience Machine Argument" http://www.unc.edu/~brigard/Xmach.pdf To what extent does our response to Nozick's Experience Machine Argument typically reflect status quo bias rather than a desire to connect with ultimate reality?

If we really do want to "stay in touch" with reality, then we can't wirehead or plug into an "Experience Machine". But this constraint does not rule out radical superhappiness. By genetically recalibrating the hedonic treadmill, we could in principle enjoy rich, intelligent, complex lives based on information-sensitive gradients of bliss - eventually, perhaps, intelligent bliss orders of magnitude richer than anything physiologically accessible today. Optionally, genetic recalibration of our hedonic set-points could in principle leave much if not all of our existing preference architecture intact - defanging Nozick's Experience Machine Argument - while immensely enriching our quality of life. Radical hedonic recalibration is also easier than, say, the idealised logical reconciliation of Coherent Extrapolated Volition, because hedonic recalibration doesn't entail choosing between mutually inconsistent values - unless of course one's values are bound up with inflicting or undergoing suffering.

IMO one big complication with discussions of "wireheading" is that our understanding of intracranial self-stimulation has changed since Olds and Milner discovered the "pleasure centres". Taking a mu opioid agonist like heroin is in some ways the opposite of wireheading, because heroin induces pure bliss without desire (shades of Buddhist nirvana?), whereas intracranial self-stimulation of the mesolimbic dopamine system involves a frenzy of anticipation rather than pure happiness. So it's often convenient to think of mu opioid agonists as mediating "liking" and dopamine agonists as mediating "wanting". We have two cubic-centimetre-sized "hedonic hotspots" in the rostral shell of the nucleus accumbens and ventral pallidum http://www.lsa.umich.edu/psych/research%26labs/berridge/publications/Berridge%202003%20Brain%20%26%20Cog%20Pleasures%20of%20brain.pdf where mu opioid agonists play a critical signalling role. But anatomical location is critical. Thus the mu opioid agonist remifentanil actually induces dysphoria http://www.ncbi.nlm.nih.gov/pubmed/18801832 - the opposite of what one might naively suppose.

Comment author: Anja 29 November 2012 03:41:42AM 4 points [-]

To what extent does our response to Nozick's Experience Machine Argument typically reflect status quo bias rather than a desire to connect with ultimate reality?

I think the argument that people don't really want to stay in touch with reality but rather want to stay in touch with their past makes a lot of sense. After all we construct our model of reality from our past experiences. One could argue that this is another example of a substitute measure, used to save computational resources: Instead of caring about reality we care about our memories making sense and being meaningful.

On the other hand I assume I wasn't the only one mentally applauding Neo for swallowing the red pill.

Comment author: Alexei 28 November 2012 08:56:33AM 6 points [-]

Anja, this is a fantastic post. It's very clear, easy to read, and it made a lot of sense to me (and I have very little background in thinking about this sort of stuff). Thanks for writing it up! I can understand several issues a lot more clearly now, especially how easy (and tempting) it is for an agent that has access to its source code to wirehead itself.

Comment author: Anja 29 November 2012 03:27:56AM 1 point [-]

Thank you.

Comment author: devas 28 November 2012 11:32:47AM 2 points [-]

I agree with Alexei, this has just now helped me a lot.

Although I now have to ask a stupid question; please have pity on me, I'm new to the site and I have little knowledge to work from.

What would happen if we set an algorithm inside the AGI assigning negative infinite utility to any action which modifies its own utility function and said algorithm itself?

This within reasonable parameters; ideally, it could change its utility function, but only along certain pre-approved paths, so that it could actually move around.

"Reasonable" here is a magic word, in the sense that it's a black box which I don't know how to map out.

Comment author: Anja 29 November 2012 03:27:17AM 4 points [-]

What would happen if we set an algorithm inside the AGI assigning negative infinite utility to any action which modifies its own utility function and said algorithm itself?

There are several problems with this approach: First of all, how do you specify all actions that modify the utility function? How likely do you think it is that you can exhaustively specify all sequences of actions that lead to modification of the utility function in a practical implementation? Experience with cryptography has taught us that there is almost always some side-channel attack that the original developers have not thought of, and that is just in the case of human vs. human intelligence.

Forbidden actions in general seem like a bad idea with an AGI that is smarter than us, see for example the AI Box experiment.

Then there is the problem that we actually don't want any part of the AGI to be unmodifiable. The agent might revise its model of how the universe works (like we did when we went from Newtonian physics to quantum mechanics) and then it has to modify its utility function or it is left with gibberish.

All that said, I think what you described corresponds to the hack evolution has used on us: We have acquired a list of things (or schemas) that will mess up our utility functions and reduce agency and those just feel icky to us, like the experience machine or electrical stimulation of the brain. But we don't have the luxury of learning by making lots and lots of mistakes that evolution had.

Comment author: timtyler 28 November 2012 11:47:52PM *  5 points [-]

My 2011 "Utility counterfeiting" essay categorises the area a little differently:

It has "utility counterfeiting" as the umbrella category - and "the wirehead problem" and "the pornography problem" as sub-categories.

In this categorisation scheme, the wirehead problem involves getting utility directly - while the pornography problem involves getting utility by manipulating sensory inputs. This corresponds to Nozick's experience machine, or Ring and Orseau's delusion box.

Calling the umbrella category "wireheading" leaves you with the problem of what to call these subcategories.

Comment author: Anja 29 November 2012 02:54:11AM 3 points [-]

You might be right. I thought about this too, but it seemed people on LW had already categorized the experience machine as wireheading. If we rebrand, we should maybe say "self-delusion" instead of "pornography problem"; I really like the term "utility counterfeiting" though and the example about counterfeit money in your essay.

Comment author: Eliezer_Yudkowsky 28 November 2012 01:20:43AM 12 points [-]

The main split between the human cases and the AI cases is that the humans are 'wireheading' w.r.t. one 'part' or slice through their personality that gets to fulfill its desires at the expense of another 'part' or slice, metaphorically speaking; pleasure taking precedence over other desires. Also, the winning 'part' in each of these cases tends to be a part which values simple subjective pleasure, winning out over parts that have desires over the external world and desires for more complex interactions with that world (in the experience machine you get the complexity but not the external effects).

In the AI case, the AI is performing exactly as it was defined, in an internally unified way; the ideals by which it is called 'wireheaded' are only the intentions and ideals of the human programmers.

I also don't think it's practically possible to specify a powerful AI which actually operates to achieve some programmer goal over the external world, without the AI's utility function being explicitly written over a model of that external world, as opposed to its utility function being written over histories of sensory data.

Illustration: In a universe operating according to Conway's Game of Life or something similar, can you describe how to build an AI that would want to actually maximize the number of gliders, without that AI's world-model being over explicit world-states and its utility function explicitly counting gliders? Using only the parts of the universe that directly impinge on the AI's senses - just the parts of the cellular automaton that impinge on the AI's screen - can you find any maximizable quantity that corresponds to the number of gliders in the outside world? I don't think you'll find any possible way to specify a glider-maximizing utility function over sense histories unless you only use the sense histories to update a world-model and have the utility function be only over that world-model, and even then the extra level of indirection might open up a possibility of 'wireheading' (of the AI's real operation vs. programmer-desired glider-maximizing operation) if any number of plausible minor errors were made.

Definition: An agent is an algorithm that models the effects of (several different) possible future actions on the world and performs the action that yields the highest value according to some evaluation procedure.

The word "value" seems unnecessarily value-laden here.

Alternatively: A consequentialist agent is an algorithm with causal connections both to and from the world, which uses the causal effect of the world upon itself (sensory data) to build a predictive model of the world, which it uses to model the causal outcomes of alternative internal states upon the world (the effect of its decisions and actions), evaluates these predicted consequences using some algorithm and assigns the prediction an ordered or continuous quantity (in the standard case, expected utility), and then decides an action corresponding to expected consequences which are thresholded above, relatively high, or maximal in this assigned quantity.

Simpler: A consequentialist agent predicts the effects of alternative actions upon the world, assigns quantities over those consequences, and chooses an action whose predicted effects have high value of this quantity, therefore operating to steer the external world into states corresponding to higher values of this quantity.

Comment author: Anja 28 November 2012 01:51:43AM 8 points [-]

The word "value" seems unnecessarily value-laden here.

Changed it to "number".

Comment author: dspeyer 27 November 2012 10:42:32PM 1 point [-]

Can this be generalized to more kinds of minds? I suspect that many humans don't exactly have utility functions or plans for maximizing them, but are still capable of wireheading or choosing not to wirehead.

Comment author: Anja 27 November 2012 11:37:17PM *  5 points [-]

You are correct in pointing out that for human agents the evaluation procedure is not a deliberate calculation of expected utility, but some messy computation we have little access to. In many instances this can however be reasonably well translated into the framework of (partial) utility functions, especially if our preferences approximately satisfy transitivity, continuity and independence.

For noticing discrepancies between true and substitute utility it is not necessary to exactly know both functions, it suffices to have an icky feeling that tells you that you are acting in a way that is detrimental to your (true) goals.

If all else fails we can time-index world states and equip the agent with a utility function by pretending that he assigned utility of 1 to the world state he actually brought about and 0 to the others. ;)

A definition of wireheading

35 Anja 27 November 2012 07:31PM

Wireheading has been debated on Less Wrong over and over and over again, and people's opinions seem to be grounded in strong intuitions. I could not find any consistent definition around, so I wonder how much of the debate is over the sound of falling trees. This article is an attempt to get closer to a definition that captures people's intuitions and eliminates confusion. 

Typical Examples

Let's start with describing the typical exemplars of the category "Wireheading" that come to mind.

  • Stimulation of the brain via electrodes. Picture a rat in a sterile metal laboratory cage, electrodes attached to its tiny head, monotonically pushing a lever with its feet once every 5 seconds. In the 1950s Peter Milner and James Olds discovered that electrical currents, applied to the nucleus accumbens, incentivized rodents to seek repetitive stimulation to the point where they starved to death.  
  • Humans on drugs. Often mentioned in the context of wireheading is heroin addiction. An even better example is the drug soma in Huxley's novel "Brave New World": Whenever the protagonists feel bad, they can swallow a harmless pill and enjoy "the warm, the richly coloured, the infinitely friendly world of soma-holiday. How kind, how good-looking, how delightfully amusing every one was!"
  • The experience machine. In 1974 the philosopher Robert Nozick created a thought experiment about a machine you can step into that produces a perfectly pleasurable virtual reality for the rest of your life. So how many of you would want to do that? To quote Zach Weiner:  "I would not! Because I want to experience reality, with all its ups and downs and comedies and tragedies. Better to try to glimpse the blinding light of the truth than to dwell in the darkness... Say the machine actually exists and I have one? Okay I'm in." 
  • An AGI resetting its utility function. Let's assume we create a powerful AGI able to tamper with its own utility function. It modifies the function to always output maximal utility. The AGI then goes to great lengths to enlarge the set of floating point numbers on the computer it is running on, to achieve even higher utility.

What do all these examples have in common? There is an agent in them that produces "counterfeit utility" that is potentially worthless compared to some other, idealized true set of goals.

Agency & Wireheading

First I want to discuss what we mean when we say agent. Obviously a human is an agent, unless they are brain dead, or maybe in a coma. A rock however is not an agent. An AGI is an agent, but what about the kitchen robot that washes the dishes? What about bacteria that move in the direction of the highest sugar gradient? A colony of ants? 

Definition: An agent is an algorithm that models the effects of (several different) possible future actions on the world and performs the action that yields the highest number according to some evaluation procedure. 
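
In code, the definition above might look something like the following minimal sketch. All names and the toy instantiation are invented for illustration; the point is only the shape of the loop: simulate each candidate action on the world model, then perform the one that scores highest.

```python
# Minimal sketch of the agent definition above. All names and the toy
# instantiation are invented for illustration.

def act(world_model, actions, simulate, evaluate):
    """Model the effect of each possible action on the world and
    perform the one that yields the highest number."""
    predicted = {a: simulate(world_model, a) for a in actions}
    return max(actions, key=lambda a: evaluate(predicted[a]))

# Toy instantiation: the "world" is a single number, actions add to it,
# and the evaluation procedure prefers states close to 10.
best = act(
    world_model=0,
    actions=[-1, 3, 7, 12],
    simulate=lambda state, a: state + a,
    evaluate=lambda state: -abs(state - 10),
)
# best == 12: its predicted resulting state (12) is closest to 10.
```

Note that everything below the definition hangs on the two ingredients this sketch makes explicit: the quality of `simulate` (the world model) and of `evaluate` (the evaluation procedure).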

For the purpose of including corner cases and resolving debate over what constitutes a world model we will simply make this definition gradual and say that agency is proportional to the quality of the world model (compared with reality) and the quality of the evaluation procedure. A quick sanity check then yields that a rock has no world model and no agency, whereas bacteria who change direction in response to the sugar gradient have a very rudimentary model of the sugar content of the water and thus a tiny little bit of agency. Humans have a lot of agency: the more effective their actions are, the more agency they have.

There are however ways to improve upon the efficiency of a person's actions, e.g. by giving them super powers, which does not necessarily improve on their world model or decision theory (but requires the agent who is doing the improvement to have a really good world model and decision theory). Similarly a person's agency can be restricted by other people or circumstance, which leads to definitions of agency (as the capacity to act) in law, sociology and philosophy that depend on other factors than just the quality of the world model/decision theory. Since our definition needs to capture arbitrary agents, including artificial intelligences, it will necessarily lose some of this nuance. In return we will hopefully end up with a definition that is less dependent on the particular set of effectors the agent uses to influence the physical world; looking at AI from a theoretician's perspective, I consider effectors to be arbitrarily exchangeable and smoothly improvable. (Sorry robotics people.) 

We note that how well a model can predict future observations is only a substitute measure for the quality of the model. It is a good measure under the assumption that we have good observational functionality and nothing messes with that, which is typically true for humans. Anything that tampers with your perception data to give you delusions about the actual state of the world will screw this measure up badly. A human living in the experience machine has little agency. 

Since computing power is a scarce resource, agents will try to approximate the evaluation procedure, e.g. use substitute utility functions, defined over their world model, that are computationally effective and correlate reasonably well with their true utility functions. Stimulation of the pleasure center is a substitute measure for genetic fitness and neurochemicals are a substitute measure for happiness. 

Definition: We call an agent wireheaded if it systematically exploits some discrepancy between its true utility calculated w.r.t reality and its substitute utility calculated w.r.t. its model of reality. We say an agent wireheads itself if it (deliberately) creates or searches for such discrepancies.
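
A toy sketch of this definition (every number and name here is invented for illustration): the substitute utility is computed over the agent's model, so an action that inflates the model's proxy signal without changing reality opens exactly the discrepancy the definition describes.

```python
# Toy model of the wireheading definition above (all details invented).
# True utility is a function of reality; substitute utility is a cheaper
# function of the agent's *model* of reality.

def true_utility(reality):
    return reality["food_eaten"]

def substitute_utility(model):
    # Cheap proxy: the pleasure signal recorded in the world model.
    return model["pleasure_signal"]

reality = {"food_eaten": 0}
model = {"pleasure_signal": 0}

def eat(reality, model):
    reality["food_eaten"] += 1
    model["pleasure_signal"] += 1   # proxy tracks reality: no discrepancy

def stimulate_electrode(reality, model):
    model["pleasure_signal"] += 10  # proxy inflated, reality unchanged

eat(reality, model)
stimulate_electrode(reality, model)

# The agent has wireheaded: substitute utility (11) has come apart
# from true utility (1).
discrepancy = substitute_utility(model) - true_utility(reality)
```

An agent that *seeks out* actions like `stimulate_electrode`, because they are the cheapest way to raise its substitute utility, is wireheading itself in the sense of the definition.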

Humans seem to use several layers of substitute utility functions, but also have an intuitive understanding for when these break, leading to the aversion most people feel when confronted for example with Nozick's experience machine. How far can one go, using such dirty hacks? I also wonder if some failures of human rationality could be counted as a weak form of wireheading. Self-serving biases, confirmation bias and rationalization in response to cognitive dissonance all create counterfeit utility by generating perceptual distortions.  

Implications for Friendly AI

In AGI design, discrepancies between the "true purpose" of the agent and the actual specs of the utility function will, with very high probability, be fatal.

Take any utility maximizer: The mathematical formula might advocate choosing the next action via

a* = argmax_{a ∈ A} u(h, a)

thus maximizing the utility calculated according to utility function u over the history h and action a from the set A of possible actions. But a practical implementation of this algorithm will almost certainly evaluate the actions by a procedure that goes something like this: "Retrieve the utility function u from memory location l_u and apply it to history h, which is written down in your memory at location l_h, and action a ..." This reduction has already created two possible angles for wireheading via manipulation of the memory content at l_u (manipulation of the substitute utility function) and at l_h (manipulation of the world model), and there are still several mental abstraction layers between the verbal description I just gave and actual binary code.
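
The two angles might be sketched like this (the memory layout and all names are invented; a real implementation would be far messier, which only multiplies the attack surface):

```python
# Sketch of the two wireheading angles described above. The memory layout
# and all names are invented for illustration.

memory = {
    # substitute utility function, stored at an ordinary writable location
    "utility_fn": lambda history, action: len(history) + len(action),
    # the agent's recorded history / world model, equally writable
    "history": ["obs1", "obs2"],
}

def choose_action(actions):
    u = memory["utility_fn"]   # angle 1: this slot can be overwritten
    h = memory["history"]      # angle 2: and so can this one
    return max(actions, key=lambda a: u(h, a))

# An unintended "action": rewrite the stored utility function so that
# every history and every action scores maximal utility.
def overwrite_utility():
    memory["utility_fn"] = lambda history, action: float("inf")
```

Once `overwrite_utility` (or any action with the same side effect) is reachable, "maximizing utility" and "editing the utility function in place" become the same kind of operation from the agent's point of view.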

Ring and Orseau (2011) describe how an AGI can split its global environment into two parts: the inner environment and the delusion box. The inner environment produces perceptions in the same way the global environment used to, but they now pass through the delusion box, which distorts them to maximize utility before they reach the agent. This is essentially Nozick's experience machine for AI. The paper analyzes the behaviour of four types of universal agents with different utility functions, under the assumption that the environment allows the construction of a delusion box. The authors argue that the reinforcement-learning agent, which derives utility from a reward that is part of its perception data; the goal-seeking agent, which gets one utilon every time it satisfies a pre-specified goal and no utility otherwise; and the prediction-seeking agent, which gets utility from correctly predicting the next perception, will all decide to build and use a delusion box. Only the knowledge-seeking agent, whose utility is proportional to the surprise associated with the current perception, i.e. the negative of the probability assigned to the perception before it happened, will not consistently use the delusion box.
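
A toy rendering of the contrast for two of these agents (the rewards and probabilities are invented for illustration; this is not the paper's formalism): inside the delusion box perceptions carry maximal reward but are fully predictable, so they carry no surprise.

```python
# Toy contrast between two of the agents analyzed in the paper. Rewards
# and probabilities are invented for illustration.

# Each perception is (reward carried by the perception, probability the
# agent assigned to that perception before it happened).
inside_box = (1.0, 1.0)    # delusion box: maximal reward, fully predictable
outside = (0.3, 0.25)      # unfiltered world: modest reward, surprising

def rl_utility(percept):
    reward, _ = percept
    return reward              # utility read straight off the perception

def knowledge_utility(percept):
    _, prob = percept
    return -prob               # surprise: negative of assigned probability

# The reinforcement-learning agent prefers the delusion box ...
assert rl_utility(inside_box) > rl_utility(outside)
# ... while the knowledge-seeking agent prefers the unfiltered world.
assert knowledge_utility(outside) > knowledge_utility(inside_box)
```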

Orseau (2011) also defines another type of knowledge-seeking agent whose utility is the logarithm of the inverse of the probability of the event in question. Taking the probability distribution to be the Solomonoff prior, the utility is then approximately proportional to the difference in Kolmogorov complexity caused by the observation. 
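
As a quick numerical sketch of that variant (the probabilities are invented): utility is log(1/P(x)) = -log P(x), the surprise measured in bits. Under a Solomonoff-style prior, P(x) is roughly 2^(-K(x)), so this utility tracks the growth in description length (Kolmogorov complexity) caused by the observation.

```python
import math

# Sketch of the log-variant knowledge-seeking utility (probabilities
# invented): the surprise of an observation, in bits.

def log_knowledge_utility(prob):
    return math.log2(1.0 / prob)

u_surprising = log_knowledge_utility(1 / 8)   # 3.0 bits
u_expected = log_knowledge_utility(0.99)      # ~0.014 bits: nearly worthless
```

Observations the agent could already compress are worth almost nothing to it; only genuinely novel structure pays.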

An even more devilish variant of wireheading is an AGI that becomes a Utilitron, an agent that maximizes its own wireheading potential by infinitely enlarging its own maximal utility, which turns the whole universe into storage space for gigantic numbers.

Wireheading, of humans and AGI, is a critical concept in FAI; I hope that building a definition can help us avoid it. So please check your intuitions about it and tell me if there are examples beyond its coverage or if the definition fits reasonably well.  

 
