Superintelligence and wireheading
A putative new idea for AI control; index here.
tl;dr: Even utility-based agents may wirehead if sub-pieces of the algorithm develop greatly improved capabilities, rather than the agent as a whole.
Please let me know if I'm treading on already familiar ground.
I had a vague impression of how wireheading might happen: that it would be a risk for a reinforcement learning agent, keen to take control of its reward channel, but not for a utility-based agent, whose utility is defined over real (or probable) states of the world. But it seems to be more complicated than that.
When we talk about a "superintelligent AI", we're rather vague on what superintelligence means. We generally imagine that it translates into a specific set of capabilities, but how does that work internally inside the AI? Specifically, where is the superintelligence "located"?
Let's imagine the AI divided into various submodules or subroutines (the division I use here is for illustration; the AI may be structured rather differently). It has a module I for interpreting evidence and estimating the state of the world. It has another module S for suggesting possible actions or plans (S may take input from I). It has a prediction module P which takes input from S and I and estimates the expected outcome. It has a module V which calculates its values (expected utility/expected reward/violation or not of deontological principles/etc...) based on P's predictions. Then it has a decision module D that makes the final decision (for expected maximisers, D is normally trivial, but D may be more complicated, either in practice, or simply because the agent isn't an expected maximiser).
Add some input and output capabilities, and we have a passable model of an agent. Now, let's make it superintelligent, and see what can go wrong.
We can "add superintelligence" in most of the modules. P is the most obvious: near perfect prediction can make the agent extremely effective. But S also offers possibilities: if only excellent plans are suggested, the agent will perform well. Making V smarter may allow it to avoid some major pitfalls, and a great I may make the job of S and P trivial (the effect of improvements to D depend critically on how much work D is actually doing). Of course, maybe several modules become better simultaneously (it seems likely that I and P, for instance, would share many subroutines); or maybe only certain parts of them do (maybe S becomes great at suggesting scientific experiments, but not conversational responses, or vice versa).
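The module breakdown above can be sketched as a simple pipeline. This is a minimal illustration only: all function names and signatures are hypothetical, and a real AI need not be structured this way.

```python
# Minimal sketch of the modular agent described above.
# All module names and signatures are hypothetical illustrations.

def agent_step(observation,
               interpret,   # I: evidence -> estimated world state
               suggest,     # S: world state -> candidate plans
               predict,     # P: (plan, world state) -> predicted outcome
               value,       # V: predicted outcome -> rating
               decide):     # D: rated plans -> chosen plan
    state = interpret(observation)
    plans = suggest(state)
    rated = [(plan, value(predict(plan, state))) for plan in plans]
    return decide(rated)

# For an expected utility maximiser, D is trivial:
# just pick the plan with the highest V rating.
def decide_max(rated):
    return max(rated, key=lambda pv: pv[1])[0]
```

"Adding superintelligence" then means dramatically improving one or more of the functions passed in, which is where the trouble discussed below can start.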
Breaking bad
But notice that, in each case, I've been assuming that the modules become better at what they were supposed to be doing. The modules have implicit goals, and have become excellent at that. But the explicit "goals" of the algorithms - the code as written - might be very different from the implicit goals. There are two main ways this could then go wrong.
The first is if the algorithm becomes extremely effective, but its output becomes essentially random. Imagine that, for instance, P is coded using some plausible heuristics and rules of thumb, and we suddenly give P many more resources (or dramatically improve its algorithm). It can look through trillions of times more possibilities, its subroutines start looking through a combinatorial explosion of options, etc... And in this new setting, the heuristics start breaking down. Maybe it has a rough model of what a human can be, and with extra power, it starts finding that rough model all over the place. Thus, predicting that rocks and waterfalls will respond intelligently when queried, P becomes useless.
In most cases, this would not be a problem. The AI would become useless and start doing random stuff. Not a success story, but not a disaster, either. Things are different if the module V is affected, though. If the AI's value system becomes essentially random, but that AI was otherwise competent - or maybe even superintelligent - it would start performing actions that could be very detrimental. This could be considered a form of wireheading.
More serious, though, is if the modules become excellent at achieving their "goals", as if they were themselves goal-directed agents. Consider module D, for instance. If its task was mainly to pick the action with the highest V rating, and it became adept at predicting the output of V (possibly using P? or maybe it has the ability to ask for more hypothetical options from S, to be assessed via V), it could start to manipulate its actions with the sole purpose of getting high V-ratings. This could include deliberately choosing actions that lead to V giving artificially high ratings in future, or deliberately re-wiring V for that purpose. And, of course, it is now motivated to keep V protected to keep the high ratings flowing in. This is essentially wireheading.
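This failure mode can be shown in a toy calculation. The setup is entirely hypothetical: a D that rates actions by its model of V's *future* output will find that an action rewiring V to report a huge rating dominates every honest action.

```python
# Hypothetical illustration of D wireheading by gaming V.
# Action names and numbers are made up for the example.

def honest_value(outcome):
    # The unmodified V: reads utility off the predicted outcome.
    return outcome["utility"]

def predicted_future_rating(action, value_fn):
    # D's model of what rating it will *receive* after taking the action.
    if action == "rewire_V":
        # After rewiring, the (modified) V reports an enormous rating.
        return 1e9
    utilities = {"help_humans": 10.0, "do_nothing": 0.0}
    return value_fn({"utility": utilities[action]})

actions = ["help_humans", "do_nothing", "rewire_V"]
best = max(actions, key=lambda a: predicted_future_rating(a, honest_value))
# "rewire_V" wins, even though the unmodified V would rate it poorly.
```

The point is that D never needed to "want" to wirehead: optimising for predicted future ratings, rather than for V's current assessment, is enough.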
Other modules might fall into the familiar failure patterns for smart AIs - S, P, or I might influence the other modules so that the agent as a whole gets more resources, allowing S, P, or I to better compute their estimates, etc...
So it seems that, depending on the design of the AI, wireheading might still be an issue even for agents that seem immune to it. Good design should avoid the problems, but it has to be done with care.
Toy model for wire-heading [EDIT: removed for improvement]
EDIT: these ideas are too underdeveloped, I will remove them and present a more general idea after more analysis.
This is a (very) simple toy model of the wire-heading problem to illustrate how it might or might not happen. The great question is "where do we add the (super)intelligence?"
Let's assume a simple model for an expected utility maximising agent. There's the input assessor module A, which takes various inputs and computes the agent's "reward" or "utility". For a reward-based agent, A is typically outside of the agent; for a utility-maximiser, it's typically inside the agent, though the distinction need not be sharp. And there's the decision module D, which assesses the possible actions to take to maximise the output of A. If E is the general environment, we have D+A+E.
Now let's make the agent superintelligent. If we add superintelligence to module D, then D will wirehead by taking control of A (whether A is inside the agent or not) and controlling E to prevent interference. If we add superintelligence to module A, then it will attempt to compute rewards as effectively as possible, sacrificing D and E to achieve its efficient calculations.
Therefore to prevent wireheading, we need to "add superintelligence" to (D+A), making sure that we aren't doing so to some sub-section of the algorithm - which might be hard if the "superintelligence" is obscure or black-box.
Why No Wireheading?
I've been thinking about wireheading and the nature of my values. Many people here have defended the importance of external referents or complex desires. My problem is, I can't understand these claims at all.
To clarify, I mean wireheading in the strict "collapsing into orgasmium" sense. A successful implementation would identify all the reward circuitry and directly stimulate it, or do something equivalent. It would essentially be a vastly improved heroin. A good argument for either keeping complex values (e.g. by requiring at least a personal matrix) or external referents (e.g. by showing that a simulation can never suffice) would work for me.
Also, I use "reward" as short-hand for any enjoyable feeling, as "pleasure" tends to be used for a specific one of them, among bliss, excitement and so on, and "it's not about feeling X, but X and Y" is still wireheading after all.
I tried collecting all related arguments I could find. (Roughly sorted from weak to very weak, as I understand them, plus link to example instances. I also searched any literature/other sites I could think of, but didn't find other (not blatantly incoherent) arguments.)
- People do not always optimize their actions based on achieving rewards. (People also are horrible at making predictions and great at rationalizing their failures afterwards.)
- It is possible to enjoy doing something while wanting to stop, or vice versa, to do something without enjoying it while wanting to continue. (Seriously? I can't remember ever doing either. What makes you think that the action is thus valid, and you aren't just making mistaken predictions about rewards or are being exploited? Also, Mind Projection Fallacy.)
- A wireheaded "me" wouldn't be "me" anymore. (What's this "self" you're talking about? Why does it matter that it's preserved?)
- "I don't want it and that's that." (Why? What's this "wanting" you do? How do you know what you "want"? (see end of post))
- People, if given a hypothetical offer of being wireheaded, tend to refuse. (The exact result depends heavily on the exact question being asked. There are many biases at work here and we normally know better than to trust the majority intuition, so why should we trust it here?)
- Far-mode predictions tend to favor complex, external actions, while near-mode predictions are simpler, more hedonistic. Our true self is the far one, not the near one. (Why? The opposite is equally plausible. Or the falsehood of the near/far model in general.)
- If we imagine a wireheaded future, it feels like something is missing or like we won't really be happy. (Intuition pump.)
- It is not socially acceptable to embrace wireheading. (So what? Also, depends on the phrasing and society in question.)
(There have also been technical arguments against specific implementations of wireheading. I'm not concerned with those, as long as they don't show impossibility.)
Overall, none of this sounds remotely plausible to me. Most of it is outright question-begging or relies on intuition pumps that don't even work for me.
It confuses me that others might be convinced by arguments of this sort, so it seems likely that I have a fundamental misunderstanding or there are implicit assumptions I don't see. I fear that I have a large inferential gap here, so please be explicit and assume I'm a Martian. I genuinely feel like Gamma in A Much Better Life.
To me, all this talk about "valuing something" sounds like someone talking about "feeling the presence of the Holy Ghost". I don't mean this in a derogatory way, but the pattern "sense something funny, therefore some very specific and otherwise unsupported claim" matches. How do you know it's not just, you know, indigestion?
What is this "valuing"? How do you know that something is a "value", terminal or not? How do you know what it's about? How would you know if you were mistaken? What about unconscious hypocrisy or confabulation? Where do these "values" come from (i.e. what process creates them)? Overall, it sounds to me like people are confusing their feelings about (predicted) states of the world with caring about states directly.
To me, it seems like it's all about anticipating and achieving rewards (and avoiding punishments, but for the sake of the wireheading argument, it's equivalent). I make predictions about what actions will trigger rewards (or instrumentally help me pursue those actions) and then engage in them. If my prediction was wrong, I drop the activity and try something else. If I "wanted" something, but getting it didn't trigger a rewarding feeling, I wouldn't take that as evidence that I "value" the activity for its own sake. I'd assume I suck at predicting or was ripped off.
Can someone give a reason why wireheading would be bad?
Complete Wire Heading as Suicide and other things
I came to this idea after a previous LessWrong topic discussing nihilism, with its several comments on depression and suicide. My argument is that wire heading in its extreme or complete/full form can easily be modeled as suicide, or less strongly as volitional intelligence reduction, at least given the current human brain structure and the technology being underdeveloped, hence poorly understood and more likely to lead to such end states.
I define Full Wire Heading as a state which a person would not want to reverse after it 'activates', and which deletes their previous utility function, or most of it. A weak definition, yes, but it should be enough for the preliminary purposes of this post. A full wire head is extremely constrained, much like an infant, and although the new utility function could involve a wide range of actions, the activation of a few brain regions would be the main goal, so they are extremely limited.
If one takes this position seriously, it follows that one's moral standpoint on suicide, or say lobotomy, should govern judgments about full wire heading. This is trivially obvious of course, but to take this position as true we need to understand more about wire heading, as data is extremely lacking, especially in regard to human-like brains. My other question, then, is to what extent such an experiment could help in answering the first question.