by [anonymous]
3 min read · 18th Jun 2011 · 114 comments


I've been thinking about wireheading and the nature of my values. Many people here have defended the importance of external referents or complex desires. My problem is, I can't understand these claims at all.

To clarify, I mean wireheading in the strict "collapsing into orgasmium" sense. A successful implementation would identify all the reward circuitry and directly stimulate it, or do something equivalent. It would essentially be a vastly improved heroin. A good argument for either keeping complex values (e.g. by requiring at least a personal matrix) or external referents (e.g. by showing that a simulation can never suffice) would work for me.

Also, I use "reward" as short-hand for any enjoyable feeling, as "pleasure" tends to be used for a specific one of them, among bliss, excitement and so on, and "it's not about feeling X, but X and Y" is still wireheading after all.

I tried collecting all related arguments I could find. (Roughly sorted from weak to very weak, as I understand them, plus links to example instances. I also searched whatever literature and other sites I could think of, but didn't find other (not blatantly incoherent) arguments.)

  1. People do not always optimize their actions based on achieving rewards. (People also are horrible at making predictions and great at rationalizing their failures afterwards.)
  2. It is possible to enjoy doing something while wanting to stop, or, vice versa, to do something without enjoying it while wanting to continue. (Seriously? I can't remember ever doing either. What makes you think that the action is thus valid, and you aren't just making mistaken predictions about rewards or are being exploited? Also, Mind Projection Fallacy.)
  3. A wireheaded "me" wouldn't be "me" anymore. (What's this "self" you're talking about? Why does it matter that it's preserved?)
  4. "I don't want it and that's that." (Why? What's this "wanting" you do? How do you know what you "want"? (see end of post))
  5. People, if given a hypothetical offer of being wireheaded, tend to refuse. (The exact result depends heavily on the exact question being asked. There are many biases at work here and we normally know better than to trust the majority intuition, so why should we trust it here?)
  6. Far-mode predictions tend to favor complex, external actions, while near-mode predictions are simpler, more hedonistic. Our true self is the far one, not the near one. (Why? The opposite is equally plausible. Or the falsehood of the near/far model in general.)
  7. If we imagine a wireheaded future, it feels like something is missing or like we won't really be happy. (Intuition pump.)
  8. It is not socially acceptable to embrace wireheading. (So what? Also, depends on the phrasing and society in question.)

(There have also been technical arguments against specific implementations of wireheading. I'm not concerned with those, as long as they don't show impossibility.)

Overall, none of this sounds remotely plausible to me. Most of it is outright question-begging or relies on intuition pumps that don't even work for me.

It confuses me that others might be convinced by arguments of this sort, so it seems likely that I have a fundamental misunderstanding or there are implicit assumptions I don't see. I fear that I have a large inferential gap here, so please be explicit and assume I'm a Martian. I genuinely feel like Gamma in A Much Better Life.

To me, all this talk about "valuing something" sounds like someone talking about "feeling the presence of the Holy Ghost". I don't mean this in a derogatory way, but the pattern "sense something funny, therefore some very specific and otherwise unsupported claim" matches. How do you know it's not just, you know, indigestion?

What is this "valuing"? How do you know that something is a "value", terminal or not? How do you know what it's about? How would you know if you were mistaken? What about unconscious hypocrisy or confabulation? Where do these "values" come from (i.e. what process creates them)? Overall, it sounds to me like people are confusing their feelings about (predicted) states of the world with caring about states directly.

To me, it seems like it's all about anticipating and achieving rewards (and avoiding punishments, but for the sake of the wireheading argument, it's equivalent). I make predictions about what actions will trigger rewards (or instrumentally help me pursue those actions) and then engage in them. If my prediction was wrong, I drop the activity and try something else. If I "wanted" something, but getting it didn't trigger a rewarding feeling, I wouldn't take that as evidence that I "value" the activity for its own sake. I'd assume I suck at predicting or was ripped off.
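
To make that concrete, here is a toy sketch of the loop I mean (the activities, numbers and update rule are invented purely for illustration):

```python
import random

# Toy version of the decision loop described above: predict the reward of
# each activity, do the most promising one, then update the prediction
# from the reward actually felt. No "terminal value" is ever consulted.

predicted_reward = {"read": 5.0, "exercise": 2.0, "argue online": 6.0}

def felt_reward(activity):
    # Stand-in for how enjoyable the activity actually turns out to be.
    base = {"read": 6.0, "exercise": 4.0, "argue online": 1.0}[activity]
    return base + random.gauss(0, 0.5)

for _ in range(50):
    choice = max(predicted_reward, key=predicted_reward.get)
    reward = felt_reward(choice)
    # Wrong prediction? Adjust it and effectively drop the activity.
    predicted_reward[choice] += 0.3 * (reward - predicted_reward[choice])

print(max(predicted_reward, key=predicted_reward.get))  # ends up at "read"
```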

Can someone give a reason why wireheading would be bad?

114 comments

Overall, it sounds to me like people are confusing their feelings about (predicted) states of the world with caring about states directly.

But aren't you just setting up a system that values states of the world based on the feelings they contain? How does that make any more sense?

You're arguing as though neurological reward maximization is the obvious goal to fall back to if other goals aren't specified coherently. But people have filled in that blank with all sorts of things. "Nothing matters, so let's do X" goes in all sorts of zany directions.

2[anonymous]13y
I'm not. My thought process isn't "there aren't any real values, so let's go with rewards"; it's not intended as a hack to fix value nihilism. Rewards already do matter. It describes people's behavior well (see PCT) and makes introspective sense. I can actually feel projected and real rewards come up and how decisions arise based on that. I don't know how "I value that there are many sentients" or any other external referent could come up. It would still be judged on the emotional reaction it causes (but not always in a fully conscious manner). I think I can imagine agents that actually care about external referents and that wouldn't wirehead. I just don't think humans are such agents and I don't see evidence to the contrary. For example, many humans have no problem with "fake" experiences, like "railroaded, specifically crafted puzzles to stimulate learning" (e.g. Portal 2), "insights that feel profound, but don't mean anything" (e.g. entheogens) and so on. Pretty much the whole entertainment industry could be called wireheading lite. Acting based on the feelings one will experience is something that already happens, so optimizing for it is sensible. (Not-wireheaded utopias would also optimize them after all, just not only them.) A major problem I see with acting based on propositions about the world outside one's mind is that it would assign different value to states that one can't experimentally distinguish (successful mindless wallpaper vs. actual sentients, any decision after being memory-wiped, etc.). I can always tell if I'm wireheaded, however. I'd invoke Occam's Razor here and ignore any proposal that generates no anticipated experiences.
5Yasuo13y
I can't really pick apart your logic here, because there isn't any. This is like saying "buying cheese is something that already happens, so optimizing for it is sensible"
0[anonymous]13y
Not really. Let me try to clarify what I meant. We already know that rewards and punishments influence our actions. Any utopia would try to satisfy them. Even in a complex optimized universe full of un-wireheaded sentients caring about external referents, people would want to avoid pain, ... and experience lots of excitement, ... . Wireheading just says, that's all humans care about, so there's no need for all these constraints, let's pick the obvious shortcut. In support of this view, I gave the example of the entertainment industry that optimizes said experiences, but is completely fake (and trying to become more fake) and how many humans react positively to that. They don't complain that there's something missing, but rather enjoy those improved experiences more than the existent externally referenced alternatives. Also, take the reversed experience machine, in which the majority of students asked would stay plugged in. If they had complex preferences as typically cited against wireheading, wouldn't they have immediately rejected it? An expected paperclip maximizer would have left the machine right away. It can't build any paperclips there, so the machine has no value to it. But the reversed experience machine seems to have plenty of value for humans. This is essentially an outside view argument against complex preferences. What's the evidence that they actually exist? That people care about reality, about referents, all that? When presented with options that don't fulfill any of this, lots of people still seem to choose them.
0Yasuo13y
So, when people pick chocolate, it illustrates that that's what they truly desire, and when they pick vanilla, it just means that they're confused and really they like chocolate but they don't know it.
4Richard_Kennaway13y
Absolutely. Sudoku has been described as "a denial of service attack on human intellect", and see also the seventh quote here.
2Richard_Kennaway13y
PCT is not good to cite in this connection. PCT does not speak of rewards. According to PCT, behaviour is performed in order to control perceptions, i.e. to maintain those perceptions at their reference levels. While it is possible for a control system to be organised around maximising something labelled a reward (or minimising something labelled a penalty), that is just one particular class of possible ways of making a control system. Unless one has specifically observed that organisation, there are no grounds for concluding that reward is involved just because something is made of control systems.
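
For concreteness, a minimal toy control loop in that spirit (the numbers and the environment link are invented; this is a sketch, not anything from Powers' actual models):

```python
# Minimal perceptual control loop: act so as to cancel the error between a
# perception and its reference value. Nothing labelled "reward" is being
# maximized anywhere; the system just keeps the perception near the reference.

reference = 21.0   # desired perceived temperature, say
perception = 15.0  # current perception
gain = 0.5

for _ in range(20):
    error = reference - perception
    action = gain * error          # output proportional to the error
    perception += 0.8 * action     # environment feedback: acting shifts the perception
    # (a real environment would also add disturbances; omitted here)

print(round(perception, 2))        # settles near the reference, ~21.0
```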
0[anonymous]13y
Good point, I oversimplified here. I will consider this in more detail, but naively, isn't this irrelevant in terms of wireheading? Maintaining perceptions is maybe a bit trickier to do, but there would still be obvious shortcuts. Maybe if these perceptions couldn't be simplified in any relevant way, then we'd need at least a full-on matrix and that would disqualify wireheading.

Is this just a case of the utility function not being up for grabs? muflax can't explain to me why wireheading counts as a win, and I can't explain to muflax why wireheading doesn't count as a win for me. At least, not using the language of rationality.

It might be interesting to get a neurological or evo-psych explanation for why non-wireheaders exist. But I don't think this is what's being asked here.

0[anonymous]13y
Well, ultimately it might be, but it really weirds me out. We're all running on essentially the same hardware. I don't think either those that find wireheading intuitive or those that don't are that non-neurotypical. I would expect that wireheading is right for either all or no humans and any other result needs a really good explanation. I'm not explicitly asking it, but I would be very interested in why it seems like there are two different kinds of minds, yes.
4Giles13y
This is just my opinion, not particularly evidence-based: I don't think that there are two different kinds of mind, or if there are it's not this issue that separates them. The wireheading scenario is one which is very alien to our ancestral environment so we may not have an "instinctive" preference for or against it. Rather, we have to extrapolate that preference from other things. Two heuristics which might be relevant:

* where "wanting" and "liking" conflict, it feels like "wanting" is broken (i.e. we're making ourselves do things we don't enjoy). So given the opportunity we might want to update what we "want". This is pro-wireheading.

* where we feel we are being manipulated, we want to fight that manipulation in case it's against our own interests. Thinking about brain probes is a sort of manipulation-superstimulus, so this heuristic would be anti-wireheading.

I can very well believe that wireheading correlates with personality type, which is a weak form of your "two different minds" hypothesis. Sorry for the ultra-speculative nature of this post.
0[anonymous]13y
Makes sense in terms of explaining the different intuition, yes, and is essentially how I think about it. The second heuristic about manipulation, then, seems useful in practice (more agents will try to exploit us than satisfy us), but isn't it much weaker, considering the actual wireheading scenario? The first heuristic actually addresses the conflict (although maybe the wrong way), but the second just ignores it.
0Giles13y
I agree; the second heuristic doesn't apply particularly well to this scenario. Some terminal values seem to come from a part of the brain which isn't open to introspection, so I'd expect them to arise as a result of evolutionary kludges and random cultural influences rather than necessarily making any logical sense. The thing is, once we have a value system that's reasonably stable (i.e. what we want is the same as what we want to want) then we don't want to change our preferences even if we can't explain where they arise from.
0MrMind13y
As you know, we've already seen this statement, with "wireheading", with "increased complexity", etc. Until we get a definition of meta-values and their general axiological treatment, people will always be baffled that others have different meta-values than theirs.

Think about a paper-clip maximiser (people tend to get silly about morality, and a lot less silly about paper-clips, so it's a useful thought experiment for meta-ethics in general). It's a simple design: it lists all the courses of action it could take, computes the expected paper-clips given each one using its model of the world, and then takes the one that gives the largest result. It isn't interested in the question of why paper-clips are valuable; it just produces them.

So, does it value paper-clips, or does it just value expected paper-clips?

Consider how it reacts to the option "update your current model of the world to set Expected paper-clips = BB(1000)". This will appear on its list of possible actions, so what is its value?

(expected paperclips | "update your current model of the world to set Expected paper-clips = BB(1000)")

The answer is a lot less than BB(1000). Its current model of the world states that updating its model does not actually change reality (except insofar as the model is part of reality). Thus it does not predict that this action will result in the creation of any new paper-clips, so its expected paper-clips is roughly equal to the number of p... (read more)
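
A toy sketch of that evaluation (the action names and numbers are invented for illustration; BB(1000) is just a label here, not something computable):

```python
# Toy expected-paper-clip maximizer: every action is scored by the *current*
# world model, so "edit your own expectation counter" scores no extra
# paper-clips, however large the number written into the counter would be.

def predicted_paperclips(action, model):
    """What the current world model says the action yields in real paper-clips."""
    if action == "build a paper-clip factory":
        return model["clips_now"] + 1_000_000
    if action == "set internal clip counter to BB(1000)":
        # The model says editing a register inside the agent creates no
        # paper-clips outside the agent.
        return model["clips_now"]
    return model["clips_now"]  # "do nothing"

def choose(actions, model):
    return max(actions, key=lambda a: predicted_paperclips(a, model))

model = {"clips_now": 7}
actions = ["do nothing", "build a paper-clip factory",
           "set internal clip counter to BB(1000)"]
print(choose(actions, model))  # -> "build a paper-clip factory"
```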

4XiXiDu13y
The question is whether humans, unlike paperclip maximizers, are actually more concerned with maximizing their reward number regardless of how it is being increased. If there is a way for humans to assign utility non-arbitrarily, then we are able to apply rational choice to our values, i.e. look for values that are better at yielding utility. If humans measure utility in a unit of bodily sensations, then we can ask what would most effectively yield the greatest amount of bodily sensations. Here wireheading seems to be more efficient than any other way to maximize bodily sensations, i.e. utility. There even is some evidence for this, e.g. humans enjoy fiction. Humans treat their model of reality as part of reality. If you can change the model, you can change reality. I don't agree with all that, though, because I think that humans either are not utility maximizers or assign utility arbitrarily.
5benelliott13y
It seems to me that I value both my internal world and the external world. I enjoy fiction, but the prospect of spending the rest of my life with nothing else fails to thrill me. A lot of people express scepticism of this claim, usually acting as if there is a great burden of proof required to show the external part is even possible. My point is that the external part is both possible and unsurprising. So my argument against wireheading goes: I don't feel like I want to be a wirehead, the vast majority of minds in general don't want to become wireheads, low prior + no evidence = "why has this even been promoted to my attention?"
2[anonymous]13y
That depends on the exact implementation. The paperclipper might be purely feedback-driven, essentially a paperclip-thermostat. In that case, it will simulate setting its internal variables to BB(1000), that will create huge positive feedback and it happily wireheads itself. Or it might simulate the state of the world, count the paperclips and then rate it, in which case it won't wirehead itself. The second option is much more complex and expensive. What makes you think humans are like that? I agree with you that there are non-wireheading agents in principle. I just don't see any evidence to conclude humans are like that.
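
A toy sketch of the two designs (invented names and numbers; the only difference is what gets fed into the argmax, and only the first design wireheads):

```python
# Two toy paperclippers. The feedback-driven one maximizes its own reward
# signal, so pinning the signal wins. The model-based one maximizes
# paper-clips predicted by its world model, so it does not.

ACTIONS = ["make 100 paperclips", "wirehead: pin reward signal to maximum"]

def reward_signal(action):
    # What the internal feedback circuit would read after the action.
    return float("inf") if action.startswith("wirehead") else 100.0

def modelled_paperclips(action, clips_now=0):
    # What the agent's world model says would exist outside it afterwards.
    return clips_now + (100 if action == "make 100 paperclips" else 0)

print(max(ACTIONS, key=reward_signal))        # -> the wirehead option
print(max(ACTIONS, key=modelled_paperclips))  # -> "make 100 paperclips"
```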
2benelliott13y
The former is incredibly stupid: an agent that consistently gets its imagination confused with reality and cannot, even in principle, separate them would be utterly incapable of abstract thought. 'Expected Paper-clips' is completely different to paper-clips. If an agent can't tell the difference between them it may as well not be able to tell houses from dogs. The fact that I can even understand the difference suggests that I am not that stupid. Really? You can't see any Bayesian evidence at all! How about the fact that I claim not to want to wire-head? My beliefs about my desires are surely correlated with my desires. How about all the other people who agree with me, including a lot of commenters on this site and most of humanity in general? Are our beliefs so astonishingly inaccurate that we are not even a tiny bit more likely to be right than wrong? What about the many cases of people strongly wanting things that did not make them happy and acting on those desires, or vice versa? You are privileging the hypothesis. Your view has a low prior (most of the matter in the universe is not part of my mind, so given that I might care about anything, it is not very likely that I will care about one specific lump of meat). You don't present any evidence of your own, and yet you demand that I present mine.
2[anonymous]13y
Welcome to evolution. Have you looked at humanity lately? (Ok, enough snide remarks. I do agree that this is a fairly stupid design, but it would still work in many cases. The fact that it can't handle advanced neuroscience is unfortunate, but it worked really well in the Savannah.) (I strongly disagree that "most of humanity" is against wireheading. The only evidence for that are very flawed intuition pumps that can easily be reversed.) However, I do take your disagreement (and that of others here) seriously. It is a major reason why I don't just go endorse wireheading and why I wrote the post in the first place. Believe me, I'm listening. I'm sorry if I gave the impression that I just discard your opinion as confused. It would have a low prior if human minds were pulled out of mind space at random. They aren't. We do know that they are reinforcement-based and we have good evolutionary pathways for how complex minds based on that would be created. Reinforcement-based minds, however, are exactly like the first kind of mind I described and, it seems to me, should always wirehead if they can. As such, assuming no more, we should have no problem with wireheading. The fact that we do needs to be explained. Assuming there's an additional complex utility calculation would answer the question, but that's a fairly expensive hypothesis, which is why I asked for evidence. On the other hand, assuming (unconscious) signaling, mistaken introspection and so on relies only on mechanisms we already know exist and works equally well, but favors wireheading. Economic models that do assume complex calculations like that, if I understand them correctly, work badly, while simpler models (PCT, behavioral economics in general) work much better. You are correct that I have not presented any evidence in favor of wireheading. I'm not endorsing wireheading and even though I think there are good arguments for it, I deliberately left them out. I'm not interested in "my pet theory about values is bett
2benelliott13y
What do you mean it can't handle advanced neuroscience? Who do you think invented neuroscience! One of the points I was trying to make was that humans can, in principle, separate out the two concepts; if they couldn't, then we wouldn't even be having this conversation. Since we can separate these concepts, it seems like our final reflective equilibrium, whatever that looks like, is perfectly capable of treating them differently. I think that wire-heading is a mistake that arose from the earlier mistake of failing to preserve the use-mention distinction. Defending one mistake once we have already overcome its source is like trying to defend the content of Leviticus after admitting that God doesn't exist. I didn't actually think you were ignoring my opinion, I was just using a little bit of hyperbole, because people saying "I see no evidence" when there clearly is some evidence is a pet peeve of mine. This point interests me. Let's look a little deeper into this signalling hypothesis. Am I correct that you are claiming that while my conscious mind utters sentences like "I don't want to be a wire-head", subconsciously I actually do want to be a wire-head? If this is the case, then the situation we have is two separate mental agents with conflicting preferences; you appear to be siding with Subconscious!Ben rather than Conscious!Ben on the grounds that he is the 'real Ben'. But in what sense is he more real? Both of them exist, as shown by their causal effect on the world. I may be biased on this issue but I would suggest you side with Conscious!Ben; he is the one with Qualia, after all. Do you, in all honesty, want to be wire-headed? For the moment I'm not asking what you think you should want, what you want to want or what you think you would want in reflective equilibrium, just what you actually want. Does the prospect of being reduced to orgasmium, if you were offered it right now, seem more desirable than the prospect of a complicated universe filled with diverse being
1[anonymous]13y
Not that I wanna beat a dead horse here, but it took us ages. We can't even do basic arithmetic right without tons of tools. I'm always astonished to read history books and see how many really fundamental things weren't discovered for hundreds, if not thousands of years. So I'm fairly underwhelmed by the intellectual capacities of humans. But I see your point. Capable, sure. That seems like an overly general argument. The ability to distinguish things doesn't mean the distinction appears in the supposed utility function. I can tell apart hundreds of monospace fonts (don't ask), but I don't expect monospace fonts to appear in my actual utility function as terminal values. I'm not sure how this helps either way. Not exactly like this. I don't think the unconscious part of the brain is conspiring against the conscious one. I don't think it's useful to clearly separate "conscious" and "unconscious" into two distinct agents. They are the same agent, only with conscious awareness shifting around, metaphorically like handing around a microphone in a crowd such that only one part can make itself heard for a while and then has to resort to affecting only its direct neighbors or screaming really loud. I don't think there's a direct conflict between agents here. Rather, the (current) conscious part encounters intentions and reactions it doesn't understand, doesn't know the origin or history of, and then tries to make sense of them, so it often starts confabulating. This is most easily seen in split-brain patients. I can clearly observe this by watching my own intentions and my reactions to them moment-to-moment. Intentions come out of nowhere, then directly afterwards (if I investigate) a reason is made up for why I wanted this all along. Sometimes, this reason might be correct, but it's clearly a later interpolation. That's why I generally tend to ignore any verbal reasons for actions. So maybe hypocrisy is a bit of a misleading term here. I'd say that there are many agents th

Let's use the example of the Much Better Life Simulator from the post of a similar name, which is less repellent than a case of pure orgasmium. My objections to it are these:

1: Involves memory loss. (Trivially fixable without changing the basic thought experiment; it was originally introduced to avoid marring the pleasure but I think I'm wired strangely with regard to information's effect on my mood.)

2: Machine does not allow interaction with other real people. (Less-trivially fixable, but still very fixable. Networked MBLSes would do the trick, and/or ones with input devices to let outsiders communicate with folks who were in them.)

If these objections were repaired and there were no "gotcha" side effects I haven't thought of, I would enter an MBLS with only negligible misgivings, which are not endorsed and would be well within my ability to dismiss.

Let's consider another case: suppose my neurochemistry were altered so I just had a really high happiness set point, and under ordinary circumstances was generally pleased as punch (but had comparable emotional range to what I have now, and reacted in isomorphic ways to events, so I could dip low when unpleasant things happ... (read more)

4nazgulnarsil13y
WoW already qualifies as that sort of MBLS for some subset of the world.
0Alicorn13y
I tried WoW - weekend free trial. Didn't see what the fuss was about.
1nazgulnarsil13y
that's because your life is better than WoW.
5Alicorn13y
I'm rarely attacked by horrifying monsters, that's one thing. I also have less of a tendency to die than my character demonstrated.
2[anonymous]13y
How could you tell the difference? Let's say I claim to have built an MBLS that doesn't contain any sentients whatsoever and invite you to test it for an hour. (I guarantee you it won't rewire any preferences or memories; no cheating here.) Do you expect to not be happy? I have taken great care that emotions like loneliness or guilt won't arise and that you will have plenty of fun. What would be missing? Like in my response to Yasuo, I find it really weird to distinguish states that have no different experiences, that feel exactly the same. Why would you want that? To me, that sounds like deliberately crippling a good solution. What good does it do to be in a low mood when something bad happens? I'd assume that this isn't an easy question to answer and I'm not calling you out on it, but "I want to be able to feel something bad" sounds positively deranged. (I can see uses with regards to honest signaling, but then a constant high set-point and a better ability to lie would be preferable.) Yes, I would imagine orgasmium to essentially have no memory or only insofar as it's necessary for survival and normal operations. Why does that matter? You already have a very unreliable and sparse memory. You wouldn't lose anything great in orgasmium; it would always be present. I can only think of the intuition "the only way to access some of the good things that happened to me, right now, is through my memory, so if I lost it, those good things would be gone". Orgasmium is always amazing. But then, that can't be exactly right, as you say you'd be more at ease to have memory you simply never use. I can't understand this. If you don't use it, how can it possibly affect your well-being, at any point? How can you value something that doesn't have a causal connection to you? How do you know that? I'm not trying to play the postmodernism card "How do we know anything?", I'm genuinely curious how you arrived at this conclusion. If I try to answer the question "Do I care about losi
6Alicorn13y
I'd probably test such a thing for an hour, actually, and for all I know it would be so overwhelmingly awesome that I would choose to stay, but I expect that assuming my preferences and memories remained intact, I would rather be out among real people. My desire to be among real people is related to but not dependent on my tendency towards loneliness, and guilt hadn't even occurred to me (I suppose I'd think I was being a bit of a jerk if I abandoned everybody without saying goodbye, but presumably I could explain what I was doing first?) I want to interact with, say, my sister, not just with an algorithm that pretends to be her and elicits similar feelings without actually having my sister on the other end. In a sense, emotions can be accurate sort of like beliefs can. I would react similarly badly to the idea of having pleasant, inaccurate beliefs. It would be mistaken (given my preferences about the world) to feel equally happy when someone I care about has died (or something else bad) as when someone I care about gets married (or something else good). Lying is wrong. I know. It is one of the many terrible things about reality. I hate it. Memories are a way to access reality-tracking information. As I said, remembering stuff is not consistently pleasant, but that's not what it's about. Counterfactually. Well, I wrote everything above that in my comment, and then noticed that there was this pattern, and didn't immediately come up with a counterexample to it. I think it's fine if you want to wirehead. I do not advocate interfering with your interest in doing so. But I still don't want it.
1PeterDonis10y
Apologies for coming to the discussion very, very late, but I just ran across this. How could you possibly get into this epistemic state? That is, how could you possibly be so sure of the sustainability of your maximally happy state, without any intervention from you, that you would be willing to give up all your optimization power? (This isn't the only reason why I personally would not choose wireheading, but other reasons have already been well discussed in this thread and I haven't seen anyone else zero in on this particular point.)

Why does my intuition reject wireheading? Well, I think it has something to do with the promotion of instrumental values to terminal values.

Some pleasures I value for themselves (terminal) - the taste of good food, for example. As it happens, I agree with you that there is no true justification for rejecting wireheading for these kinds of pleasures. The semblance of pleasure is pleasure.

But some things are valued because they provide me with the capability, ability, or power (instrumental) to do what I want to do, including experiencing those terminal pleasures. Examples are money, knowledge, physical beauty, athletic abilities, and interpersonal skills.

Evolution has programmed me to derive pleasure from the acquisition and maintenance of these instruments of power. So, a really thorough wireheading installation would make me delight in my knowledge, beauty, charisma, and athleticism - even if I don't actually possess those attributes. And therein lies the problem. The semblance of power is not power.

What I am saying is that my intuitive rejection of wireheading arises because at least some of the pleasures that it delivers are a lie, a delusion. And I'm pretty sure that ... (read more)

0[anonymous]13y
But you just said you value those things instrumentally, so you can get pleasurable sensations. Raw power itself doesn't do anything for you, just sitting there. I can see how, when considering being wireheaded, you would come to reject it based on that. Essentially, you'd see (e.g.) wireheaded power as not actually instrumentally useful, so you reject the offer. It sounds like snake-oil. But isn't that a false conclusion? It might feel like it, but you won't actually feel any worse off when you're completely wireheaded. Fake capabilities are a problem when interacting with agents who might exploit you, so the heuristic is certainly useful, but it fails in the case of wireheading that actually delivers on its promises. You won't need knowledge and power and so on when you're in wirehead heaven, so wireheading can simply ignore or fake them. (Disclaimer: muflax does not advocate giving up your autonomy to Omegas claiming to provide wirehead heaven. Said Omegas might, in fact, be lying. Caution is advised.) It does.

Because writing big numbers on the speedometer with a sharpie doesn't get me to the destination sooner.

I think the question is: why do you really need to get there?

0[anonymous]13y
Exactly that.

What is this "valuing"? How do you know that something is a "value", terminal or not?

Are you looking for a definition? Specifically coming up with a dictionary definition for the word "value" doesn't seem like it would be very instrumental to this discussion. But really, I think just about everyone has a pretty good sense for what we're talking about when we post the symbols "v", "a", "l", "u", and "e" on less wrong for us to simply discuss the concept of value without tryin... (read more)

2[anonymous]13y
No, I'm trying to understand the process others use to make their claims about what they value (besides direct experiences). I can't reproduce it, so it feels like they are confabulating, but I don't assume that's the most likely answer here. That seems horribly broken. There are tons of biases that make asking such questions essentially meaningless. Looking at anticipated and real rewards and punishments can easily be done and fits into simple models that actually predict people's behaviors. Asking complex questions leads to stuff like the Trolley problem, which is notoriously unreliable and useless with regards to figuring out why we prefer some options to others. It seems to me that assuming complex values requires cognitive algorithms that are much more expensive than anything evolution might build and don't easily fit actually revealed preferences. Their only strength seems to be that they would match some thoughts that come up while contemplating decisions (and not even non-contradictory ones). Isn't that privileging a very complex hypothesis?

Because wireheading is death.

Beyond the definitions, a person walks into a room, something happens, they never walk out again, nor is the outside world impacted, nor is anything changed by them. They might as well walk into the wireheading room and have their brains dashed upon the floor. Their body may be breathing, but they are dead just the same.

If the wireheading were un-doable, then it would be nothing more than suspended animation. Pleasurable, but it's still a machine you plug into then do nothing until you unplug. Frankly, I haven't the years... (read more)

4DanArmak13y
Question reversal: suppose Omega reveals to you that your life has been a simulation. Your actions inside the simulation don't affect the outside, 'real' world - nobody is watching you. However, Omega offers to remove you from the simulation and instantiate you in the real world outside. Unfortunately, Omega predicts that your future life on the outside won't be nearly as fun as the one you've had in the simulation up until now. The difference in satisfaction - including satisfying your preferences that apply to "affecting the 'real' world" - may be as great as the possible improvement due to wireheading... Would you accept the offer and risk a life of extreme misery to improve your chance of affecting the "real" world? Would you consider yourself "dead" if you knew you were being simulated? (Apologies for replying late.)
4Xachariah13y
I would accept Omega's offer to 'pop' me up a level. I would accept even if it meant misery and pain. I would always accept this offer. Actually, bar that. I would accept the offer conditional on the fact that I'd be able to impact the 'real' world more outside the simulation than inside. I'd be comfortable staying in my current level if it was providing some useful effect in the higher levels of reality that I couldn't provide if I were 'popped' out. Upon learning I was in a simulation, I would make it my life's sole purpose to escape. I think this would be a common reaction. It is my understanding that Buddhism believes this world is a simulation and the goal of each Buddhist is to 'pop' themselves unto a higher plane of reality. Many branches of Christianity also put strong emphasis on proving one's worth on Earth solely to be in as good a position as possible once we die and 'pop' into the 'real' world in the Kingdom of Heaven. Exploring your question more, I realize that there are at least two situations this wouldn't work in. The first situation would be if reality consisted of a circularly linked list of 'real' worlds, and 'popping' up or 'pushing' down enough times would bring you back to the same world you started at. The second situation would be if there were infinitely many layers to 'pop' up through. I'm actually not sure what I would do if reality were in such an impossible configuration.
1Plasmon13y
Why do you think infinitely many layers would be an impossible configuration? If anyone, anywhere has an actual real Turing machine (as opposed to a finite approximation of a Turing machine), creating such a configuration is basically child's play. Have you read The Finale of the Ultimate Meta Mega Crossover, which explores just this possibility?
4[anonymous]13y
Wireheads are still experiencing the pleasure. They are not in suspended animation, stuff is still happening in their brains. They don't affect the outside world anymore (beyond ensuring their survival), but so what? The fact that it is superficially similar to death does not bother me at all. If no more optimization is needed, why bother with optimizing? You're essentially just restating the basic intuition against wireheading, just more emphatically. I find it just as incomprehensible. (For completeness, I don't share your aversion to death at all. I'm totally indifferent to it. I essentially agree with teageegeepea here. Maybe this influences the intuition.)
2Xachariah13y
I do not mean that Wireheading is metaphorical death. It is not just an emotionally charged statement that means I am really against Wireheading. I mean that Wireheading is literally death. The cluster of death-space consists of more than just stopping breathing. I am arguing that the important boundary in the definition-space of death is not 'stopped breathing' but 'inability to affect the outside world'. Imagine the following Omega-enabled events, rest assured that none of them are reversible once Omega stops toying with you and finishes this experiment. Ask yourself if you consider the following states death:

* 1 - Omega transforms your body into a corpse! You cannot move or do anything a corpse cannot do.

* 2 - Omega transforms your body into a corpse, but lets you keep moving and taking actions. You return back to work on Monday, and thankfully there's no extra smell.

* 3 - Omega teleports you to a dimension of nothingness, and you're stuck there for all eternity.

* 4 - Omega teleports you to a dimension full of nothingness, then brings you back out a year later.

* 5 - Omega turns you into a tree. You're not able to do anything a tree cannot do, like think, move, or anything of the sort.

* 6 - Omega turns you into a tree, but gives you the power to move and think and talk in rhymes.

* 7 - Omega keeps your body the same, but severs your ability to do anything including moving your eyes or blinking. Luckily your autonomic system keeps you breathing and someone puts you on a nutrient drip before that 'not eating' thing catches up to you.

* 8 - Omega keeps your body the same, but separates your ability to do anything into a separate non-corporeal facility. I.e., you can move things with your mind.

* 9 - Omega replaces your body with a corpse doll and shifts you into a parallel plane where you can view the world but not interact.

* 10 - Omega replaces your body with a corpse doll and shifts you into a parallel plane where you can both view and interact with the world.
0[anonymous]13y
(Thanks for the clarification, that makes your comment much clearer.) How would 2) work? What do you mean, my body becomes a corpse, but goes to work? As a corpse, I won't have blood circulation for example, so how could I walk? Unless Omega magically turns me into an actual zombie, but what's the use of thinking about magic? Similarly, 6) ain't a tree, but at best a brain stuck in a tree. Does 3) include myself as separate from the nothingness? So I'm essentially "floating" in nothingness, kinda like a Boltzmann brain? 8) isn't possible in principle. There are no separate mental events, unless Omega can change metaphysics, but that's uninteresting. I'd consider 3), 4), 7), 9) and 10) totally alive, assuming mental processing is still happening, stuff is still getting experienced, it's just that any outgoing signals to influence the world are getting ignored. If this isn't happening (e.g. I'm in a deep coma), then I'm straight-up dead. As long as I have subjective experiences, I'm alive. Overall though, arguing about the definition of "death" isn't gonna be useful.
-1Xachariah13y
(Omega was supplied so that magical scenarios would be possible for the thought experiment.) My definition vs your definition of death is very enlightening in light of our differences on wireheading. You view being alive as being able to think, to receive input and experience. I view being alive as being able to act, to change and shape the world. This division cuts through the experience of wireheading; it is the state of thinking without the ability to act. Life to you; death to me. I would venture a guess that anyone who is pro-wireheading would hold your view of life/death while anyone who is anti-wireheading would hold my view of life/death. You wanted to know why all those other arguments sounded good to everybody, but not to you. We have incompatible priors. There is no sufficiently convincing argument that can cross the gulf between life and death. I do not have sufficient rationalist superpowers to try and change your priors (or even make you want to change them, as I wouldn't want to change mine). But if you wish to understand what other people are thinking as they reject Wireheading, simply close your eyes and try and imagine the choice you would make if you instead believed your time of death were the instant you never acted upon the world again. They are not being convinced by insufficient arguments. They are merely starting from a different metaphysical position than you.
4[anonymous]13y
That doesn't dissolve the problem completely for me, it just moves the confusion from "Why do humans disagree on wireheading?" to "Why do humans have different views on what constitutes death?". Is it just something you memetically pick up and that then dominates your values? I'd rather assume that the (hypothetical) value difference comes first and we then use this to classify what counts as "dead". "yup, can still get pleasure there, I must be alive" vs. "nope, can't affect the external world, I must be dead".
0Xachariah13y
That is a very interesting question. I'm sure I feel quite as puzzled looking at you from this side as you do looking at me from that side. I would also assume that there is some other first factor. Sadly, it would be a bit outside of the depth of my understanding of metaphysics (and the scope of this page) to try and discover what it is. Still, I am intrigued about it and will keep thinking on the subject.
1nazgulnarsil13y
unpack "the world" and you'll maybe sympathize with wireheaders more.
1[anonymous]13y
This perspective does explain why I would be much less worried about wireheading if I was older than I am right now and had already reproduced. If I had kids who were off having their own kids, I could think "Ah good, my DNA is off replicating itself at this point, and whether or not I die is unlikely to change that. In fact, the best way to help them out would probably be to make sure I don't spend too much of the money they might theoretically inherit, so if wireheading was cheaper than a world yacht tour, my kids and grandkids might even benefit from me deciding to wirehead." That being said, I say this as someone who hasn't even experienced a world yacht tour. I mean, now that I'm a working adult, I can barely manage to acquire much more than about 10 consecutive days of not working, which gives one just barely enough time to scratch through the surface of your current hedonism and encounter boredom with choices. (The last time I was bored and had a choice of activity, it felt refreshing because of how RARELY I'm bored and have choices, as opposed to being bored because you are stuck in your current activity with no control.) Before deciding to wirehead, it seems like it might be well worthwhile to at the very least take some time to experience being retired to make sure I have a good feel for what it is that I'm giving up. But I also realize at this point that it feels a bit presumptuous of me to say what I would want at 70 or so, at 27. I've experienced too many changes in philosophy in the last 10 years to feel assured that my current set of desires is stable enough to suggest something that far in the future. I mean, it doesn't feel likely they'll change, but it didn't feel likely they would change before either, and yet they did.
1jhuffman13y
So do you think these reasons for maybe wanting to wirehead at 70 would be good enough reasons to kill yourself? Because if you are accepting Xachariah's response then it seems like that is the standard you'd have to meet.
2[anonymous]13y
Yes, there are definitely a set of circumstances where I could see myself willing to essentially suicide when I'm significantly older. I mean, when you're old, cheap wireheading seems to be equivalent to being given a choice between:

1: Die pleasantly and painlessly after a grand farewell party, allowing your family to have a good inheritance and ascend to the technological equivalent of heaven.

2: Die in a hospital bed after horrible mind crushing suffering where you are incoherent, draining away money and resources for your family, and then nothing.

If you're going to die anyway (and I am assuming Immortal life is not on the table. If it is, then the entire scenario is substantially different), option 1 sure sounds a lot better. And yes, there are also a large number of circumstances where I can see myself not wireheading as well. Maybe my Grandfatherly advice will prove absolutely crucial to my grandchildren, who think that my great grandchildren just won't be the same without getting to meet me in person. It's entirely possible that everyone around me will still need me even when I'm 70, or still when I'm 80, or even when I'm 90. (With medical technology improving, maybe 90 will be the new 70?) That's why I mentioned I'd want to get a feel for retired life before deciding to wirehead. I don't really know what it's going to be like being a retired person for me. For that matter, the entire concept of retirement may not even be around by the time I'm 70. It's not just my own philosophy that can change in 43 years. Our entire economic system might be different. And I also had the implicit assumption of cheap wireheading, but it may turn out that wireheading would be horribly expensive. That's an entirely different set of calculations.
0DanArmak13y
The scenario stipulates your wireheading experience will be the best one possible. If you really enjoy yacht tours, you'll experience simulated yacht tours. You're not giving anything up in terms of experience.
0[anonymous]13y
That's a good point, and it made me think about this again, but my understanding is that I think I must be giving up SOME possible experience. Wouldn't it break the laws of physics for a finitely sized wireheading world to contain more possible states to experience than the universe which contains the wireheading world and also contains other things? Now, for yacht tours, I don't think this matters. Yacht tours don't require that kind of complexity. Actually, I'm not even sure how this kind of complexity would be expressed or if it's something I could notice even if I was a theoretical physicist with trillions of dollars of equipment. But after rethinking this, I think this complexity represents some type of experience and I don't want to rush into trading it away before I understand it unless I feel like I have to, so I still feel like I may want to wait on wireheading. I suppose an alternate way of looking at it might be that I have a box of mystery, which might contain the empty vastness of space or some other concept beyond my understanding, and if I trade it, I will never be able to access it again, but in exchange I get offered the best possible experience of everything that ISN'T in the box, many of which I already know. There is a distinct possibility I'm just being irrationally afraid of rushing into making permanent irreversible decisions. I've had that type of fear for decisions which are much more minor than wireheading, and it might be coming up again. That being said, being unsure of this point represents a contradiction to something that I had thought earlier. So I'm definitely being inconsistent about something and I appreciate you pointing it out. I'll try to break it down and see if I can determine which point I need to discard.

How familiar are you with expected utility maximizers? Do you know about the difference between motivation and reward (or "wanting" and "liking") in the brain?

We can model "wanting" as a motivational thing - that is, if there was an agent that knew itself perfectly (unlike humans), it could predict in advance what it would do, and this prediction would be what it wanted to do. If we model humans as similar to this self-knowing agent, then "wanting" is basically "what we would do in a hypothetical situation.&qu... (read more)

1[anonymous]13y
I think I'm familiar with that and understand the difference. I don't see its relevance. Assuming "wanting" is basically the dopamine version of "liking" seems more plausible and strictly simpler than assuming there's a really complex hypothetical calculation based on states of the world being performed. Also, I suspect you are understanding wireheading too narrowly here. It's not just the pleasure center (or even just some part of it, like in "inducing permanent orgasms"), but it would take care of all desirable sensations, including the sensation of having one's wants fulfilled. The intuition "I get wireheaded and still feel like I want something else" is false, which is why I used "rewards" instead of "pleasure". (And it doesn't require rewiring one's preferences.) Confabulation and really bad introspective access seem much more plausible to me. If you modify details in thought experiments that shouldn't affect wireheading results (like reversing Nozick's experience machine), people do actually change their answers, even though they previously claimed to have based their decisions on criteria that clearly can't have mattered. I'd much rather side with revealed preferences, which show that plenty of people are interested in crude wireheading (heroin, WoW and FarmVille come to mind) and the better those options get, the more people choose them.
1Manfred13y
Why assume? It's there in the brain. It's okay to model reality with simpler stuff sometimes, but to look at reality and say "not simple enough" is bad. The model that says "it would be rewarding, therefore I must want it" is too simple. Except the brain is a computer that processes data from sensory organs and outputs commands - it's not like we're assuming this from nothing, it's an experimental result. I'm including all sorts of things in "the world" here (maybe more than you intended), but that's as it should be. And ever since mastering the art of peek-a-boo, I've had this concept of a real world, and I (i.e. me, my brain) use it in computation all the time. This is part of why I referenced expected utility maximizers. Expected utility maximizers don't choose what just makes them feel like they've done something. They evaluate the possibilities with their current utility function. The goal (for an agent who does this) truly isn't to make the utility meter read a big number, but to do things that would make their current utility function read a big number. An expected utility maximizer leading a worthwhile life will always turn down the offer to be overwritten with orgasmium (as long as one of their goals isn't something internal like "get overwritten with orgasmium"). And plenty of people aren't, or will play Tetris but won't do heroin. And of course there are people who will lay down their lives for another - to call wireheading a revealed preference of humans is flat wrong.
0[anonymous]13y
Correct, and I don't disagree with this. An actual expected utility maximizer (or an approximation of one) would have no interest in wireheading. Why do you think humans are best understood as such utility maximizers? If we were, shouldn't everyone have an aversion, or rather, indifference to wireheading? After all, if you offered an expected paperclip maximizer the option of wireheading, it would simply reject it as if you had offered to build a bunch of staples. It would have no strong reaction either way. That isn't what's happening with humans. I'm trying to think of a realistic complex utility function that would predict such behavior, but can't think of anything. True, there isn't anything like a universally compelling wirehead option available. Each option is, so far, preferred only by minorities, although in total, they are still fairly widespread and their market share is rising. I did express this too sloppily.
0Manfred13y
Yeah, true. For humans, pleasure is at least a consideration. I guess I see it as part of our brain structure used in learning, a part that has acquired its own purpose because we're adaptation-executers, not fitness maximizers. But then, so is liking science, so it's not like I'm dismissing it. If I had a utility function, pleasure would definitely be in there. So how do you like something without having it be all-consuming? First, care about other things too - I have terms in my hypothetical utility function that refer to external reality. Second, have there be a maximum possible effect - either because there is a maximum amount of reward we can feel, or because what registers in the brain as "reward" quickly decreases in value as you get more of it. Third, have the other stuff you care about outweigh just pursuing the one term to its maximum. I actually wrote a comment about this recently, which is an interesting coincidence :D I've become more and more convinced that a bounded utility function is most human-like. The question is then whether the maximum possible utility from internal reward outweighs everyday values of everything else or not.
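
As a toy illustration of that last point (an invented formula, not a claim about actual brains): if the internal-reward term saturates, then even an arbitrarily large wirehead reward adds at most a fixed amount, and the external terms can still dominate.

```python
import math

# Toy bounded utility: only the internal-reward term is capped here (at 1),
# so "maximum possible reward" cannot outweigh modest external values.

def utility(internal_reward, external_value):
    bounded_reward = 1 - math.exp(-internal_reward)  # saturates at 1
    return bounded_reward + external_value

print(utility(internal_reward=10**9, external_value=0.0))  # wirehead heaven: ~1.0
print(utility(internal_reward=2.0, external_value=5.0))    # ordinary life: ~5.86
```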
0[anonymous]13y
I agree with you on the bounded utility function. I still need to think more about whether expected utility maximizers are a good human model. My main problem is that I can't see realistic implementations in the brain (and pathways for evolution to get them there). I'll focus my study more on that; I think I dismissed them too easily.

Apparently, most of us here are not interested in wireheading. The short version of muflax's question is: Are we wrong?

My answer is simple: No, I am not wrong, thanks for asking. But let me try to rephrase the question in a way that makes it more relevant for me:

Would we change our mind about wireheading after we fully integrated all the relevant information about neuroscience, psychology, morality, and the possible courses of action for humanity? Or to paraphrase Eliezer, would we choose wireheading if we knew more, thought faster, were more the people we... (read more)

-1[anonymous]13y
To clarify, I'm not interested in convincing you, I'm interested in understanding you.

* Hey, humans are reward-based. Isn't wireheading a cool optimization?
* Nope.
* That's it?
* That's it.
* But reinforcement. It's neat and elegant! And some people are already doing crude versions of it. And survival doesn't have to be an issue. Or exploitation.
* Still nope.
* Do you have any idea what causes your rejection? How the intuition comes about? Do you have a plausible alternative model?
* No.
* O... kay?

I know that "let me give you a coredump of my complete decision algorithm so you can look through it and figure it out" isn't an option, but "nope" doesn't really help me. Good point about CEV, though.
2knb13y
You aren't getting a "nope", muflax. This is where you're wrong. Reward is just part of the story. Humans have complex values, which you seem to be willfully ignoring, but that is what everyone keeps telling you.

To clarify, are you claiming that wireheading is actually a good thing for everyone, and we're just confused? Or do you merely think wireheading feels like a fine idea for you but others may have different values? At times, your post feels like the former, but that seems too bizarre to be true.

My own view is that people probably have different values on this issue. Just as suicide seems like a good idea to some people, while most people are horrified by the idea of committing suicide, we can genuinely disagree and have different values and make different decisions based on our life circumstances.

[anonymous]:
As I said here, it would really weird me out if it weren't a universally good or bad idea. As such, it should be good for everyone or for no-one. Wireheading doesn't seem like something agents as similar as humans should be able to agree to disagree on.
knb:
Suicide is even more basic than wireheading, yet humans disagree about whether or not to commit suicide. There are even some philosophers who have thought about it and concluded that suicide is the "rational" decision. If humans cannot, in fact, agree about whether to exist or not, how can you think wireheading has a "right" answer?
[anonymous]:
Humans also still disagree on p-zombies or, more basically, evolution. That doesn't mean there isn't a correct answer. But you're right that pretty much any value claim is disputed, and when taking past societies into account, there aren't even obvious majority views on anything. Still, I'm not comfortable just giving up. "People just are that different" is a last resort, not the default position to take in value disputes.
knb:
The distinction is that evolution and zombies are factual disputes. Factual views can be objectively wrong; preferences are purely subjective. There is no particular reason any one mind in the space of possible minds should prefer wireheading.
[anonymous]:
To clarify, the claim is not "all agents should prefer wireheading" or "humans should have wireheading-compatible values", but "if an agent has this set of values and this decision algorithm, then it should wirehead", with humans being such an agent. The wireheading argument does not propose that humans change their values, but that wireheading actually is a good fulfillment of their existing values (despite seeming objections). That's as much a factual claim as evolution.

The reason I don't easily expect rational disagreement is that I expect a) all humans to have the same decision algorithm and b) terminal values to be simple and essentially hard-coded. b) might be false, but then I don't see a realistic mechanism for how they got there in the first place. What's the evolutionary advantage of an agent that has highly volatile terminal values and can easily be hijacked, or that relies on fairly advanced circuitry to even do value calculations?
Wei Dai:
Humans seem to act as general meme hosts. It seems fairly easy for a human to be hijacked by a meme in a way that decreases their genetic inclusive fitness. Presumably this kind of design at least had an evolutionary advantage, in our EEA, or we wouldn't be this way. If you can host arbitrary memes, then "external referent consequentialism" doesn't really need any extra circuitry. You just have to be convinced that it's something you ought to do.
prase:
By the way, by a universally good idea do you mean a) an idea of which any person can be persuaded by a rational argument, b) an objectively morally good idea, or c) something else? Because if a), it is very unlikely to be so. There are people who don't accept logic.
[anonymous]:
I mean an idea that, if properly understood, every human would agree with, so a). Well, there are some humans who, for various reasons, might not be able to actually follow the reasoning, or who are so broken that they reject correct arguments and whatnot. So "every" is certainly exaggerated, but you get the idea. I would not expect rational disagreement.

Can someone give a reason why wireheading would be bad?

Well, we don't want our machines to wirehead themselves. If they do, they are less likely to be interested in doing what we tell them to - which would mean that they are then of less use to us.

[anonymous]:
Sure, but what about us? As designers, we have good reasons to find a way around wireheading (and somewhat less seriously and metaphorically, Azathoth has good reasons to prevent us from wireheading). So making wireheading-proof agents is important, I agree, but that doesn't apply to ourselves.
byrnema:
The connection with us could be that (to the extent we can) we choose what we want as though we were machines at our disposal. ... There is a component that wants doughnuts for breakfast, but actually "I" want eggs for breakfast since I'd rather be healthy ... and the machine that is me ostensibly makes eggs. The hedonistic component of our brain that wants wireheading is probably/apparently repressed when it comes into conflict with real external goals.

It is possible to enjoy doing something while wanting to stop or vice versa, do something without enjoying it while wanting to continue. (Seriously? I can't remember ever doing either.

 

You should try nicotine addiction to understand this. That's possible because "reward" and "pleasure" are different circuits in the brain.

John Wesley said, "earn all you can; save all you can; give all you can." He was serious.

What does that have to do with wireheading? As far as I can tell, that quote resonates with me on a deep level, though I replace "give" with something like "optimize" or "control." And so when presented with a choice between pleasure and control, I choose control. (If actually presented with the choice, I calculate the tradeoff and decide if the control is worth it.) So, even though orgasmium!Vaniver would be more satisfied with itself, cu... (read more)

I know people who specifically said that if orgasmium were available, they'd take it in an instant. I also know people who would not. Wireheading doesn't have to be universally, objectively "good" or "bad." If wireheading would satisfy all your values, and it becomes available to you, well, go for it.

I know that if I was given access to orgasmium, I'd probably be content living on it for the rest of my life. That doesn't change the fact that BEFORE having access to orgasmium, I simply prefer not to accept it, and instead create art and ... (read more)

You seem to classify each argument against wireheading as a bias: since the argument doesn't persuade you, the ones who are persuaded must be making some error in judgement. But those arguments aren't (all) meant to make people value "reality" more than pleasure. Most of them aim at people who already do prefer real achievements over pleasure (whatever that means) but are confused about the possibility of wireheading. In particular,

  1. Isn't an argument against wireheading per se, but against some sorts of wireheading which stimulate the reward mechanisms
... (read more)
[anonymous]:
I did not intend this. I simply find them all very unconvincing and (briefly) gave my reasons why. I assume that at least some of them rely on hidden assumptions I don't see and only look like errors to me. I don't have an opinion on wireheading either way (I'm deliberately suspending any judgment), but I can only see good arguments for it and none against it. If that were really the case, I would expect many more experienced rationalists to be convinced of it (and I highly respect the opinions of pretty much everyone I linked to), so I'm operating on the assumption of an inferential gap.

I don't think that's cynical, and I do find it very plausible. Explaining akrasia (which I do have) in terms of being mistaken about what I like and having an (often unconscious) conflict between different parts of the brain works just fine for me. The moment I realize I'm not actually enjoying what I do, I either stop immediately or find that I'm fulfilling some other emotional demand, typically avoiding guilt or embarrassment.

No, just no, especially if they give different results based on minor modifications, like with Nozick's experience machine. (Or look at the reactions to Eliezer's Three Worlds Collide and various failed utopias.) I'd rather have no opinion than base it on a complex intuition pump.

Your comments on 1) and 8) I agree with. The other points I addressed in other comments here, I think.
prase:
I don't think there is an inferential gap of the usual type (i.e. implicit hidden knowledge of facts or arguments). It's more probably a value disagreement, made harder by your objection to the well-definedness of "value".

Agreed about the unconscious conflict, but not about the conclusion. A real akrasic wants to do two incompatible things X and Y, chooses X, and later regrets the choice. He knows in advance that he will regret the choice, is fully aware of the problem, and still chooses X. An akrasic "enjoys" X (at the moment), but is genuinely unhappy about it later - and if he realises the problem, the unhappiness already emerges during X so that X is no longer enjoyable, but it is still hard to switch to Y. It is a real and serious problem.

The cynical (no moral judgement intended) explanation of akrasia basically says that the agent really "prefers" X over Y, but for some reason (which usually involves hypocrisy) is mistaken about his preference. But, if it is true, tell me: why do akrasics try to fight akrasia, often privately? Why do they insist that they want Y, not X, even if there are no negative consequences for admitting the desire for X? Why are they happy after doing Y and unhappy after doing X, and why do they often remember being happier doing Y than doing X? Of course, you can redefine the words "want" and "prefer" to mean "what you actually do", for the price of people being mistaken about a significant part of what they want. But then these words become useless, and we lose words denoting the stuff people report to "want" (in the conventional meaning).

Failed utopias regularly fail mainly because people can't envisage all consequences of a drastic change of social order, which is caused by the complexity of human societies. Being mistaken about what we want is a part of it, but not the most important one. Early communists weren't surprised that they didn't like party purges and mass executions that much. They were surprised that these things happened. Diff
[anonymous]:
You make a good point about private akrasia conflicts. I'll have to think more about this. It doesn't make sense either way right now.

The reason I object to major preference differences among humans is that this breaks with the psychological unity of humanity. It's not just that there are some minor variations or memetic hijackings in the utility function; it seems like some are maximizing rewards while others are maximizing expected utility. That's a really big difference, so it makes more sense to find an explanation that assumes only one mechanism and explains the respective "unusual" behavior in terms of it. If we're expected utility maximizers, why are some attracted to wireheading? In terms of reinforcement and things like operant conditioning, racking up "superstitions" and tons of instrumental goals makes sense. Highly splintered and hugely divergent terminal values, however, seem weird to me. Even by Azathoth's standards.

About failed utopias, you misunderstood me. I meant Eliezer's scenarios of failed utopias, like this one.
prase:
Fair enough. Utility is so general a term that it can encompass rewards. It can be said that all people are maximising utility whenever their decisions don't exhibit cyclic preferences or some other blatant (but nevertheless common) error, but this would also be a bit misleading - recalling the von Neumann-Morgenstern theorem usually begs for the cynical interpretation of utility, which cares more about what people do than about what they really want.

It's probably better to say that there are at least two distinct decision processes or systems working together in the brain, and, depending on circumstances, one of them prevails. The unconscious process steers decisions towards safe immediate psychological rewards; the conscious one plans further in advance and tries to accomplish more complex aims related to the external world. (Generalisation to the case of more than two processes working on several different time scales should be straightforward.) Sometimes - in stress, during akrasic behaviour, presumably also under wireheading - the unconscious system overrides the conscious one and executes its commands. In other situations the conscious system can take priority. The conscious system wants to remain in control, but knows that it can be overridden. Therefore it tries to avoid situations where that can happen.

Now into the more speculative realm. I would guess that retaining at least some control should be strongly prioritised over any amount of pleasure on the level of the conscious system, and that this may even be a human universal. But the conscious mind can be fooled into thinking that control will not be lost in spite of a real danger. For example, drug addicts overwhelmingly report that they can always stop - when they finally realise that this is not the case, the relevant part of their behaviour is already firmly controlled by the unconscious mind. The rejection of wireheading may be the manifestation of the desire of the conscious mind to r
[anonymous]:
(I'm not fully convinced of the conscious/unconscious split you outline, but let's go with it for the sake of the argument. It's certainly a reasonable hypothesis.)

Why would you side with the conscious mind? Do you have a specific reason for this, besides "because it's the one that holds the power" (which is perfectly acceptable, just not what I'd do in this case)? As a data point, I personally reject it. Regardless of whether wireheading is actually a good idea, I don't care about staying in control. I also don't see my conscious mind as being particularly involved in decision making or value considerations (except as a guiding force on an instrumental level) and I see no reason to change that. I'm generalizing fairly sloppily now, but I'd expect this to be a fairly widespread Buddhist attitude, for example (and that's also my background, though I wouldn't identify with it anymore).

My most obvious objection to wireheading was, "it might be awesome, but I might miss something and end up in a local maximum instead of a global one", not "it's gonna enslave me". I'm perfectly aware that, if wireheaded, I'd have little conscious control left, if any. That does not bother me in the least. Caring that much about control is a perspective I did not anticipate and it does help explain the problem. Point taken.

I thought wireheading was a simple, easy-to-understand and realistic scenario. That doesn't seem to be the case at all. Taken as a more complicated thought experiment, the rejection and varying intuitions do make sense. This gets even clearer when I look at this framing: That's pretty much the opposite way of how I'd describe it, even though it's factually totally fine. The metaphor that I was thinking of the first time I saw wireheading described was liberation and freedom from suffering, not dictatorship! Also, when evaluating it, I was falling back on "wireheady" experiences I already had, like states of high absorption or equanimity in meditation, use of
prase:
I am not siding with it, I am it. When it holds the power, there is nothing besides it to communicate with you in this dialog.

Good point. The choice of the words unconscious/conscious was probably not the best one. Not all parts of the latter process feel conscious, and the former can be involved in conscious activities, e.g. use of language. I should rather have said short term or long term, or have stuck with the standard near/far, although I am not sure whether the meanings precisely overlap.

Buddhism, experiences with drugs, meditation: that may be the core reason for disagreement. Not only can experiences change preferences - an inferential gap of sorts, but not one likely to be overcome by rational argument - but reactions to specific experiences differ. Some people hate certain drugs after the first use, others love them. Buddhism, as far as I know, is certainly a powerful philosophy whose values and practices (meditation, introspection, nirvana) are more compatible with wireheading than most of the western tradition. It is also very alien to me.

My objection is precisely the opposite of what some have said here.

You are affecting the outside world, but only negatively - using resources that could be put to more important things than one person blissing out, without creating anything for anyone else. I therefore see wireheading just yourself as an unethical choice on your part.

I am not sure if I have an objection to "wirehead everyone sustainably forever", if that were ever practical.

Edited to clarify very slightly:

I do have some revulsion at the thought but I have no idea what it would be grounded in, if anything.

I believe that humans have natural psychological defenses against the lure of wireheading, because the appeal is something we navigate in our everyday lives. In my case, I know I would really enjoy entertaining myself all the time (watching movies, eating good food, reading books), but eventually I would run out of money or feel guilty that I'm not accomplishing anything.

Even if you tell people there will be no long-term consequences to wireheading, they don't believe you. It's a matter of good character, actually, to be resistant to wanting t... (read more)

[anonymous]:
If it's healthy to not be a psychopath, on what values do you base that? I think you're sneaking in a value judgment here that, if valid, would rule out wireheading. (It might be evolutionarily successful to not be a (full) psychopath, but that's a very different matter.) I do find your overall thought process in your first few paragraphs plausible, but "anyone who disagrees with me is just not admitting that I'm right" sounds way too much like the kind of toxic reasoning I'm trying to avoid, so I'm fairly skeptical of it.
byrnema:
Just in case, I don't argue that people who say they don't want to wirehead are wrong about that. I think it's ultimately inconsistent with a full appreciation that values are not externally validated. I think this full appreciation is prevented by biological stop-guards.

* Equivalence of Wire-Heading and Modifying Values As Giving Up On External Satisfaction of our Current Values

Something I think about in relation to wireheading, so close together in my brain that when talking about one I find myself conflating it with the other, is that it should follow that, if values aren't externally validated, it should be equivalent to 'make the world better' by (a) changing the world to fit our values or by (b) changing our values to fit the world. We have a strong preference for the former, but we could modify this preference so (b) would seem just as good a solution. So by modifying their value about solutions (a) and (b), a person in theory could then self-modify to be perfectly happy with the universe as it is. This is equivalent to wireheading, because in both cases you have a perfectly happy person without altering the universe outside their brain.

* What I think people don't admit.

I think what 'anyone who disagrees with you is not admitting' is that the universe in which your values are altered to match reality (or in which a person chooses to wirehead) is just as good as any other universe.

* Well, maybe they do admit it, but then their arational preference for their current values is not unlike a preference for wireheading.

The goodness of the universe is subjective, and for any subjective observer, the universe is good if it satisfies their preferences. Thus, a universe in which our values are modified to match the universe is just as good as one in which the universe is modified to match our values. I think that is clear. However, people who don't want to wirehead compare the universe (b) (one in which their values are modified but the universe is not) with the universe the
[anonymous]:
Yup, full agreement. That's exactly how it appears to me, though I'm not confident this is correct. It seems like others should've thought of the same thing, but then they shouldn't disagree, which they do. So either this is far less convincing than I think (maybe these safeguards don't work in my case) or it's wrong. Dunno right now.
byrnema:
By 'healthy', I did mean evolutionarily successful. However, I wouldn't go to great lengths to defend the statement, so I think you did catch me saying something I didn't entirely mean. Someone can be intellectual and emotionally detached at times, and this can help someone make more rational choices. However, if someone is too emotionally detached they don't empathize with other people (or even themselves) and don't care about their goals. So I meant something more general like apathy than lack of empathy. So my claim is that biological stop-guards prevent us from being too apathetic about external reality. (For example, when I imagine wireheading, I start empathizing with all the people I'm abandoning. In general, a person should feel the tug of all their unmet values and goals.)
[anonymous]:
Ok, then I misunderstood you and we do in fact agree.
[anonymous]:

Whatever makes you happy! - Wait a minute...

No, but seriously: as has been pointed out, one problem with wireheading is that it tends to resemble death, in the sense that you stop being a person, since you stop caring about everything you used to care about (as well as acting upon it), such as finding out why other people don't like the whole idea of head wiring; you are just in a kind of bliss-stasis. I don't see much difference between me being head-wired and me being shot in the head, then someone/something building something that is put into bliss-stasis, since ... (read more)

[anonymous]:
I'm basically in complete agreement with Blackmore, yes. I also don't consider wireheading death, at least not any more than any other event. I've been in states of raw concentration that had no thought processes, no memory and no spatial or temporal perception going on, but I still perceived a specific emotion. I don't think these states killed me, any more than blinking kills me. If it's meaningful to say I've been the same person throughout meditation, then I'm the same person when wireheaded. (However, I would rather agree with Blackmore that no continuity exists, ever, though I suspect that would mostly be a disagreement about semantics.) I don't see how caring-about-the-narrative-center is essential to having-a-narrative-center. I can still tell fiction about a wirehead, even a static one. The wirehead themselves might not, but that doesn't change the truth of the fiction. It seems to me that you can either reject this particular fiction (in which case I'd be interested in your reasons, not so much as justification, but more to understand how we came to differ), or you care about perceiving-the-fiction, independent of truth, in which case Omega will take care to include "this is making narrative sense" into your customized bliss. (Disclaimer: I'm not endorsing muflax_june2011's views anymore, though I'm still sympathetic to some forms of wireheading. muflax_june2011 would've just answered "screw being a person!" and rejected the whole psychological unity thing. muflax_october2011 is not so sure about that anymore.)
[anonymous]:
Well, being in a state of raw concentration I consider somewhat like having your car in the garage: right now I'm not driving it, but it has the capabilities needed to be driven, capabilities that are regularly exercised. A person is not one brain/mental state but a repertoire of brain/mental states with different temporal distributions. But I guess that you could ask Omega to configure the wiring machinery to have a multitude of states that retained to some degree what "I" would call "me". My beef is with the whole idea of bliss-ing out of existence.

While there's no particular reason not to go orgasmium, there's also no particular reason to go there.

We already know that wanting and liking are expressed by two different, although overlapping, neural circuitries, so it is natural that we may value things that don't directly feed back pleasure. Let's say that we find it meaningful to be an active part of the world, that we assign a general utility to our actions as they affect the environment: descending into an orgasmium state would kill this effectiveness.

The general argument is that orgasmium can be c... (read more)

Vaniver:
In the choice between "some control over reality" and "maximum pleasure," it seems to me that "maximum pleasure" comes highly recommended.
MrMind:
Ceteris paribus yes, of course, but not if "achieve maximum pleasure" violates some moral code or drastically diminishes something else you might value, as could be the case with orgasmium collapse.
Vaniver:
I commented that because your first sentence seemed odd - there may be no one reason not to go orgasmium, but there's only one reason to go orgasmium.

There are many situations over which I would prefer a wirehead state. For example, I would rather be orgasmium than paperclips. But it isn't exactly inspiring. I like novelty.

This is arbitrary.

[anonymous]:

Our true self is the far one, not the near one. (Why? The opposite is equally plausible. Or the falsehood of the near/far model in general.)

Because any decision to self-modify will more likely than not be determined more by my future self.

Thomas:
It can't be. Only by the current self.
[anonymous]:
If I sign up to be uploaded into a computer six months from now, which part of me made the decision? My current self is biased towards keeping previous commitments. By the time technology and society change enough for self-modification to be a current-self affair, my current self will already be heavily modified.
Thomas:
Nonetheless, the current self makes decisions expecting something from the decisions made. There is no time-reversal causality here.
[anonymous]:
Perhaps this is a clearer formulation: Because any decision to self-modify will more likely than not be determined more by my future self['s values than my current self's values].

I see wireheading as another problem that is the result of utility maximization. The question is, can utility be objectively grounded for an agent? If that is possible, wireheading might be objectively rational for a human utility maximizer.

Consider what it would mean if utility were ultimately measured in some unit of bodily sensations. We do what we do for what it does to us (our body (brain)). We do what we do because it makes us feel good, and feel bad if we don't do it. It would be rational to fake the realization of our goals, to receive the good feeling... (read more)

[anonymous]:
I agree that if humans made decisions based on utility calculations that aren't grounded in direct sensations, then that would be a good argument against wireheading. I see, however, no reason to believe that humans actually do such things, except that it would make utilitarianism look really neat and practical. (The fact that currently no-one actually manages to act based on utilitarianism of any kind seems like evidence against it.)

It doesn't look realistic to me. People rarely sacrifice themselves for causes, and it always requires tons of social pressure. (Just look at suicide bombers.) Their actual motivations are much more nicely explained in terms of the sensations (anticipated and real) they get out of it. Assuming faulty reasoning, conflicting emotional demands and just plain confabulation for the messier cases seems like the simpler hypothesis, as we already know all those things exist and are the kinds of things evolution would produce.

Whenever I encounter a thought of the sort "I value X, objectively", I always manage to dig into it and find the underlying sensations that give it that value. If I put them on hold (or realize that they are mistakenly attached, as X wouldn't actually cause those sensations I expect), then that value disappears. I can see my values grounded in sensations; I can't manage to find any others. Models based on that assumption seem to work just fine (like PCT), so I'm not sure I'm actually missing something.

So you want to wirehead. Do you think you'll have access to that technology in your lifetime?

[anonymous]:
To be explicit about this, I don't have an opinion on whether I'd choose it, but I do find it attractive. Just repeating this because everyone seems to think I'm advocating it, and so I probably didn't make this clear enough.

But your actual question: basically, I think it's long been here. The Tibetans in particular have developed some neat techniques that are still somewhat time-intensive to learn, but work reasonably well. The route of (several specific forms of) meditation + basic hedonism seems like a decent implementation, especially because I already know most of the underlying techniques. Also, MDMA and related drugs and basic implants already exist, though they're still fairly crude and hard to sustain. I'd expect the technology for "good enough" wireheading through direct stimulation to be available in at most 20 years, though probably not commercially.
sdenheyer:
Chronic MDMA use causes a decrease in the concentration of serotonin transporters. Lottery winners end up nowhere near as happy, long-term, as they imagined they would be when they bought the ticket (Brickman, Coates, Janoff-Bulman 1978). This is weak evidence, but it suggests that wireheading in practice isn't going to look like it does in the thought experiment - I imagine neural down-regulation would play a part.

You are right, muflax. There just isn't any explanation for the badness of "wireheading".

The case is the following:

Overall, it sounds to me like people are confusing their feelings about (predicted) states of the world with caring about states directly.

Did you read this?

Neurologically, wanting and liking are two separate albeit very related things. When you measure liking and wanting, you find that you can manipulate the two separately.

If your dopamine receptors are blocked, you don't want things as badly, even though you enjoy them equally well. If you increase dopamine, you (or experimental rats) work harder for something, even though you don't enjoy it more when you get it.

I have the subjective impression that when I'm happy for no particular external reason, I still want to do things that I previo... (read more)

Kaj_Sotala:
That was the first thing his list of arguments linked to.
[anonymous]:
I don't think that addresses the substance of the argument. Wireheading doesn't have to be about increasing dopamine; what if you were wireheaded to really, really like being wireheaded? And, in case it mattered, not to like anything else, so you don't have any regrets about the wireheading. The "Much Better Life" scenario is even more different; here, you presumably continue wanting and liking much the same things that you used to (unless you choose to self-modify later), and you just get rid of the frustrating parts of life at the expense of no longer living in the real world.
[anonymous]:

I've been thinking about torture and the nature of my values. Many people here have defended the importance of external referents or complex desires. My problem is, I can't understand these claims at all.

To clarify, I mean torture in the strict "collapsing into painium" sense. A successful implementation would identify all the punishment circuitry and directly stimulate it, or do something equivalent. It would essentially be a vastly improved box jellyfish. A good argument for either keeping complex values (e.g. by requiring at least a personal m... (read more)