I've been thinking about wireheading and the nature of my values. Many people here have defended the importance of external referents or complex desires. My problem is, I can't understand these claims at all.
To clarify, I mean wireheading in the strict "collapsing into orgasmium" sense. A successful implementation would identify all the reward circuitry and directly stimulate it, or do something equivalent. It would essentially be a vastly improved version of heroin. A good argument for either keeping complex values (e.g. by requiring at least a personal matrix) or external referents (e.g. by showing that a simulation can never suffice) would work for me.
Also, I use "reward" as shorthand for any enjoyable feeling, since "pleasure" tends to be used for just one specific feeling among bliss, excitement, and so on; and "it's not about feeling X, but about X and Y" is still wireheading, after all.
I tried collecting all the related arguments I could find. (Roughly sorted from weak to very weak, as I understand them, plus links to example instances. I also searched whatever literature and other sites I could think of, but didn't find any other (not blatantly incoherent) arguments.)
- People do not always optimize their actions based on achieving rewards. (People are also horrible at making predictions and great at rationalizing their failures afterwards.)
- It is possible to enjoy doing something while wanting to stop, or vice versa, to do something without enjoying it while wanting to continue. (Seriously? I can't remember ever doing either. What makes you think that the action is thus valid, and you aren't just making mistaken predictions about rewards or being exploited? Also, Mind Projection Fallacy.)
- A wireheaded "me" wouldn't be "me" anymore. (What's this "self" you're talking about? Why does it matter that it's preserved?)
- "I don't want it and that's that." (Why? What's this "wanting" you do? How do you know what you "want"? (see end of post))
- People, if given a hypothetical offer of being wireheaded, tend to refuse. (The exact result depends heavily on the exact question being asked. There are many biases at work here and we normally know better than to trust the majority intuition, so why should we trust it here?)
- Far-mode predictions tend to favor complex, external actions, while near-mode predictions are simpler, more hedonistic. Our true self is the far one, not the near one. (Why? The opposite is equally plausible, as is the near/far model simply being false in general.)
- If we imagine a wireheaded future, it feels like something is missing or like we won't really be happy. (Intuition pump.)
- It is not socially acceptable to embrace wireheading. (So what? Also, depends on the phrasing and society in question.)
(There have also been technical arguments against specific implementations of wireheading. I'm not concerned with those, as long as they don't show impossibility.)
Overall, none of this sounds remotely plausible to me. Most of it is outright question-begging or relies on intuition pumps that don't even work for me.
It confuses me that others might be convinced by arguments of this sort, so it seems likely that I have a fundamental misunderstanding or there are implicit assumptions I don't see. I fear that I have a large inferential gap here, so please be explicit and assume I'm a Martian. I genuinely feel like Gamma in "A Much Better Life".
To me, all this talk about "valuing something" sounds like someone talking about "feeling the presence of the Holy Ghost". I don't mean this in a derogatory way, but it matches the pattern "sense something funny, therefore some very specific and otherwise unsupported claim". How do you know it's not just, you know, indigestion?
What is this "valuing"? How do you know that something is a "value", terminal or not? How do you know what it's about? How would you know if you were mistaken? What about unconscious hypocrisy or confabulation? Where do these "values" come from (i.e. what process creates them)? Overall, it sounds to me like people are confusing their feelings about (predicted) states of the world with caring about states directly.
To me, it seems like it's all about anticipating and achieving rewards (and avoiding punishments, but for the sake of the wireheading argument, that's equivalent). I make predictions about what actions will trigger rewards (or will instrumentally help me pursue those actions) and then engage in them. If my prediction was wrong, I drop the activity and try something else. If I "wanted" something, but getting it didn't trigger a rewarding feeling, I wouldn't take that as evidence that I "value" the activity for its own sake. I'd assume I suck at predicting or was ripped off.
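For what it's worth, here is a toy sketch (in Python, with entirely made-up activities and reward numbers) of the loop I'm describing: predict rewards, act on the best prediction, and revise the prediction when the reward fails to show up. This is not meant as a model of actual brains, just an illustration of the "it's all reward prediction" view.

```python
import random

# Toy illustration: the agent predicts rewards, acts on its best
# prediction, and updates the prediction on feedback. All names
# and numbers here are hypothetical.

true_reward = {"read": 0.6, "exercise": 0.3, "socialize": 0.8}  # hidden from the agent
predicted = {activity: 0.5 for activity in true_reward}  # initial guesses
learning_rate = 0.2

for step in range(100):
    # Mostly exploit the best current prediction, occasionally explore.
    if random.random() < 0.1:
        action = random.choice(list(predicted))
    else:
        action = max(predicted, key=predicted.get)

    # Noisy reward signal from actually doing the thing.
    reward = true_reward[action] + random.gauss(0, 0.1)

    # "If my prediction was wrong, I drop the activity and try
    # something else": revise the estimate toward what happened,
    # so persistently disappointing activities stop being chosen.
    predicted[action] += learning_rate * (reward - predicted[action])

print(predicted)  # estimates drift toward the true rewards
```

On this picture, "valuing an activity for its own sake" never appears anywhere in the loop; there are only reward estimates being tracked and corrected.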
Can someone give a reason why wireheading would be bad?
Question reversal: suppose Omega reveals to you that your life has been a simulation. Your actions inside the simulation don't affect the outside, "real" world: nobody is watching you.
However, Omega offers to remove you from the simulation and instantiate you in the real world outside. Unfortunately, Omega predicts that your future life on the outside won't be nearly as fun as the one you've had in the simulation up until now. The difference in satisfaction (including the satisfaction of your preferences about "affecting the 'real' world") may be as great as the possible improvement due to wireheading...
Would you accept the offer and risk a life of extreme misery to improve your chance of affecting the "real" world? Would you consider yourself "dead" if you knew you were being simulated?
(Apologies for replying late.)
I would accept Omega's offer to 'pop' me up a level. I would accept even if it meant misery and pain. I would always accept this offer. Actually, scratch that: I would accept the offer conditional on being able to impact the 'real' world more outside the simulation than inside. I'd be comfortable staying at my current level if it was providing some useful effect in the higher levels of reality that I couldn't provide if I were 'popped' out.