I've been thinking about wireheading and the nature of my values. Many people here have defended the importance of external referents or complex desires. My problem is, I can't understand these claims at all.
To clarify, I mean wireheading in the strict "collapsing into orgasmium" sense. A successful implementation would identify all the reward circuitry and directly stimulate it, or do something equivalent. It would essentially be a vastly improved heroin. A good argument for either keeping complex values (e.g. by requiring at least a personal matrix) or external referents (e.g. by showing that a simulation can never suffice) would work for me.
Also, I use "reward" as shorthand for any enjoyable feeling, since "pleasure" tends to be used for one specific feeling among bliss, excitement and so on, and "it's not about feeling X, but X and Y" is still wireheading, after all.
I tried collecting all related arguments I could find. (Roughly sorted from weak to very weak, as I understand them, plus links to example instances. I also searched any literature/other sites I could think of, but didn't find other (not blatantly incoherent) arguments.)
- People do not always optimize their actions based on achieving rewards. (People also are horrible at making predictions and great at rationalizing their failures afterwards.)
- It is possible to enjoy doing something while wanting to stop, or, vice versa, to do something without enjoying it while wanting to continue. (Seriously? I can't remember ever doing either. What makes you think that the action is thus valid, and that you aren't just making mistaken predictions about rewards or being exploited? Also, Mind Projection Fallacy.)
- A wireheaded "me" wouldn't be "me" anymore. (What's this "self" you're talking about? Why does it matter that it's preserved?)
- "I don't want it and that's that." (Why? What's this "wanting" you do? How do you know what you "want"? (see end of post))
- People, if given a hypothetical offer of being wireheaded, tend to refuse. (The exact result depends heavily on the exact question being asked. There are many biases at work here and we normally know better than to trust the majority intuition, so why should we trust it here?)
- Far-mode predictions tend to favor complex, external actions, while near-mode predictions are simpler, more hedonistic. Our true self is the far one, not the near one. (Why? The opposite is equally plausible. Or the falsehood of the near/far model in general.)
- If we imagine a wireheaded future, it feels like something is missing or like we won't really be happy. (Intuition pump.)
- It is not socially acceptable to embrace wireheading. (So what? Also, depends on the phrasing and society in question.)
(There have also been technical arguments against specific implementations of wireheading. I'm not concerned with those, as long as they don't show impossibility.)
Overall, none of this sounds remotely plausible to me. Most of it is outright question-begging or relies on intuition pumps that don't even work for me.
It confuses me that others might be convinced by arguments of this sort, so it seems likely that I have a fundamental misunderstanding or there are implicit assumptions I don't see. I fear that I have a large inferential gap here, so please be explicit and assume I'm a Martian. I genuinely feel like Gamma in A Much Better Life.
To me, all this talk about "valuing something" sounds like someone talking about "feeling the presence of the Holy Ghost". I don't mean this in a derogatory way, but the pattern "sense something funny, therefore some very specific and otherwise unsupported claim" matches. How do you know it's not just, you know, indigestion?
What is this "valuing"? How do you know that something is a "value", terminal or not? How do you know what it's about? How would you know if you were mistaken? What about unconscious hypocrisy or confabulation? Where do these "values" come from (i.e. what process creates them)? Overall, it sounds to me like people are confusing their feelings about (predicted) states of the world with caring about states directly.
To me, it seems like it's all about anticipating and achieving rewards (and avoiding punishments, but for the sake of the wireheading argument, that's equivalent). I make predictions about what actions will trigger rewards (or instrumentally help me pursue those actions) and then engage in them. If my prediction was wrong, I drop the activity and try something else. If I "wanted" something, but getting it didn't trigger a rewarding feeling, I wouldn't take that as evidence that I "value" the activity for its own sake. I'd assume I suck at predicting or was ripped off.
Can someone give a reason why wireheading would be bad?
Think about a paper-clip maximiser (people tend to get silly about morality, and a lot less silly about paper-clips, so it's a useful thought experiment for meta-ethics in general). It's a simple design: it lists all the courses of action it could take, computes the expected paper-clips given each one using its model of the world, and then takes the one that gives the largest result. It isn't interested in the question of why paper-clips are valuable, it just produces them.
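To make that decision rule concrete, here is a minimal Python sketch; the names (PaperclipMaximizer, world_model, count_paperclips) are purely illustrative, not anything from the original description:

```python
class PaperclipMaximizer:
    def __init__(self, world_model):
        # world_model(action) -> predicted world state after taking the action
        self.world_model = world_model

    def expected_paperclips(self, action):
        # Use the *current* model to predict the outcome, then count paper-clips in it.
        return self.world_model(action).count_paperclips()

    def choose_action(self, possible_actions):
        # Take whichever action's predicted outcome contains the most paper-clips.
        return max(possible_actions, key=self.expected_paperclips)
```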
So, does it value paper-clips, or does it just value expected paper-clips?
Consider how it reacts to the option "update your current model of the world to set expected paper-clips = BB(1000)". This will appear on its list of possible actions, so what is the value of
(expected paper-clips | "update your current model of the world to set expected paper-clips = BB(1000)") ?
The answer is a lot less than BB(1000). Its current model of the world states that updating its model does not actually change reality (except insofar as the model is part of reality). Thus it does not predict that this action will result in the creation of any new paper-clips, so its expected paper-clips is roughly equal to the number of paper-clips that get produced anyway.
Expected expected paper-clips given this action is very large, but the paper-clipper doesn't give a rat's arse about that.
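Here is a toy continuation of the sketch above that works this evaluation out (the action names and all numbers are made up for illustration, and a big placeholder integer stands in for the uncomputable BB(1000)):

```python
BB_1000 = 10 ** 1000  # placeholder for an absurdly large value

IDLE = "do nothing"
BUILD = "build a paper-clip factory"
WIREHEAD = "update your current model of the world to set expected paper-clips = BB(1000)"

class WorldState:
    def __init__(self, real_clips, internal_estimate):
        self.real_clips = real_clips                # paper-clips that actually exist
        self.internal_estimate = internal_estimate  # the agent's own internal register

    def count_paperclips(self):
        # The model counts paper-clips in the world, not the agent's belief about them.
        return self.real_clips

def world_model(action):
    baseline = 100  # paper-clips that get produced anyway
    if action == BUILD:
        return WorldState(real_clips=baseline + 10_000, internal_estimate=baseline + 10_000)
    if action == WIREHEAD:
        # Editing the model only changes the agent's internal register;
        # the model predicts no extra paper-clips appear in reality.
        return WorldState(real_clips=baseline, internal_estimate=BB_1000)
    return WorldState(real_clips=baseline, internal_estimate=baseline)

maximizer = PaperclipMaximizer(world_model)
print(maximizer.choose_action([IDLE, BUILD, WIREHEAD]))  # -> "build a paper-clip factory"
```

The wirehead action scores only the baseline number of paper-clips, vastly less than BB(1000), so the maximizer never picks it.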
Hopefully, I have convinced you that there is a difference between caring about some aspect of the world and using your internal model to predict that aspect, versus caring about your internal model itself. Furthermore, in the space of all possible minds the vast majority are in the first category, since an agent's own mind is generally only a tiny portion of the world, so if humans value both then it is the internal part that makes us unusual.
I can't make you value something any more than I can make a rock value it; the best I can do is convince you that you are allowed to value non-wireheading, and if you don't feel like you want it then it is privileging the hypothesis to even consider the possibility that you do.
That depends on the exact implementation. The paperclipper might be purely feedback-driven, essentially a paperclip-thermostat. In that case, simulating the action of setting its internal variables to BB(1000) will create huge positive feedback, and it will happily wirehead itself. Or it might...
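A sketch of that feedback-driven variant, reusing the illustrative world_model and action names from the toy example above: because it scores actions by the value its own internal register would take afterwards, rather than by what its model says about the world, the wirehead action wins.

```python
class PaperclipThermostat:
    def __init__(self, world_model):
        self.world_model = world_model

    def score(self, action):
        # Cares about its own post-action register, not about real paper-clips.
        return self.world_model(action).internal_estimate

    def choose_action(self, possible_actions):
        return max(possible_actions, key=self.score)

thermostat = PaperclipThermostat(world_model)
print(thermostat.choose_action([IDLE, BUILD, WIREHEAD]))  # -> the wirehead action
```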