How familiar are you with expected utility maximizers? Do you know about the difference between motivation and reward (or "wanting" and "liking") in the brain?
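(For reference, the textbook formulation: an expected utility maximizer picks

$$a^* = \arg\max_{a} \sum_{s} P(s \mid a)\, U(s),$$

where the utility function $U$ is defined over states of the world $s$, not over the agent's own reward signal, which is presumably the distinction the question is driving at.)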
I think I'm familiar with that and understand the difference. I don't see its relevance. Assuming "wanting" is basically the dopamine version of "liking" seems more plausible and strictly simpler than assuming there's a really complex hypothetical calculation based on states of the world being performed.
Also, I suspect you are construing wireheading too narrowly here. It's not just the pleasure center (or even just some part of it, as in "inducing permanent orgasms"); it would take care of all desirable sensations, including the sensation of having one's wants fulfilled. The intuition "I'd get wireheaded and still feel like I want something else" is false, which is why I used "rewards" instead of "pleasure". (And it doesn't require rewiring one's preferences.)
But the trouble with this argument is that it doesn't take into account other sorts of evidence, the most notable being the output of our self-modeling processes. If I could wirehead, I wouldn't.
Confabulation and really bad introspective access seem much more plausible to me. If you modify details in thought experiments that shouldn't affect wireheading results (like reversing Nozick's experience machine, so that people are asked whether to leave a machine they have been in all along rather than whether to enter one), people do actually change their answers, even though they previously claimed to have based their decisions on criteria that clearly can't have mattered.
I'd much rather side with revealed preferences, which show that plenty of people are interested in crude wireheading (heroin, WoW, and FarmVille come to mind) and that the better those options get, the more people choose them.
Assuming "wanting" is basically the dopamine version of "liking" seems more plausible and strictly simpler
Why assume? The wanting/liking distinction is actually there in the brain. It's okay to model reality with simpler stuff sometimes, but to look at reality and say "not simple enough" is bad. The model that says "it would be rewarding, therefore I must want it" is too simple (see the sketch after this exchange).
> than assuming there's a really complex hypothetical calculation based on states of the world being performed.
Except the brain is a computer that processes data from the senses...
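To make "too simple" concrete, here's a toy dissociation (the option names and numbers are invented for illustration): if "wanting" and "liking" are separate signals, then a model with a single "reward" number cannot represent an option that is strongly craved but barely enjoyed.

```python
from dataclasses import dataclass

# Toy wanting/liking dissociation. The signal that pulls behavior
# ("wanting") and the hedonic signal ("liking") are tracked separately,
# so one can be high while the other is low. Values are illustrative.

@dataclass
class Option:
    wanting: float  # incentive salience: how strongly behavior is pulled
    liking: float   # hedonic impact: how good it actually feels

craved_but_joyless = Option(wanting=0.9, liking=0.2)
liked_but_uncraved = Option(wanting=0.1, liking=0.8)

# If behavior follows "wanting", the agent picks what it barely enjoys:
chosen = max([craved_but_joyless, liked_but_uncraved], key=lambda o: o.wanting)
assert chosen is craved_but_joyless
assert chosen.liking < liked_but_uncraved.liking
```

Collapsing both numbers into one "reward" loses exactly this pattern, which is the objection being made here.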
I've been thinking about wireheading and the nature of my values. Many people here have defended the importance of external referents or complex desires. My problem is, I can't understand these claims at all.
To clarify, I mean wireheading in the strict "collapsing into orgasmium" sense. A successful implementation would identify all the reward circuitry and directly stimulate it, or do something equivalent. It would essentially be a vastly improved heroin. A good argument for either keeping complex values (e.g. by requiring at least a personal matrix) or external referents (e.g. by showing that a simulation can never suffice) would work for me.
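As a toy restatement of that strict sense (the function names, weights, and ceiling here are invented for illustration): the ordinary path computes reward from what the world delivers, while wireheading drives the circuitry directly at its ceiling, so the world drops out of the computation entirely.

```python
MAX_OUTPUT = 1.0  # invented stand-in for the reward circuitry's ceiling

def reward_via_world(sensations: dict) -> float:
    # Ordinary path: reward is computed from what the world delivers.
    return min(MAX_OUTPUT,
               0.7 * sensations.get("bliss", 0.0)
               + 0.3 * sensations.get("excitement", 0.0))

def reward_wireheaded(_sensations: dict) -> float:
    # Strict wireheading: the circuitry is stimulated directly at its
    # ceiling, so the input (the world) no longer enters the computation.
    return MAX_OUTPUT

# Whatever the world offers, the wireheaded path at least matches it:
assert reward_wireheaded({}) >= reward_via_world({"bliss": 1.0, "excitement": 1.0})
```

Note that nothing in the wireheaded version rewires preferences or goals; only the source of the reward signal changes, which is why "vastly improved heroin" is the right comparison.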
Also, I use "reward" as short-hand for any enjoyable feeling, as "pleasure" tends to be used for a specific one of them, among bliss, excitement and so on, and "it's not about feeling X, but X and Y" is still wireheading after all.
I tried collecting all related arguments I could find. (Roughly sorted from weak to very weak, as I understand them, plus links to example instances. I also searched whatever literature and other sites I could think of, but didn't find other (not blatantly incoherent) arguments.)
(There have also been technical arguments against specific implementations of wireheading. I'm not concerned with those, as long as they don't show impossibility.)
Overall, none of this sounds remotely plausible to me. Most of it is outright question-begging or relies on intuition pumps that don't even work for me.
It confuses me that others might be convinced by arguments of this sort, so it seems likely that I have a fundamental misunderstanding or there are implicit assumptions I don't see. I fear that I have a large inferential gap here, so please be explicit and assume I'm a Martian. I genuinely feel like Gamma in A Much Better Life.
To me, all this talk about "valuing something" sounds like someone talking about "feeling the presence of the Holy Ghost". I don't mean this in a derogatory way, but it matches the pattern "sense something funny, therefore some very specific and otherwise unsupported claim". How do you know it's not just, you know, indigestion?
What is this "valuing"? How do you know that something is a "value", terminal or not? How do you know what it's about? How would you know if you were mistaken? What about unconscious hypocrisy or confabulation? Where do these "values" come from (i.e. what process creates them)? Overall, it sounds to me like people are confusing their feelings about (predicted) states of the world with caring about states directly.
To me, it seems like it's all about anticipating and achieving rewards (and avoiding punishments, but for the sake of the wireheading argument, that's equivalent). I make predictions about which actions will trigger rewards (or will instrumentally help me pursue those actions) and then engage in them. If my prediction was wrong, I drop the activity and try something else. If I "wanted" something, but getting it didn't trigger a rewarding feeling, I wouldn't take that as evidence that I "value" the activity for its own sake. I'd assume I suck at predicting or was ripped off.
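That loop is mechanical enough to write down. Here is a minimal sketch of the model I'm describing, as a simple bandit-style learner (the actions and payoffs are invented for illustration): predict the reward of each option, act on the best prediction, and correct the prediction when reality disagrees.

```python
import random

ACTIONS = ["work", "socialize", "wirehead"]

def reward(action: str) -> float:
    # Invented payoffs: ordinary actions pay off unreliably through the
    # world; direct stimulation always maxes the circuit out.
    if action == "wirehead":
        return 1.0
    return random.choice([0.0, 0.3, 0.6])

# Running-average predictions: "what do I expect this to feel like?"
estimates = {a: 0.5 for a in ACTIONS}  # start mildly optimistic
counts = {a: 0 for a in ACTIONS}

for _ in range(1000):
    # Mostly act on the best current prediction, occasionally explore.
    if random.random() < 0.1:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=estimates.get)
    r = reward(action)
    counts[action] += 1
    # Wrong predictions get corrected; activities that disappoint lose
    # their estimate and stop being chosen.
    estimates[action] += (r - estimates[action]) / counts[action]

print(estimates)  # "wirehead" ends up as the best-predicted option
```

On this picture, "valuing" something just means predicting a high reward from it; nothing in the loop refers to states of the world except through the rewards they trigger.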
Can someone give a reason why wireheading would be bad?