Why do you think humans are best understood as such utility maximizers? If we were, shouldn't everyone feel an aversion to, or rather, indifference toward wireheading? After all, if you offered an expected paperclip maximizer the option of wireheading, it would simply reject it as if you had offered to build a bunch of staples. It would have no strong reaction either way. That isn't what's happening with humans.
I'm trying to think of a realistic complex utility function that would predict such behavior, but can't think of anything.
Yeah, true. For humans, pleasure is at least a consideration. I guess I see it as part of our brain structure used in learning, a part that has acquired its own purpose because we're adaptation-executers, not fitness maximizers. But then, so is liking science, so it's not like I'm dismissing it. If I had a utility function, pleasure would definitely be in there.
So how do you like something without having it be all-consuming? First, care about other things too - I have terms in my hypothetical utility function that refer to external reality. Second, have there be a maximum possible effect - either because there is a maximum amount of reward we can feel, or because what registers in the brain as "reward" quickly decreases in value as you get more of it. Third, have the other stuff you care about outweigh just pursuing the one term to its maximum.
I actually wrote a comment about this recently, which is an interesting coincidence :D I've become more and more convinced that a bounded utility function is most human-like. The question is then whether the maximum possible utility from internal reward outweighs everyday values of everything else or not.
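The structure described above (multiple terms, a saturating internal-reward term, and other values that outweigh it) can be sketched numerically. This is a toy illustration, not a claim about actual brain implementation; every function name, term, and number here is invented:

```python
import math

def bounded(x, cap):
    """Diminishing returns: approaches `cap` but never exceeds it."""
    return cap * (1 - math.exp(-x))

def utility(reward, science, relationships):
    # The internal-reward term saturates at a low cap; the terms that
    # refer to external reality saturate higher, so pursuing reward
    # alone to its maximum can't outweigh everyday values of the rest.
    return bounded(reward, 10) + bounded(science, 30) + bounded(relationships, 30)

everyday = utility(reward=2, science=5, relationships=5)
wirehead = utility(reward=1e9, science=0, relationships=0)  # reward maxed, all else zero
print(everyday > wirehead)  # True: the balanced life scores higher
```

Under these (arbitrary) caps, even unboundedly large stimulation of the reward term tops out at 10, so a bounded agent like this would decline orgasmium.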
I agree with you on the bounded utility function.
I still need to think more about whether expected utility maximizers are a good human model. My main problem is that I can't see realistic implementations in the brain (and pathways for evolution to get them there). I'll focus my study more on that; I think I dismissed them too easily.
I've been thinking about wireheading and the nature of my values. Many people here have defended the importance of external referents or complex desires. My problem is, I can't understand these claims at all.
To clarify, I mean wireheading in the strict "collapsing into orgasmium" sense. A successful implementation would identify all the reward circuitry and directly stimulate it, or do something equivalent. It would essentially be a vastly improved heroin. A good argument for either keeping complex values (e.g. by requiring at least a personal matrix) or external referents (e.g. by showing that a simulation can never suffice) would work for me.
Also, I use "reward" as shorthand for any enjoyable feeling, since "pleasure" tends to refer to one specific feeling among bliss, excitement and so on, and "it's not about feeling X, but X and Y" is still wireheading after all.
I tried collecting all the related arguments I could find. (Roughly sorted from weak to very weak, as I understand them, plus links to example instances. I also searched whatever literature and other sites I could think of, but didn't find any other (not blatantly incoherent) arguments.)
(There have also been technical arguments against specific implementations of wireheading. I'm not concerned with those, as long as they don't show impossibility.)
Overall, none of this sounds remotely plausible to me. Most of it is outright question-begging or relies on intuition pumps that don't even work for me.
It confuses me that others might be convinced by arguments of this sort, so it seems likely that I have a fundamental misunderstanding or there are implicit assumptions I don't see. I fear that I have a large inferential gap here, so please be explicit and assume I'm a Martian. I genuinely feel like Gamma in A Much Better Life.
To me, all this talk about "valuing something" sounds like someone talking about "feeling the presence of the Holy Ghost". I don't mean this in a derogatory way, but it matches the pattern "sense something funny, therefore believe some very specific and otherwise unsupported claim". How do you know it's not just, you know, indigestion?
What is this "valuing"? How do you know that something is a "value", terminal or not? How do you know what it's about? How would you know if you were mistaken? What about unconscious hypocrisy or confabulation? Where do these "values" come from (i.e. what process creates them)? Overall, it sounds to me like people are confusing their feelings about (predicted) states of the world with caring about states directly.
To me, it seems like it's all about anticipating and achieving rewards (and avoiding punishments, but for the sake of the wireheading argument, that's equivalent). I make predictions about what actions will trigger rewards (or instrumentally help me pursue those actions) and then engage in them. If my prediction was wrong, I drop the activity and try something else. If I "wanted" something, but getting it didn't trigger a rewarding feeling, I wouldn't take that as evidence that I "value" the activity for its own sake. I'd assume I suck at predicting or was ripped off.
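That predict-act-correct loop can be written down as a toy model. Everything here (activity names, payoff rates, the learning rate) is invented for illustration; the point is only that the behavior described above needs nothing but reward prediction and error correction, with no "terminal values" anywhere:

```python
# Behavior driven purely by reward prediction: pursue whatever seems most
# rewarding; when the prediction turns out wrong ("ripped off"), the
# estimate drops and the agent tries something else.

true_reward = {"science": 0.9, "junk_food": 0.2}  # actual payoff of each activity
predicted   = {"science": 0.4, "junk_food": 0.8}  # agent starts with wrong guesses
ALPHA = 0.3  # how quickly predictions get corrected

for _ in range(50):
    action = max(predicted, key=predicted.get)       # do what seems most rewarding
    error = true_reward[action] - predicted[action]  # prediction error
    predicted[action] += ALPHA * error               # wrong guess -> estimate drops

print(max(predicted, key=predicted.get))  # -> science
```

The agent initially chases junk_food, gets disappointed a few times, abandons it, and settles on the genuinely rewarding activity — exactly the "drop the activity and try something else" dynamic, with no appeal to valuing anything for its own sake.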
Can someone give a reason why wireheading would be bad?