You seem to classify each argument against wireheading as a bias: since the argument doesn't persuade you, the ones who are persuaded must make some error in judgement.
I did not intend this. I simply find them all very unconvincing and (briefly) gave my reasons why. I assume that at least some of them rely on hidden assumptions I don't see and only look like an error to me. I don't have an opinion on wireheading either way (I'm deliberately suspending any judgment), yet I can only see good arguments for it and none against it. If that were really the case, I would expect many more experienced rationalists to be convinced of it (and I highly respect the opinions of pretty much everyone I linked to), so I'm operating on the assumption of an inferential gap.
[about akrasia] Now there are some cynics (very often found among economists) who say that this is a confusion - that people always want what they really do, by definition, and the perceived wanting is a self-serving illusion useful for signalling purposes. Well, I don't agree.
I don't think that's cynical, and I do find it very plausible. Explaining akrasia (which I do have) in terms of being mistaken about what I like and having an (often unconscious) conflict between different parts of the brain works just fine for me. The moment I realize I'm not actually enjoying what I do, I either stop immediately or find that I'm fulfilling some other emotional demand, typically avoiding guilt or embarrassment.
Intuition pumps are a legitimate sort of argument.
No, just no, especially if they give different results based on minor modifications, like with Nozick's experience machine. (Or look at the reactions to Eliezer's Three Worlds Collide and various failed utopias.) I'd rather have no opinion than base it on a complex intuition pump.
I agree with your comments on 1) and 8). The other points I addressed in other comments here, I think.
I assume that at least some of them rely on hidden assumptions I don't see and only look like an error to me. ... I'm operating on the assumption of an inferential gap.
I don't think there is an inferential gap of the usual type (i.e. implicit hidden knowledge of facts or arguments). It's more probably a value disagreement, made harder by your objection to well-definedness of "value".
...Explaining akrasia (which I do have) in terms of being mistaken about what I like and having an (often unconscious) conflict between different parts of the brain works
I've been thinking about wireheading and the nature of my values. Many people here have defended the importance of external referents or complex desires. My problem is, I can't understand these claims at all.
To clarify, I mean wireheading in the strict "collapsing into orgasmium" sense. A successful implementation would identify all the reward circuitry and directly stimulate it, or do something equivalent. It would essentially be a vastly improved heroin. A good argument for either keeping complex values (e.g. by requiring at least a personal matrix) or external referents (e.g. by showing that a simulation can never suffice) would work for me.
Also, I use "reward" as shorthand for any enjoyable feeling, since "pleasure" tends to be used for one specific feeling among bliss, excitement, and so on, and "it's not about feeling X, but X and Y" is still wireheading after all.
I tried collecting all related arguments I could find. (Roughly sorted from weak to very weak, as I understand them, plus links to example instances. I also searched all the literature and other sites I could think of, but didn't find any other arguments that weren't blatantly incoherent.)
(There have also been technical arguments against specific implementations of wireheading. I'm not concerned with those, as long as they don't show impossibility.)
Overall, none of this sounds remotely plausible to me. Most of it is outright question-begging or relies on intuition pumps that don't even work for me.
It confuses me that others might be convinced by arguments of this sort, so it seems likely that I have a fundamental misunderstanding or there are implicit assumptions I don't see. I fear that I have a large inferential gap here, so please be explicit and assume I'm a Martian. I genuinely feel like Gamma in A Much Better Life.
To me, all this talk about "valuing something" sounds like someone talking about "feeling the presence of the Holy Ghost". I don't mean this in a derogatory way, but it matches the pattern "sense something funny, therefore some very specific and otherwise unsupported claim". How do you know it's not just, you know, indigestion?
What is this "valuing"? How do you know that something is a "value", terminal or not? How do you know what it's about? How would you know if you were mistaken? What about unconscious hypocrisy or confabulation? Where do these "values" come from (i.e. what process creates them)? Overall, it sounds to me like people are confusing their feelings about (predicted) states of the world with caring about states directly.
To me, it seems like it's all about anticipating and achieving rewards (and avoiding punishments, but for the sake of the wireheading argument, it's equivalent). I make predictions about what actions will trigger rewards (or instrumentally help me pursue those actions) and then engage in them. If my prediction was wrong, I drop the activity and try something else. If I "wanted" something, but getting it didn't trigger a rewarding feeling, I wouldn't take that as evidence that I "value" the activity for its own sake. I'd assume I suck at predicting or was ripped off.
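The loop I'm describing can be sketched as a toy model (purely illustrative, not a claim about actual brain architecture; the activities and reward numbers are made up):

```python
# Toy sketch of the predict-act-update loop described above.
# All names and values are invented for illustration.

def choose_activity(predicted):
    # Pick whichever activity currently predicts the highest reward.
    return max(predicted, key=predicted.get)

def update(predicted, activity, actual, rate=0.5):
    # Move the prediction toward the reward actually received; a
    # disappointing outcome lowers the activity's predicted value,
    # so it gets dropped in favor of something else.
    predicted[activity] += rate * (actual - predicted[activity])

predicted = {"reading": 0.8, "gaming": 0.6}       # what I expect to enjoy
actual_rewards = {"reading": 0.2, "gaming": 0.7}  # reading disappoints

for _ in range(5):
    act = choose_activity(predicted)
    update(predicted, act, actual_rewards[act])

# After one disappointment, "reading" is predicted below "gaming",
# and the loop switches activities: "I drop the activity and try
# something else."
```

The point of the sketch is that nothing in the loop refers to a "value" over and above predicted reward; the behavior people describe as "valuing reading" falls out of the prediction alone.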
Can someone give a reason why wireheading would be bad?