muflax comments on Why No Wireheading? - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I agree that if humans made decisions based on utility calculations that aren't grounded in direct sensations, then that'd be a good argument against wireheading.
I see, however, no reason to believe that humans actually do such things, except that it would make utilitarianism look really neat and practical. (The fact that currently no-one actually manages to act based on utilitarianism of any kind seems like evidence against it.) It doesn't look realistic to me. People rarely sacrifice themselves for causes and it always requires tons of social pressure. (Just look at suicide bombers.) Their actual motivations are much more nicely explained in terms of the sensations (anticipated and real) they get out of it. Assuming faulty reasoning, conflicting emotional demands and just plain confabulation for the messier cases seems like the simpler hypothesis, as we already know all those things exist and are the kinds of things evolution would produce.
Whenever I encounter a thought of the sort "I value X, objectively", I always manage to dig into it and find the underlying sensations that give it that value. If I put them on hold (or realize that they are mistakenly attached, because X wouldn't actually cause the sensations I expect), then that value disappears. I can see my values grounded in sensations; I can't manage to find any others. Models based on that assumption seem to work just fine (like PCT), so I'm not sure I'm actually missing anything.