When I met the futurist Greg Stock some years ago, he argued that the joy of scientific discovery would soon be replaced by pills that could simulate the joy of scientific discovery. I approached him after his talk and said, "I agree that such pills are probably possible, but I wouldn't voluntarily take them."
And Stock said, "But they'll be so much better that the real thing won't be able to compete. It will just be way more fun for you to take the pills than to do all the actual scientific work."
And I said, "I agree that's possible, so I'll make sure never to take them."
Stock seemed genuinely surprised by my attitude, which genuinely surprised me.
One often sees ethicists arguing as if all human desires are reducible, in principle, to the desire for ourselves and others to be happy. (In particular, Sam Harris does this in The End of Faith, which I just finished perusing - though Harris's reduction is more of a drive-by shooting than a major topic of discussion.)
This isn't the same as arguing whether all happinesses can be measured on a common utility scale - different happinesses might occupy different scales, or be otherwise non-convertible. And it's not the same as arguing that it's theoretically impossible to value anything other than your own psychological states, because it's still permissible to care whether other people are happy.
The question, rather, is whether we should care about the things that make us happy, apart from any happiness they bring.
We can easily list many cases of moralists going astray by caring about things besides happiness. The various states and countries that still outlaw oral sex make a good example; these legislators would have been better off if they'd said, "Hey, whatever turns you on." But this doesn't show that all values are reducible to happiness; it just argues that in this particular case it was an ethical mistake to focus on anything else.
It is an undeniable fact that we tend to do things that make us happy, but this doesn't mean we should regard the happiness as the only reason for so acting. First, this would make it difficult to explain how we could care about anyone else's happiness - how we could treat people as ends in themselves, rather than instrumental means of obtaining a warm glow of satisfaction.
Second, just because something is a consequence of my action doesn't mean it was the sole justification. If I'm writing a blog post, and I get a headache, I may take an ibuprofen. One of the consequences of my action is that I experience less pain, but this doesn't mean it was the only consequence, or even the most important reason for my decision. I do value the state of not having a headache. But I can value something for its own sake and also value it as a means to an end.
For all value to be reducible to happiness, it's not enough to show that happiness is involved in most of our decisions - it's not even enough to show that happiness is the most important consideration in all of our decisions - it must be the only consideration. That's a tough standard to meet. (I originally found this point in a Sober and Wilson paper, not sure which one.)
If I claim to value art for its own sake, then would I value art that no one ever saw? A screensaver running in a closed room, producing beautiful pictures that no one ever saw? I'd have to say no. I can't think of any completely lifeless object that I would value as an end, not just a means. That would be like valuing ice cream as an end in itself, apart from anyone eating it. Everything I value, that I can think of, involves people and their experiences somewhere along the line.
The best way I can put it is that my moral intuition appears to require both an objective and a subjective component to grant full value.
The value of scientific discovery requires both a genuine scientific discovery, and a person to take joy in that discovery. It may seem difficult to disentangle these values, but the pills make it clearer.
I would be disturbed if people retreated into holodecks and fell in love with mindless wallpaper. I would be disturbed even if they weren't aware it was a holodeck, which is an important ethical issue if some agents can potentially transport people into holodecks and substitute zombies for their loved ones without their awareness. Again, the pills make it clearer: I'm not just concerned with my own awareness of the uncomfortable fact. I wouldn't put myself into a holodeck even if I could take a pill to forget the fact afterward. That's simply not where I'm trying to steer the future.
I value freedom: When I'm deciding where to steer the future, I take into account not only the subjective states that people end up in, but also whether they got there as a result of their own efforts. The presence or absence of an external puppet master can affect my valuation of an otherwise fixed outcome. Even if people wouldn't know they were being manipulated, it would matter to my judgment of how well humanity had done with its future. This is an important ethical issue, if you're dealing with agents powerful enough to helpfully tweak people's futures without their knowledge.
So my values are not strictly reducible to happiness: There are properties I value about the future that aren't reducible to activation levels in anyone's pleasure center; properties that are not strictly reducible to subjective states even in principle.
Which means that my decision system has a lot of terminal values, none of them strictly reducible to anything else. Art, science, love, lust, freedom, friendship...
And I'm okay with that. I value a life complicated enough to be challenging and aesthetic - not just the feeling that life is complicated, but the actual complications - so turning into a pleasure center in a vat doesn't appeal to me. It would be a waste of humanity's potential, which I value actually fulfilling, not just having the feeling that it was fulfilled.
The basic point of the article seems to be "Not all utilons are (reducible to) hedons", which confuses me from the start. If happiness is not a generic term for "perception of a utilon-positive outcome", what is it? I wouldn't say that all utilons can be reduced to hedons, but only because I see no difference between the two in the first place. I honestly don't comprehend the difference between "State A makes me happier than state B" and "I value state A more than state B". If hedons aren't exactly equivalent to utilons, what are they?
An example might help: I was arguing with a classmate of mine recently. My claim was that every choice he made boiled down to whichever option made him happiest. Looking back on it, I should have said the option whose anticipation gave him the most happiness, since basing a choice on its actual results rather than the anticipated ones would break causality. Anyway, he argued that his choices were not based on happiness. He offered the example that, while he didn't enjoy his job, he still went because he needed to support his son. My response was that while his reaction to his job as an isolated experience was negative, his happiness from {job + son eating} was greater than his happiness from {no job + son starving}.
I thought at the time that we were disagreeing about basic motivations, but this article and its responses have caused me to wonder if, perhaps, I don't use the word 'happiness' in the standard sense.
To give a hyperbolic thought exercise: if I could choose between all existing minds (except mine, to make the point about relative values) experiencing intense agony for a year and my own death, I think I'd be likely to choose my death. This is not because I expect to experience happiness after death, but because considering the state of the universe in the second scenario brings me more happiness than considering the state of the universe in the first. As far as I can tell, this is exactly what it means to place a higher value on the relative pleasure and continued functionality of every other mind than on my own continued existence.
To anyone who argues that utilons aren't exactly equivalent to hedons (whether because utilons aren't hedons at all, or because utilons are merely reducible to hedons): please explain to me what you think happiness is. My sudden realisation that you exist makes me realise you seem amazingly common.
Consider the following two world states:

1. Your friend has died, but you believe they are still alive and well.
2. Your friend is alive and well, and you believe they are alive and well.

The hedonic scores for 1 and 2 are identical, but 2 has more utilons if you value your friend's life.
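To make the comparison concrete, here is a minimal sketch in Python - my own formalization, not anything from the post or the comments above - assuming that hedons are computed only from the agent's subjective state, while utilons may also depend on facts the agent never perceives. The names WorldState, hedons, and utilons are hypothetical.

```python
# A toy model (my assumption): hedons are a function of the agent's subjective
# state alone, while utilons may also depend on facts about the world that the
# agent never perceives.
from dataclasses import dataclass


@dataclass
class WorldState:
    believes_friend_alive: bool   # the agent's subjective belief
    friend_actually_alive: bool   # an objective fact the agent can't check


def hedons(state: WorldState) -> int:
    """Pleasure derived purely from what the agent experiences or believes."""
    return 10 if state.believes_friend_alive else 0


def utilons(state: WorldState) -> int:
    """Value assigned to the whole state, experienced or not."""
    bonus = 5 if state.friend_actually_alive else 0  # valuing the life itself
    return hedons(state) + bonus


state_1 = WorldState(believes_friend_alive=True, friend_actually_alive=False)
state_2 = WorldState(believes_friend_alive=True, friend_actually_alive=True)

assert hedons(state_1) == hedons(state_2)   # identical hedonic scores
assert utilons(state_2) > utilons(state_1)  # but state 2 carries more utilons
```

On this toy picture, the two functions come apart exactly when the agent's beliefs and the world diverge; if you think they can never come apart, you are in effect claiming that nothing outside subjective experience carries value.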