When I met the futurist Greg Stock some years ago, he argued that the joy of scientific discovery would soon be replaced by pills that could simulate the joy of scientific discovery. I approached him after his talk and said, "I agree that such pills are probably possible, but I wouldn't voluntarily take them."
And Stock said, "But they'll be so much better that the real thing won't be able to compete. It will just be way more fun for you to take the pills than to do all the actual scientific work."
And I said, "I agree that's possible, so I'll make sure never to take them."
Stock seemed genuinely surprised by my attitude, which genuinely surprised me.
One often sees ethicists arguing as if all human desires are reducible, in principle, to the desire for ourselves and others to be happy. (In particular, Sam Harris does this in The End of Faith, which I just finished perusing - though Harris's reduction is more of a drive-by shooting than a major topic of discussion.)
This isn't the same as arguing whether all happinesses can be measured on a common utility scale - different happinesses might occupy different scales, or be otherwise non-convertible. And it's not the same as arguing that it's theoretically impossible to value anything other than your own psychological states, because it's still permissible to care whether other people are happy.
The question, rather, is whether we should care about the things that make us happy, apart from any happiness they bring.
We can easily list many cases of moralists going astray by caring about things besides happiness. The various states and countries that still outlaw oral sex make a good example; these legislators would have been better off if they'd said, "Hey, whatever turns you on." But this doesn't show that all values are reducible to happiness; it just argues that in this particular case it was an ethical mistake to focus on anything else.
It is an undeniable fact that we tend to do things that make us happy, but this doesn't mean we should regard the happiness as the only reason for so acting. First, this would make it difficult to explain how we could care about anyone else's happiness - how we could treat people as ends in themselves, rather than instrumental means of obtaining a warm glow of satisfaction.
Second, just because something is a consequence of my action doesn't mean it was the sole justification. If I'm writing a blog post, and I get a headache, I may take an ibuprofen. One of the consequences of my action is that I experience less pain, but that is not the only consequence: it also lets me get back to writing. I do value the state of not having a headache. But I can value something for its own sake and also value it as a means to an end.
For all value to be reducible to happiness, it's not enough to show that happiness is involved in most of our decisions - it's not even enough to show that happiness is the most important consequence in all of our decisions - it must be the only consequence. That's a tough standard to meet. (I originally found this point in a Sober and Wilson paper, not sure which one.)
If I claim to value art for its own sake, then would I value art that no one ever saw? A screensaver running in a closed room, producing beautiful pictures that no one ever saw? I'd have to say no. I can't think of any completely lifeless object that I would value as an end, not just a means. That would be like valuing ice cream as an end in itself, apart from anyone eating it. Everything I value, that I can think of, involves people and their experiences somewhere along the line.
The best way I can put it is that my moral intuition appears to require both the objective and the subjective component to grant full value.
The value of scientific discovery requires both a genuine scientific discovery, and a person to take joy in that discovery. It may seem difficult to disentangle these values, but the pills make it clearer.
I would be disturbed if people retreated into holodecks and fell in love with mindless wallpaper. I would be disturbed even if they weren't aware it was a holodeck, which is an important ethical issue if some agents can potentially transport people into holodecks and substitute zombies for their loved ones without their awareness. Again, the pills make it clearer: I'm not just concerned with my own awareness of the uncomfortable fact. I wouldn't put myself into a holodeck even if I could take a pill to forget the fact afterward. That's simply not where I'm trying to steer the future.
I value freedom: When I'm deciding where to steer the future, I take into account not only the subjective states that people end up in, but also whether they got there as a result of their own efforts. The presence or absence of an external puppet master can affect my valuation of an otherwise fixed outcome. Even if people wouldn't know they were being manipulated, it would matter to my judgment of how well humanity had done with its future. This is an important ethical issue, if you're dealing with agents powerful enough to helpfully tweak people's futures without their knowledge.
So my values are not strictly reducible to happiness: There are properties I value about the future that aren't reducible to activation levels in anyone's pleasure center; properties that are not strictly reducible to subjective states even in principle.
Which means that my decision system has a lot of terminal values, none of them strictly reducible to anything else. Art, science, love, lust, freedom, friendship...
And I'm okay with that. I value a life complicated enough to be challenging and aesthetic - not just the feeling that life is complicated, but the actual complications - so turning into a pleasure center in a vat doesn't appeal to me. It would be a waste of humanity's potential, which I value actually fulfilling, not just having the feeling that it was fulfilled.
You misunderstood the first point. I did not claim you succeed at tasks you are good at. I claimed that if you define desire by "what you do", and simultaneously believe that "satisfying your desires -> happiness", then succeeding at the tasks you attempt would cause happiness. Yet that is an incomplete description of happiness.
Additionally, I obviously agree people have competing desires. But this makes it impossible to use "what I did" as a measure of "what I want". For instance, if I want to run but don't, it may be due to laziness (which is hardly a "desire for slack"), fear (which is not merely a "desire to avoid risk or embarrassment"), etc.
Your lottery description is inconsistent with other accomplishments and pleasures. For instance, people who marry [the right person] do not simply become habituated to the new pleasures and establish a new baseline. People with good or bad jobs do not become entirely habituated to those jobs - they derive happiness and unhappiness from them every day. The lottery is a different story from these, and you'll need to come up with a better explanation as to why it is different. My explanation is that we derive happiness from earning success, but not from being given it arbitrarily, and that regardless of one's desires, human nature tends to behave that way.
This is my first counterexample to your puzzle: regardless of whether one has a desire to have to earn success (and most people desire not to have to earn it), we are made happy by earning success. Other examples: we are made happy by hard work (even unsuccessful hard work), by being punished when we deserve it, by putting on a smile (even against our will), and by many other things we don't desire and some that we try to avoid.
Thank you; you've made some very good points that deserve a proper reply. However, it's getting late here and I will need more energy to go over this properly. I'll definitely consider this.
As a quick opener, because I think there's an open point here: It seems to me that all emotions serve as behavioral feedback mechanisms. But even if I am mistaken on that, and/or happiness is not desire-fulfillment feedback, what do you think its evolutionary role is? It's clearly not an arbitrary component. Not to commit the fallacy that any explanation is better than no explanation, I would nevertheless be interested in playing off this hypothesis against something other than a null model - a competing explanation. Can you offer one?