When I met the futurist Greg Stock some years ago, he argued that the joy of scientific discovery would soon be replaced by pills that could simulate the joy of scientific discovery. I approached him after his talk and said, "I agree that such pills are probably possible, but I wouldn't voluntarily take them."
And Stock said, "But they'll be so much better that the real thing won't be able to compete. It will just be way more fun for you to take the pills than to do all the actual scientific work."
And I said, "I agree that's possible, so I'll make sure never to take them."
Stock seemed genuinely surprised by my attitude, which genuinely surprised me.
One often sees ethicists arguing as if all human desires are reducible, in principle, to the desire for ourselves and others to be happy. (In particular, Sam Harris does this in The End of Faith, which I just finished perusing - though Harris's reduction is more of a drive-by shooting than a major topic of discussion.)
This isn't the same as arguing whether all happinesses can be measured on a common utility scale - different happinesses might occupy different scales, or be otherwise non-convertible. And it's not the same as arguing that it's theoretically impossible to value anything other than your own psychological states, because it's still permissible to care whether other people are happy.
The question, rather, is whether we should care about the things that make us happy, apart from any happiness they bring.
We can easily list many cases of moralists going astray by caring about things besides happiness. The various states and countries that still outlaw oral sex make a good example; these legislators would have been better off if they'd said, "Hey, whatever turns you on." But this doesn't show that all values are reducible to happiness; it just argues that in this particular case it was an ethical mistake to focus on anything else.
It is an undeniable fact that we tend to do things that make us happy, but this doesn't mean we should regard the happiness as the only reason for so acting. First, this would make it difficult to explain how we could care about anyone else's happiness - how we could treat people as ends in themselves, rather than instrumental means of obtaining a warm glow of satisfaction.
Second, just because something is a consequence of my action doesn't mean it was the sole justification. If I'm writing a blog post, and I get a headache, I may take an ibuprofen. One of the consequences of my action is that I experience less pain, but this doesn't mean it was the only consequence, or even the most important reason for my decision. I do value the state of not having a headache. But I can value something for its own sake and also value it as a means to an end.
For all value to be reducible to happiness, it's not enough to show that happiness is involved in most of our decisions - it's not even enough to show that happiness is the most important consequence in all of our decisions - it must be the only consequence. That's a tough standard to meet. (I originally found this point in a Sober and Wilson paper, not sure which one.)
If I claim to value art for its own sake, then would I value art that no one ever saw? A screensaver running in a closed room, producing beautiful pictures that no one ever saw? I'd have to say no. I can't think of any completely lifeless object that I would value as an end, not just a means. That would be like valuing ice cream as an end in itself, apart from anyone eating it. Everything I value, that I can think of, involves people and their experiences somewhere along the line.
The best way I can put it is that my moral intuition appears to require both an objective and a subjective component to grant full value.
The value of scientific discovery requires both a genuine scientific discovery, and a person to take joy in that discovery. It may seem difficult to disentangle these values, but the pills make it clearer.
I would be disturbed if people retreated into holodecks and fell in love with mindless wallpaper. I would be disturbed even if they weren't aware it was a holodeck, which is an important ethical issue if some agents can potentially transport people into holodecks and substitute zombies for their loved ones without their awareness. Again, the pills make it clearer: I'm not just concerned with my own awareness of the uncomfortable fact. I wouldn't put myself into a holodeck even if I could take a pill to forget the fact afterward. That's simply not where I'm trying to steer the future.
I value freedom: When I'm deciding where to steer the future, I take into account not only the subjective states that people end up in, but also whether they got there as a result of their own efforts. The presence or absence of an external puppet master can affect my valuation of an otherwise fixed outcome. Even if people wouldn't know they were being manipulated, it would matter to my judgment of how well humanity had done with its future. This is an important ethical issue, if you're dealing with agents powerful enough to helpfully tweak people's futures without their knowledge.
So my values are not strictly reducible to happiness: There are properties I value about the future that aren't reducible to activation levels in anyone's pleasure center; properties that are not strictly reducible to subjective states even in principle.
Which means that my decision system has a lot of terminal values, none of them strictly reducible to anything else. Art, science, love, lust, freedom, friendship...
And I'm okay with that. I value a life complicated enough to be challenging and aesthetic - not just the feeling that life is complicated, but the actual complications - so turning into a pleasure center in a vat doesn't appeal to me. It would be a waste of humanity's potential, which I value actually fulfilling, not just having the feeling that it was fulfilled.
Eliezer,
There is potentially some confusion on the term 'value' here. Happiness is not my ultimate (personal) end. I aim at other things, which in turn bring me happiness, and, as many have said, this brings me more happiness than if I had aimed at happiness directly. In this sense, happiness is not the sole object of (personal) value to me. However, I believe that the only thing that is good for a person (including me) is their happiness (broadly construed). In that sense, it is the only thing of (personal) value to me. These are two different senses of value.
Psychological hedonists are talking about the former sense of value: that we aim at personal happiness. You also mentioned that others ('psychological utilitarians', to coin a term) might claim that we only aim at the sum of happiness. I think both of these are false, and in fact probably no-one solely aims at these things. However, I think that the most plausible ethical theories are variants of utilitarianism (and fairly sophisticated ones at that), which imply that the only thing that makes an individual's life go well is that individual's happiness (broadly construed).
You could quite coherently think that you would fight to avoid the pill, and also that if it were slipped into your drink, your life would (personally) go better. Of course the major reason not to take it is that your real scientific breakthroughs benefit others too, but I gather that we are supposed to be bracketing this (obvious) possibility for the purposes of this discussion, and questioning whether you would/should take it in the absence of any external benefits. I'm claiming that you can quite coherently think that you wouldn't take it (because that is how your psychology is set up) and yet that you should take it (because it would make your life go better). Such conflicts happen all the time.
My experience in philosophy is that it is fairly common for philosophers to espouse psychological hedonism, though I have never heard anyone argue for psychological utilitarianism. You appear to be arguing against both of these positions. There is a historical tradition of arguing for (ethical) utilitarianism. Even there, the trend is strongly against it these days, and it is much more common to hear philosophers arguing that it is false. I'm not sure what you think of this position. From your comments above, it looks like you think it is false, but that may just be confusion about the word 'value'.
What use is a system of "morality" which doesn't move you?
Often, for me at least, when something I want to do conflicts with what I know is the right thing to do, I feel sad when I don't do the right thing. I would feel almost no remorse, if any, about not taking the pill.