Far too few people take the time to wonder what the purpose and function of happiness is.
Seeking happiness as an end in itself is usually extremely destructive. Like pain, pleasure is a method for getting us to seek out or avoid certain behaviors, and many of these behaviors had consequences whose properties could be easily understood in terms of the motivators. (Things are more complicated now that we're not living in the same world we evolved in.)
Instead of reasoning about goals, most people just produce complex systems of rationalizations to justify their desires. That's usually pretty destructive, too.
Far too few people take the time to wonder what the purpose and function of happiness is.
You're talking as if this purpose were a property of happiness itself, rather than something that we assign to it. As a matter of historical fact, the evolutionary function of happiness is quite clear. The meaning that we assign to happiness is an entirely separate issue.
Seeking happiness as an end in itself is usually extremely destructive.
Because, er, it makes people unhappy?
"One often sees ethicists arguing that all desires are in principle reducible to the desire for happiness"? How often? If you're talking about philosopher ethicists, in general you see them arguing against this view.
Yes, J, I very often see this. By sheer coincidence, for example, I was reading this by Shermer just now, and came across:
"I believe that humans are primarily driven to seek greater happiness, but the definition of such is completely personal and cannot be dictated and should not be controlled by any group. (Even so-called selfless acts of charity can be perceived as directed toward self-fulfillment--the act of making someone else feel good, makes us feel good. This is not a falsifiable statement, but it is observable in people's actions and feelin...
Actually, seeking merely subjective happiness without any other greater purpose does often tend to make people unhappy. Or even if they manage to become somewhat happy, they will usually become even happier if they seek some other purpose as well.
One reason for this is that part of what makes people happy is their belief that they are seeking and attaining something good; so if they think they are seeking something better than happiness, they will tend to be happier than if they were seeking merely happiness.
Of course this probably wouldn't apply to a plea...
I don't think a 'joy of scientific achievement' pill is possible. One could be made that would bathe you in it for a while, but your mental tongue would probe for your accomplishment and find nothing. Maybe the pill could just turn your attention away and carry you along, but I doubt it. Vilayanur Ramachandran gave a TED talk about the cause of people's phantom limbs - your brain detects an inconsistency and gets confused and induces pain. Something similar might prevent an 'achievement' pill from having the intended effect.
I'm nervous about the word happiness because I suspect it's a label for a basket of slippery ideas and sub-idea feelings. Still, something I don't understand about your argument is that when you demonstrate that for you happiness is not a terminal value you seem to arbitrarily stop the chain of reasoning. Terminating your inquiry is not the same as having a terminal value.
If you say you value something and I know that not everyone values that thing, I naturally wonder why you value it. You say it's a terminal value, but when I ask myself why you value it i...
Eliezer, the exchange with Greg Stock reminds me strongly of Nozick's experience machine argument, and your position agrees with Nozick's conclusion.
One does, in real life, hear of drugs inducing a sense of major discovery, which disappears when the drug wears off. Sleep also has a reputation for producing false feelings of discovery. Some late-night pseudo-discovery is scribbled down, and in the morning it turns out to be nothing (if it's even legible).
I have sometimes wondered to what extent mysticism and "enlightenment" (satori) is centered around false feelings of discovery.
An ordinary, commonly experienced, non-drug-induced false feeling with seeming cognitive content is deja vu.
Eliezer, if we reduce every desire to "happiness", then haven't we just defined away the meaning of the word? I mean, love and the pursuit of knowledge and watching a scary movie are all rather different experiences. To say that they are all about happiness -- well then, what wouldn't be? If everything is about happiness, then happiness doesn't signify anything of meaning, does it?
James, are you purposefully parodying the materialist philosophy based on the disproved Newtonian physics?
Constant-- deja vu is not always necessarily contentless. See the work of Ian Stevenson. Mystical experiences are not necessarily centered around anything false-- see "The Spiritual Brain", by Beauregard (the neuroscientist who has studied these phenomena more than any other researcher.)
Eliezer,
There is potentially some confusion on the term 'value' here. Happiness is not my ultimate (personal) end. I aim at other things, which in turn bring me happiness, and, as many have said, this brings me more happiness than if I aimed at it directly. In this sense, it is not the sole object of (personal) value to me. However, I believe that the only thing that is good for a person (including me) is their happiness (broadly construed). In that sense, it is the only thing of (personal) value to me. These are two different senses of value.
Psychological hedonists a...
If I admitted that I found the idea of being a "wirehead" very appealing, would you think less of me?
So how about antidepressants (think SSRIs à la Prozac)? They might not be Huxley's soma or quite as convincing as the pill described in the post, but still, they do simulate something that may be considered happiness. And I'm told they also work for people who aren't depressed. Or for that matter, a whole lot of other drugs such as MDMA.
Thinking about it, "simulate" is entirely the wrong word, really. If they really work, they do achieve something along the lines of happiness and do not just simulate it. Sorry about the doublepost.
Toby, I think you should probably have mentioned Derek Parfit as a reference when stating that "I'm claiming that you can quite coherently think that you wouldn't take it (because that is how your psychology is set up) and yet that you should take it (because it would make your life go better). Such conflicts happen all the time.", as the claim needs substantial background to be obvious, but as I'm mentioning him here you don't need to anymore.
Robin Hanson seems to take the simulation argument seriously. If it is the case that our reality is simulated, then aren't we already in a holodeck? So then what's so bad about going from this holodeck to another?
I agree with Eliezer here. Not all values can be reduced to desire for happiness. For some of us, the desire not to be wireheaded or drugged into happiness is at least as strong as the desire for happiness. This shouldn't be a surprise since there were and still are psychoactive substances in our environment of evolutionary adaptation.
I think we also have a more general mechanism of aversion towards triviality, where any terminal value that becomes "too easy" loses its value (psychologically, not just over evolutionary time). I'm guessing this is...
So then what's so bad about going from this holodeck to another?
The idea that this whole universe, including us, is simulated means that we ourselves are part of the simulation. Since we are conscious and know that we are, we know that the simulated beings can be (and very likely are) conscious if they seem so. If they are, then they are "real" in an important sense, maybe the most important sense. They are not mere mindless wallpaper.
I think in order to make the simulation argument work, the simulation needs to be unreal, the inhabitants other tha...
I fail to understand how the "mindless wallpaper" of the next level of simulation must be "unreal" while our simulated selves "are and we know we are conscious". They cannot be unreal merely because they are simulations because in the thought-experiment we ourselves are simulations but, according to you, still real.
I fail to understand how the "mindless wallpaper" of the next level of simulation must be "unreal" while our simulated selves "are and we know we are conscious". They cannot be unreal merely because they are simulations because in the thought-experiment we ourselves are simulations but, according to you, still real.
No, you completely misunderstood what I said. I did not say that the "mindless wallpaper" (scare quotes) of the next level must be unreal. I said that in order for the philosophical thought experiment to m...
TGGP, the presumption is that the sex partners in this simulation have behaviors driven by a different algorithm, not software based on the human mind, software which is not conscious but is nonetheless capable of fooling a real person embedded in the simulation. Like a very advanced chatbot.
"Simulation" is a silly term. Whatever is, is real.
""Simulation" is a silly term. Whatever is, is real."
This is true, but "simulation" is still a useful word; it's used to refer to a subset of reality which attempts to resemble the whole thing (or a subset of it), but is not causally closed. "Reality", as we use the word, refers to the whole big mess which is causally closed.
Wei, yes my comment was less clear than I was hoping. I was talking about the distinction between 'psychological hedonism' and 'hedonism' and I also mentioned the many-person versions of these theories ('psychological utilitarianism' and 'utilitarianism'). Let's forget about the many-person versions for the moment and just look at the simple theories.
Hedonism is the theory that the only thing good for each individual is his or her happiness. If you have two worlds, A and B and the happiness for Mary is higher in world A, then world A is better for Mary. Thi...
Toby, what are your grounds for thinking that (ethical) hedonism is true, other than that happiness appears to be something that almost everyone wants? Is it something you just find so obvious you can't question it, or are there reasons that you can describe? (The obvious reason seems to me to be "We can produce something that's at least roughly right this way, and it's nice and simple". Something along those lines?)
g, you have suggested a few of my reasons. I have thought quite a lot about this and could write many pages, but I will just give an outline here.
(1) Almost everything we want (for ourselves) increases our happiness. Many of these things evidently have no intrinsic value themselves (such as Eliezer's ice cream case). We often think we want them intrinsically, but on closer inspection, if we really ask whether we would want them if they didn't make us happy we find the answer is 'no'. Some people think that certain things resist this argument by having some...
Toby, how do you get around the problem that the greatest sum of happiness across all lives probably involves turning everyone into wireheads and putting them in vats? Or, in an even more extreme scenario, turning the universe into computers that all do nothing but repeatedly run a program that simulates a person in an ultimate state of happiness. Assuming that we have access to limited resources, these methods seem to maximize happiness for a given amount of resources.
I'm sure you agree that this is not something we do want. Do you think that it is something we should want, or that the greatest sum of happiness across all lives can be achieved in some other way?
In a slogan, one wants to be both happy and worthy of happiness. (One needn't incorporate Kant's own criteria of worthiness to find his formulation useful.)
Drake, what do you mean by "worthy of happiness"? How does that formulation differ, for example, from my desire to both be happy and continue to exist as myself? (It seems to me like the latter desire also explains the pro-happiness, anti-blissing-out attitude.)
I value many things intrinsically! This may make me happy or not, but I don't rely on the feelings of possible happiness when I make decisions. I see intrinsic value in happiness itself, but also as a means to other values, such as art, science, beauty, complexity, truth, etc., which I often value even more than happiness. But sentient life may be the highest value. Why would we accept happiness as our highest terminal value when it is just a way to make living organisms do certain things? Of course it feels good and is important, but it is still rather arbitra...
According to the theory of evolution, organisms can be expected to have approximately one terminal value - which is - very roughly speaking - making copies of their genomes. There /is/ intragenomic conflict, of course, but that's a bit of a detail in this context.
Organisms that deviate very much from this tend to be irrational, malfunctioning or broken.
The idea that there are some values not reducible to happiness does not prove that there are "a lot of terminal values".
Happiness was never God's utility function in the first place. Happiness is just a carrot.
It seems like a vague reply - since the supposed misconception is not specified.
The "Evolutionary Psychology" post makes the point that values reside in brains, while evolutionary causes lie in ancestors. So, supposedly, if I attribute goals to a petunia, I am making a category error.
This argument is very literal-minded. When biologists talk about plants having the goal of spreading their seed about, it's intended as shorthand. Sure, they /could/ say that the plant's ancestors exhibited differential reproductive success in seed distribution, a...
Happiness is just a carrot.
And reproductive fitness is just a way to add intelligent agents to a dumb universe that begin with a big bang. Now that the intelligent agents are here, I suspect the universe no longer needs reproductive fitness.
Tim, if you understand that the "values" of evolution qua optimization process are not the values of the organisms it produces, what was the point of your 12:20 PM comment of March 6? "Terminal values" in the post refers to the terminal values of organisms. It is, as Eliezer points out, an empirical fact that people don't consciously seek to maximize fitness or any one simple value. Sure, that makes us "irrational, malfunctioning or broken" by the metaphorical standards of some metaphorical personification of evolution, but I should think that's rather beside the point.
Brains are built by genes. Those brains that reflect the optimisation target of the genes are the ones that will become ancestors. So it is reasonable - on grounds of basic evolutionary biology - to expect that human brains will generate behaviour resulting in the production of babies - thus reflecting the target of the optimisation process that constructed them.
In point of fact, human brains /do/ seem to be pretty good at making babies. The vast majority of their actions can be explained on these grounds.
That is not to say that people will necessarily con...
Is maximizing your expected reproductive fitness your primary goal in life, Tim?
When you see others maximizing their expected reproductive fitness, does that make you happy? Do you approve? Do you try to help them when you can?
More details of my views on the subject can be found here.
With the rise of "open source biology" in the coming decades, you'll probably be able to sequence your own non-coding DNA and create a pack of customized cockroaches. Here are your Nietzschean uebermensch: they'll share approx. 98% of your genome and do a fine job of maximizing your reproductive fitness.
Customized cockroaches are far from optimal for Tim because Tim understands that the most powerful tool for maximizing reproductive fitness is a human-like consciousness. "Consciousness" is Tim's term; I would have used John Stewart's term, "skill at mental modelling." Thanks for the comprehensive answer to my question, Tim!
Re: genetic immortality via customized cockroaches:
Junk DNA isn't immortal. It is overwritten by mutations, LINEs and SINEs, etc. In a geological eyeblink, the useless chromosomes would be simply deleted - rendering the proposal ineffective.
Sam Harris expands on his view of morality in his recent book The Moral Landscape, but it hardly addresses this question at all. I attended a talk he gave on the book and when an audience member asked whether it would be moral to just give everyone cocaine or some sort of pure happiness drug, Harris basically said "maybe."
In the agonizing process of reading all the Yudkowsky Less Wrong articles, this is the first one I have had any disagreement with whatsoever.
This is coming from a person who was actually convinced by the biased and obsolete 1997 singularity essay by Yudkowsky.
Only, it's not so much a disagreement as it is a value differential. I don't care about the processes by which one achieves happiness. The end results are what matter, and I'll be damned if I accept having one less hedon or one less utilon out there because of a perceived value in working toward them rather...
Since you're differentiating utilons from hedons, doesn't that kind of follow the thrust of the article? That is, the point that the OP is arguing against is that utilons are ultimately the same thing as hedons; that all people really want is to be happy and that everything else is an instrumental value towards that end.
Your example of the perfect anti-depressant is I think somewhat misleading; the worry when it comes to wire-heading is that you'll maximize hedons to the exclusion of all other types of utilon. Curing depression is awesome not only because it increases net hedons, but also because depression makes it hard to accomplish anything at all, even stuff that's about whole other types of utilons.
this would make it difficult to explain how we could care about anyone else's happiness - how we could treat people as ends in themselves, rather than instrumental means of obtaining a warm glow of satisfaction
And why should we actually treat people as "ends in themselves"? What's bad about treating everything except one's own happiness as instrumental?
Taking it a bit further from a pill: if we could trust an AI to put the whole of humanity into a Matrix-like state, and to keep humanity alive in that state longer than humanity itself could survive living in the real world, while running a simulation of life with maximum happiness in each brain until it ran out of energy, would you advocate it? I know I would, and I don't really see any reason not to.
If I claim to value art for its own sake, then would I value art that no one ever saw? A screensaver running in a closed room, producing beautiful pictures that no one ever saw? I’d have to say no. I can’t think of any completely lifeless object that I would value as an end, not just a means. That would be like valuing ice cream as an end in itself, apart from anyone eating it. Everything I value, that I can think of, involves people and their experiences somewhere along the line.
I'm commenting to register disagreement. I was really s...
...Sounds a lot like splitting hairs since each consequence you list still has the same outcome, pleasure/happiness. So why not skip over it all?
Funny, the other day I was thinking of this, but from the other side: What if we've already taken the pill?
Imagine Morpheus comes to you and reveals that the world we live in is fake, and most of the new science is simulated to make it more fun. Real mechanics is just a polished version of Newton's (pretty much Lagrangian/Hamiltonian mechanics). There is no such thing as a speed limit in the universe. Instantaneous travel to every point of the universe is possible, and has already been done. No aliens either (not that it would be impossible, we just happen...
I think we all strive to benefit. Happiness is just one of the possible components of benefit. There are other components, for example, knowledge of the truth.
"I value freedom: When I'm deciding where to steer the future, I take into account not only the subjective states that people end up in, but also whether they got there as a result of their own efforts."
I am somewhat the same but must recognise that it is possible that, were I to be forced into pure bliss, I would not want to go back. My value set may shift or reveal itself to not be what I thought it was. (I think it is possible, maybe even normal, to be somewhat mistaken about which values one lives by.) In fact, it seems exceedingly plausible to m...
I would suggest considering the more abstract concept of "well-being", which contains both happiness and freedom. That's the steel-manned form of the consequentialist's moral cornerstone.
When I met the futurist Greg Stock some years ago, he argued that the joy of scientific discovery would soon be replaced by pills that could simulate the joy of scientific discovery. I approached him after his talk and said, "I agree that such pills are probably possible, but I wouldn't voluntarily take them."
And Stock said, "But they'll be so much better that the real thing won't be able to compete. It will just be way more fun for you to take the pills than to do all the actual scientific work."
And I said, "I agree that's possible, so I'll make sure never to take them."
Stock seemed genuinely surprised by my attitude, which genuinely surprised me.
One often sees ethicists arguing as if all human desires are reducible, in principle, to the desire for ourselves and others to be happy. (In particular, Sam Harris does this in The End of Faith, which I just finished perusing - though Harris's reduction is more of a drive-by shooting than a major topic of discussion.)
This isn't the same as arguing whether all happinesses can be measured on a common utility scale - different happinesses might occupy different scales, or be otherwise non-convertible. And it's not the same as arguing that it's theoretically impossible to value anything other than your own psychological states, because it's still permissible to care whether other people are happy.
The question, rather, is whether we should care about the things that make us happy, apart from any happiness they bring.
We can easily list many cases of moralists going astray by caring about things besides happiness. The various states and countries that still outlaw oral sex make a good example; these legislators would have been better off if they'd said, "Hey, whatever turns you on." But this doesn't show that all values are reducible to happiness; it just argues that in this particular case it was an ethical mistake to focus on anything else.
It is an undeniable fact that we tend to do things that make us happy, but this doesn't mean we should regard the happiness as the only reason for so acting. First, this would make it difficult to explain how we could care about anyone else's happiness - how we could treat people as ends in themselves, rather than instrumental means of obtaining a warm glow of satisfaction.
Second, just because something is a consequence of my action doesn't mean it was the sole justification. If I'm writing a blog post, and I get a headache, I may take an ibuprofen. One of the consequences of my action is that I experience less pain, but this doesn't mean it was the only consequence, or even the most important reason for my decision. I do value the state of not having a headache. But I can value something for its own sake and also value it as a means to an end.
For all value to be reducible to happiness, it's not enough to show that happiness is involved in most of our decisions - it's not even enough to show that happiness is the most important consequent in all of our decisions - it must be the only consequent. That's a tough standard to meet. (I originally found this point in a Sober and Wilson paper, not sure which one.)
If I claim to value art for its own sake, then would I value art that no one ever saw? A screensaver running in a closed room, producing beautiful pictures that no one ever saw? I'd have to say no. I can't think of any completely lifeless object that I would value as an end, not just a means. That would be like valuing ice cream as an end in itself, apart from anyone eating it. Everything I value, that I can think of, involves people and their experiences somewhere along the line.
The best way I can put it, is that my moral intuition appears to require both the objective and subjective component to grant full value.
The value of scientific discovery requires both a genuine scientific discovery, and a person to take joy in that discovery. It may seem difficult to disentangle these values, but the pills make it clearer.
I would be disturbed if people retreated into holodecks and fell in love with mindless wallpaper. I would be disturbed even if they weren't aware it was a holodeck, which is an important ethical issue if some agents can potentially transport people into holodecks and substitute zombies for their loved ones without their awareness. Again, the pills make it clearer: I'm not just concerned with my own awareness of the uncomfortable fact. I wouldn't put myself into a holodeck even if I could take a pill to forget the fact afterward. That's simply not where I'm trying to steer the future.
I value freedom: When I'm deciding where to steer the future, I take into account not only the subjective states that people end up in, but also whether they got there as a result of their own efforts. The presence or absence of an external puppet master can affect my valuation of an otherwise fixed outcome. Even if people wouldn't know they were being manipulated, it would matter to my judgment of how well humanity had done with its future. This is an important ethical issue, if you're dealing with agents powerful enough to helpfully tweak people's futures without their knowledge.
So my values are not strictly reducible to happiness: There are properties I value about the future that aren't reducible to activation levels in anyone's pleasure center; properties that are not strictly reducible to subjective states even in principle.
Which means that my decision system has a lot of terminal values, none of them strictly reducible to anything else. Art, science, love, lust, freedom, friendship...
And I'm okay with that. I value a life complicated enough to be challenging and aesthetic - not just the feeling that life is complicated, but the actual complications - so turning into a pleasure center in a vat doesn't appeal to me. It would be a waste of humanity's potential, which I value actually fulfilling, not just having the feeling that it was fulfilled.