
Are wireheads happy?

105 points | Post author: Yvain | 01 January 2010 04:41PM

Related to: Utilons vs. Hedons, Would Your Real Preferences Please Stand Up

And I don't mean that question in the semantic "but what is happiness?" sense, or in the deep philosophical "but can anyone not facing struggle and adversity truly be happy?" sense. I mean it in the totally literal sense. Are wireheads having fun?

They look like they are. People and animals connected to wireheading devices get upset when the wireheading is taken away and will do anything to get it back. And it's electricity shot directly into the reward center of the brain. What's not to like?

Only recently have neuroscientists begun to recognize a difference between "reward" and "pleasure", or, if you prefer, between "wanting" and "liking". The two are usually closely correlated: you want something, you get it, then you feel happy. That's the simple principle behind our entire consumer culture. But do neuroscience and our own experience really support it?

It would be too easy to point out times when people want things, get them, and then later realize they weren't so great. That could be a simple case of misunderstanding the object's true utility. What about wanting something, getting it, realizing it's not so great, and then wanting it just as much the next day? Or what about not wanting something, getting it, realizing it makes you very happy, and then continuing not to want it?

The first category, "things you do even though you don't like them very much", sounds like many drug addictions. Smokers may enjoy smoking, and they may want to avoid the physiological signs of withdrawal, but neither of those is enough to explain their reluctance to quit. I don't smoke, but I made the mistake of starting a can of Pringles yesterday. If you asked me my favorite food, there are dozens of things I would say before "Pringles". Right now, and for the vast majority of my life, I feel no desire to go and get Pringles. But once I've had that first chip, my motivation for a second chip goes through the roof, without my subjective assessment of how tasty Pringles are changing one bit.

Think of the second category as "things you procrastinate even though you like them." I used to think procrastination applied only to things you disliked but did anyway. Then I tried to write a novel. I loved writing. Every second I was writing, I was thinking "This is so much fun". And I never got past the second chapter, because I just couldn't motivate myself to sit down and start writing. Other things in this category for me: going on long walks, doing yoga, reading fiction. I can know with near certainty that I will be happier doing X than Y, and still go and do Y.

Neuroscience provides some basis for this. A University of Michigan study analyzed the brains of rats eating a favorite food. They found separate circuits for "wanting" and "liking", and were able to knock out either circuit without affecting the other (it was actually kind of cute - they measured the number of times the rats licked their lips as a proxy for "liking", though of course they had a highly technical rationale behind it). When they knocked out the "liking" system, the rats would eat exactly as much of the food without making any of the satisfied lip-licking expressions, and the areas of the brain thought to be correlated with pleasure wouldn't light up on imaging. Knock out "wanting", and the rats seem to enjoy the food just as much when they get it, but aren't especially motivated to seek it out. To quote the science [1]:

Pleasure and desire circuitry have intimately connected but distinguishable neural substrates. Some investigators believe that the role of the mesolimbic dopamine system is not primarily to encode pleasure, but "wanting" i.e. incentive-motivation. On this analysis, endomorphins and enkephalins - which activate mu and delta opioid receptors most especially in the ventral pallidum - are most directly implicated in pleasure itself. Mesolimbic dopamine, signalling to the ventral pallidum, mediates desire. Thus "dopamine overdrive", whether natural or drug-induced, promotes a sense of urgency and a motivation to engage with the world, whereas direct activation of mu opioid receptors in the ventral pallidum induces emotionally self-sufficient bliss.

The wanting system is activated by dopamine, and the liking system is activated by opioids. There are enough connections between them that their activity is strongly correlated, but the correlation isn't perfect; in fact, opioid activation is less common than dopamine activation. Another quote:

It's relatively hard for a brain to generate pleasure, because it needs to activate different opioid sites together to make you like something more. It's easier to activate desire, because a brain has several 'wanting' pathways available for the task. Sometimes a brain will like the rewards it wants. But other times it just wants them.
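The asymmetry in that quote (wanting is easy to trigger, liking is hard) can be sketched with a toy simulation. Nothing here comes from the actual study; the thresholds and noise values are invented purely to illustrate how two systems driven by a shared input can be strongly correlated yet fire at very different rates.

```python
import random

random.seed(0)

def reward_event():
    """One encounter with a reward. A shared 'drive' input couples the two
    systems; independent noise and different thresholds decouple them.
    All numbers are made up for illustration."""
    drive = random.random()
    wanting = drive + 0.3 * random.random() > 0.5  # dopamine-like: low threshold
    liking = drive + 0.3 * random.random() > 0.9   # opioid-like: high threshold
    return wanting, liking

events = [reward_event() for _ in range(10_000)]
n_wanted = sum(w for w, _ in events)
n_liked = sum(l for _, l in events)
n_both = sum(w and l for w, l in events)

print(n_wanted > n_liked)   # True: "wanting" fires far more often than "liking"
print(n_both < n_wanted)    # True: many rewards are wanted but never liked
```

In this toy model the brain often "just wants" a reward without liking it, while everything it likes it also happens to want, which is the lopsided overlap the quote describes.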

So you could go to all the trouble of finding a black market brain surgeon who'll wirehead you, and end up not even being happy. You'll just really, really want to keep the wirehead circuit running.

Problem: large chunks of philosophy and economics are based upon wanting and liking being the same thing.

By definition, if you choose X over Y, then X is a higher utility option than Y. That means utility represents wanting and not liking. But good utilitarians (and, presumably, artificial intelligences) try to maximize utility (or do they?). This correlates contingently with maximizing happiness, but not necessarily. In a worst-case scenario, it might not correlate at all - two possible such scenarios being wireheading and an AI without the appropriate common sense.
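To make the wedge concrete, here is a minimal sketch of a utility-maximizing chooser; the option names and scores are invented for illustration. "Wanting" plays the role of decision utility (what choices reveal), and "liking" plays the role of experienced happiness.

```python
# Toy decision problem: each option carries a 'wanting' score (decision
# utility) and a 'liking' score (experienced happiness). Values are invented.
options = {
    "wirehead button": {"wanting": 10, "liking": 2},
    "pringles":        {"wanting": 7,  "liking": 4},
    "long walk":       {"wanting": 3,  "liking": 9},
}

def choice(opts):
    """What an observer of behavior sees: the agent picks whatever
    it wants most. By definition, this is the highest-utility option."""
    return max(opts, key=lambda o: opts[o]["wanting"])

def happiest(opts):
    """What a happiness-maximizer would have picked instead."""
    return max(opts, key=lambda o: opts[o]["liking"])

print(choice(options))    # "wirehead button": chosen, hence highest utility
print(happiest(options))  # "long walk": most enjoyed, yet never chosen
```

As long as the two scores are merely correlated rather than identical, maximizing the utility that choices reveal can systematically miss the option that would actually be enjoyed most, which is exactly the worst-case wedge described above.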

Thus the deep and heavy ramifications. A more down-to-earth example came to mind when I was reading something by Steven Landsburg recently (not recommended). I don't have the exact quote, but it was something along the lines of:

According to a recent poll, two out of three New Yorkers say that, given the choice, they would rather live somewhere else. But all of them have the choice, and none of them live anywhere else. A proper summary of the results of this poll would be: two out of three New Yorkers lie on polls.

This summarizes a common strain of thought in economics, the idea of "revealed preferences". People tend to say they like a lot of things, like family or the environment or a friendly workplace. Many of the same people who say these things then go and ignore their families, pollute, and take high-paying but stressful jobs. The traditional economic explanation is that the people's actions reveal their true preferences, and that all the talk about caring about family and the environment is just stuff people say to look good and gain status. If a person works hard to get lots of money, spends it on an iPhone, and doesn't have time for their family, the economist will say that this proves that they value iPhones more than their family, no matter what they may say to the contrary.

The difference between enjoyment and motivation provides an argument that could rescue these people. It may be that a person really does enjoy spending time with their family more than they enjoy their iPhone, but they're more motivated to work and buy iPhones than they are to spend time with their family. If this were true, people's introspective beliefs and public statements about their values would be true as far as it goes, and their tendency to work overtime for an iPhone would be as much a "hijacking" of their "true preferences" as a revelation of them. This accords better with my introspective experience, with happiness research, and with common sense than the alternative.

Not that the two explanations are necessarily entirely contradictory. One could come up with a story about how people are motivated to act selfishly but enjoy acting morally, which allows them to tell others a story about how virtuous they are while still pursuing their own selfish gain.

Go too far toward the liking direction, and you risk something different from wireheading only in that the probe is stuck in a different part of the brain. Go too far in the wanting direction, and you risk people getting lots of shiny stuff they thought they wanted but don't actually enjoy. So which form of good should altruists, governments, FAIs, and other agencies in the helping people business respect?

Sources/Further Reading:

1. Wireheading.com, especially on a particular University of Michigan study

2. New York Times: A Molecule of Motivation, Dopamine Excels at its Task

3. Slate: The Powerful and Mysterious Brain Circuitry...

4. Related journal articles (1, 2, 3)

Comments (90)

Comment author: adamzerner 04 May 2014 03:52:44PM 2 points [-]

The big point I took away from this article is that wanting and liking are different, and thus we should be skeptical of "revealed preferences".

But the title seemed to imply that the article wanted to address the question of whether or not we should wirehead. The last paragraph seems to argue that we should be really careful with wireheading, because we could get it wrong and not really know that we got it wrong.

Go too far toward the liking direction, and you risk something different from wireheading only in that the probe is stuck in a different part of the brain. Go too far in the wanting direction, and you risk people getting lots of shiny stuff they thought they wanted but don't actually enjoy. So which form of good should altruists, governments, FAIs, and other agencies in the helping people business respect?

I agree with this, but given that it's a central argument of the article, I think it could use a longer explanation.

Comment author: adamzerner 04 May 2014 03:45:57PM 2 points [-]

I'm a neuroscience major and have known about the different circuits for liking vs. wanting. And it's always been a belief of mine that people's revealed preferences are often just wrong, and that this is a huge problem with our economy. But somehow I never connected this to the liking/wanting circuits being different. Thanks!

Comment author: adamzerner 04 May 2014 03:37:37PM 1 point [-]

they measured the number of times the rats licked their lips as a proxy for "liking", though of course they had a highly technical rationale behind it

Could you give a quick summary of this rationale?

Comment author: Xaos 03 April 2013 06:49:39PM 0 points [-]

"One could come up with a story about how people are motivated to act selfishly but enjoy acting morally..."

Actually, I think a lot of stories are like that.

Because "CONFLICT IS DRAMA!!!!!1!!!one!!!!", a whole lot of stories I've been reading involve the characters having an arc that goes like this:

- Problem occurs; everyone has different ideas about how to solve it.
- Ignored dissenting character, perhaps with prodding by certain outside forces, blows up, acts like a jerk, and storms off.
- Dissenting character realizes that regardless of how much better their own plan was, they've let a lot of relatively small and meaningless things drive a wedge between themselves and their friends, right when their friends needed them to be there, and "Oh I was a fool!" and blah blah blah kiss and make up, Friendship is Magic.

Unless it's a zombie apocalypse story, in which case the character in question NEVER stops fighting until they die, or at least they stop and reveal "they were a good person all along who made bad decisions" about ten minutes or so before the zombies eat them.

Comment author: mwengler 26 June 2012 09:34:36PM 1 point [-]

So which form of good should altruists, governments, FAIs, and other agencies in the helping people business respect?

Somehow, trying to figure out the policy an FAI or paternalist government should have by examining our addictive reactions strikes me as like doing transportation planning by looking at people's reactions to red, yellow, and green lights. Not that people's reactions to these things are irrelevant to traffic planning, but rather that figuring out where people are trying to go is even more important than figuring out their reactions to traffic signals.

The post talks about signals in the brain that motivate us powerfully. Almost certainly, evolution favored these signalling mechanisms because of where it tended to take us rather than how it tended to direct us there.

Maybe an FAI, rather than figuring out how to make us feel satisfied or happy, would instead figure out how to evoke these positive responses when we were doing something good for us, and negative responses when we were doing something bad for us. Maybe an FAI would re-wire us rather than wirehead us.

How do we stop our CEV from bringing us all beyond human, and should we even want to?

Comment author: mwengler 26 June 2012 08:41:27PM 0 points [-]

For research on human happiness that really does a great job of presenting non-intuitive results in a compelling fashion, I recommend Daniel Gilbert's Stumbling on Happiness.

For a great book in a lot of ways, I recommend Robert Frank's "The Darwin Economy." I read it because, in his interview on Russ Roberts' "EconTalk" podcast, Robert Frank made the claim that 100 years from now Darwin will be recognized as the greatest economist.

Some of their points relevant to wireheading: happiness seems mostly relative, relative to where you were recently. In engineering/physics terms, I think of it as there being no DC term: if you attach to something pleasure-producing (whether wirehead, or just plain head), it is delicious, and delicious for a while, but its deliciousness declines back towards the middle, even if you don't stay attached long enough for it to settle at merely OK.

Comment author: AGirlAlone 10 February 2012 09:15:44AM 0 points [-]

I wonder. I grew up with experience in multiple systems of meditation, and found a way that works for me. Without electrodes or drugs or Nobel Prizes, I can choose to feel happy and relaxed and whatever. When I think about it, meditation can feel more pleasing and satisfying than every other experience in my life. Yet (luckily?) I do not feel any compulsion to do it in place of many other things, or to advocate it. This is not because of willpower. While it lasts I like it and want it, as if there's fulfillment of purpose, and when it's over I cannot recall the feeling faithfully enough to desire it more than I desire chocolate. Also, I cannot reliably reproduce the feeling: it occurs only some of the time I try, cannot be had too frequently (no idea why), and cannot be consciously prolonged. So I consider it a positive addition to my life, especially helpful in yanking me out of episodes of gloom.

This of course raises multiple questions. There's such a thing as ambient mood, as opposed to current momentary pleasure, and if a person is pissed off too often to concentrate productively, would improving their mood be the right choice, especially if the improvement has an upper bound and doesn't lead to the person madly pressing the button indefinitely? Hell, if there's any way to make people happier with no other change and without causing crippling obsession (maybe there's such a quirk in the brain, with want and pleasure detached from each other, to be exploited safely with meditation; maybe the button is in responsible hands), would it be acceptable? Though the meditation sometimes makes me wonder if the mind can directly change the world (I changed my emotional reality, and it felt real). Is impaired rationality an acceptable price then?

Comment author: [deleted] 19 November 2011 09:24:31PM 2 points [-]

My grandma always assumes that if I don't want to have [some kind of food] right now, that means I don't like it.

Comment author: Uni 28 June 2011 05:43:00AM *  0 points [-]

So which form of good should altruists, governments, FAIs, and other agencies in the helping people business respect?

Governments should give people what people say they want, rather than giving people what the governments think will make people happier, whenever they can't do both. But this is not because it's intrinsically better for people to get what they want than to get what makes them happier (it isn't); it's because people will resent what they perceive as paternalism in governments, and because they won't pay taxes and obey laws in general if they resent their governments. Without taxes and law-abiding citizens, there will not be much happiness in the long run. So, simply for the sake of happiness maximizing, governments should (except, possibly, in some very, very extreme situations) just do what people want.

It's understandable that people want others to respect what they want, rather than wanting others to try to make them happier: even if we are not all experts ourselves on what will make us happier (not all people know about happiness research), we may need to make our own mistakes in order to really come to trust that what people say works works, and that what people say doesn't work doesn't work. Also, some of governments' alleged benevolent paternalism "for people's own good" (for example Orwellian surveillance in the name of the "war on terror") may even be part of a plan to enslave or otherwise exploit the people. We may know these things subconsciously, and that may explain why some of us are so reluctant to conclude that what we want has no intrinsic value and that pleasure is the only thing that has intrinsic value. The instrumental value of letting people have what they want (rather than paternalistically giving them what some government thinks they need) is so huge that saying it has "mere" instrumental value feels like neglecting how huge a value it has. However, it doesn't really have intrinsic value; it just feels that way, because we are not accustomed to thinking that something that has only instrumental value can have such a huge instrumental value.

For example, freedom of speech is of huge importance, but not primarily because people want it, but primarily because it provides happiness and prevents too much suffering from happening. If it were the case that freedom of speech didn't provide any happiness and didn't prevent any suffering, but people still eagerly wanted it, there would be no point in letting anybody have freedom of speech. However, this would imply either that being denied freedom of speech in no way caused any form of suffering in people, or that, if it caused suffering, then getting freedom of speech wouldn't relieve any of that suffering. That is a hypothetical scenario so hard to imagine that I think the fact that it is so hard to imagine is the reason why people have difficulties accepting the truth that freedom of speech has merely instrumental value.

Comment author: timtyler 26 January 2011 01:26:22AM *  1 point [-]

Problem: large chunks of philosophy and economics are based upon wanting and liking being the same thing.

I don't think that is true.

"Wanting" maps onto expected utility; "liking" is the reward signal - the actual utility.

That framing surely makes it seem like pretty standard economics.

There are some minor footnotes about how the reward signal can sometimes be self-generated - e.g. when you know you should have got the reward, but were just unlucky.

Comment author: nazgulnarsil 25 January 2011 08:55:42PM 2 points [-]

As a person who plans to wirehead themselves if other positive futures don't work out, I find this very interesting but unconvincing.

Comment author: [deleted] 14 November 2010 08:11:05PM 17 points [-]

I've seen a fair amount of happiness research, and happiness tends towards the "liking" end of the scale. What makes people happy is giving to charity, meditating, long walks, and so on; what makes people unhappy is commuting, work stress, and child-rearing. Religion, old age, and living in Utah also make people happy.

A life designed to maximize happiness, according to happiness researchers, would not be a hedonistic orgy, as one might imagine. You are actually happier with a fair degree of self-restraint. But it would have a lot more peaceful hobbies and fewer grand, stressful goals (like strenuous careers and parenthood.) To me, the happiness-optimized life does not sound fun. It is not something I would look forward to with anticipation and eagerness. Statistically speaking, we'd like such a life, but we wouldn't want it. Myself, I'd rather be given what I want than what would make me happy.

Comment author: diegocaleiro 06 January 2011 01:58:08AM 7 points [-]

Wow, this is unexpected on so many levels for me. You have access to happiness research, yet you would stick to what you want instead. I don't mean to insult or imply there is anything wrong; I'm just genuinely staggered at the fact.

I have read some thousands of pages of happiness research, and started to follow its advice. I'm more generous, I take long walks, I cherish friendships, I care very little for a long career, I go to evolutionary environments all the time (the park, swimming pools and beaches), I pursue objectives which really ought to make me say "I was doing something I consider important", and I ignore money, having children, and some parts of familial obligations.

We had the same info, and we took such different paths...... this is awesome.

So I suppose I am much happier but am in a constant struggle not to want lots of things that I naturally would. So I'm in a kind of strenuous effort of self-control leading to constant bliss. I suppose that you are less happier (though probably not in any way perceivable from a first person perspective) but way more relaxed, prone to be guided by your desires and wishes, and willing to actually go there and do that thing you feel like doing.....

I wish I was you for two weeks or something, if only that were possible, and then I came back....

Comment author: ramanspectre 17 January 2012 03:44:13AM 1 point [-]

"I suppose that you are less happier (though probably not in any way perceivable from a first person perspective) but way more relaxed, prone to be guided by your desires and wishes, and willing to actually go there and do that thing you feel like doing....."

What makes you think that the person you are responding to is more relaxed? You'd think that constantly pursuing wants would make them less relaxed since it takes a lot of energy to pursue worldly things.

And what makes you think that you aren't relaxed?

Comment author: NancyLebovitz 15 November 2010 12:31:58AM 0 points [-]

The other aspect is that the low-intensity hedonic life might suit a majority or a plurality of people, but not optimize happiness for a large minority.

Comment author: Jack 14 November 2010 08:19:59PM 2 points [-]

Living in Utah does not make people happy.

</causality police>

Comment author: [deleted] 14 November 2010 08:27:07PM 1 point [-]

Sure. But if I wanted to live in the best place to make me happy, and all I knew was the happiness distribution by geographic location, it would be dumb to choose to live somewhere other than the happiest place, right?

Comment author: ata 14 November 2010 08:28:10PM 4 points [-]

Yes, but happiness distribution by geographic location isn't all you know.

Comment author: diegocaleiro 06 January 2011 02:00:26AM *  2 points [-]

Also it is not relevant, since happiness varies infinitely more due to other circumstances: roughly 50% unchangeable genes, 40% how you deal with the lemons (and the strawberries) life gives you, and 10% all your life conditions, from marriage, to children, to how rich you are. A tiny, tiny bit of those 10% is determined by where you live. (Lyubomirsky, 2008)

Comment author: Dean 03 February 2010 08:01:04AM 2 points [-]

I was reading a free online book, "The Authoritarians" by Robert Altemeyer. One of his many findings from studying fundamentalist authoritarian-follower types is that many deal with the guilt of doing something morally wrong by asking God for forgiveness, after which they feel closer to "much less guilty" than to "appreciably less guilty". It may not be a wire in the head, but it should spare one some suffering.

I also very much like Dan Gilbert's TED talk on synthesizing happiness. I use it all the time, because "it really is not so bad" and "it turned out for the best".

Comment author: AndrewH 25 January 2010 08:05:37PM 3 points [-]

I don't smoke, but I made the mistake of starting a can of Pringles yesterday. If you asked me my favorite food, there are dozens of things I would say before "Pringles". Right now, and for the vast majority of my life, I feel no desire to go and get Pringles. But once I've had that first chip, my motivation for a second chip goes through the roof, without my subjective assessment of how tasty Pringles are changing one bit.

What is missing from this is the effort (which eats up the limited willpower budget) required to get the second Pringle chip. Your motivation for a second Pringle chip would be much lower if you had bought only one bag of Pringle chips, and all bags contained one chip. However, your motivation to have another classof(Pringle) = potato chip no doubt rises, due to the fact that chips are on your mind rather than iPhones.

Talking about effort allows us to bring in habits into the discussion, which you might define as sets of actions that, due to their frequent use, are much less effort to perform.

The difference between enjoyment and motivation provides an argument that could rescue these people. It may be that a person really does enjoy spending time with their family more than they enjoy their iPhone, but they're more motivated to work and buy iPhones than they are to spend time with their family.

Alternatively, for reasons that may have been good at the time (working hard to buy a house for said family), work has become habitual while spending time with the family has not. Hence, work is the default set of actions, the default no-effort state, and anything that takes time off work requires effort. Spending time with the family could do this, yet buying an iPhone with the tons of money this person has would not.

A way of summarizing the effect of effort: it is a function of a particular person's set of no-effort (no-willpower) actions. This function defines how much 'wanting' is required to do an action; among actions with the same amount of 'wanting', the less effortful ones are more likely to get done.

Willpower plays a big role in this, in that you can spend willpower to pull yourself out of the default state (a default state such as being in New York), but it only lasts so long.

Comment author: giles_english 12 January 2010 09:35:55AM 0 points [-]

Does this liking/enjoying dichotomy explain the various flavours of erotic masochism?

Comment author: ciphergoth 12 January 2010 10:11:16AM *  2 points [-]

Speaking from my own experience, I worry a lot that I'm often drawn to play computer games for reasons other than enjoying them. I never worry this about SM.

Comment author: pdf23ds 12 January 2010 09:48:03AM *  1 point [-]

I think you mean the "enjoyment/motivation dichotomy" or the "wanting/liking dichotomy". Many sexual masochists report a sort of euphoria and dissociation that begins to arise after a certain amount of pain, which would seem to be a positive liking. Many also report a positive thrill from humiliation that might be considered to overwhelm the negative parts of the experience. (I think pretty much all report that if the same things were to happen in a non-role-playing situation, they would not like or enjoy them. But it's hard to test that.)

Comment author: clockbackward 10 January 2010 05:45:29PM 4 points [-]

Perhaps it is true that our modest technology for altering brain states (simple wireheading, recreational drugs, magnetic stimulation, etc.) leads only to stimulation of the "wanting" centers of the brain and to simple (though at times intense) pleasurable sensations. On the other hand though, it seems almost inevitable that as the secrets of the brain are progressively unlocked, and as our ability to manipulate the brain grows, it will be possible to generate all sorts of brain states, including those "higher" ones associated with love, accomplishment, fulfillment, joy, religious experiences, insight, bliss, tranquility and so on. Hence, while your analysis appears to be quite relevant with regard to wireheading today, I am skeptical that it is likely to apply much to the brain technology that could exist 50 years from now.

Comment author: Peter_Twieg 08 January 2010 01:38:45AM 0 points [-]

I realize that I'm late to the game on this post, but I have to say that, as an economist, I found the take-home point about revealed preference to be quite interesting, and it makes me wonder about the extent to which further neuroscience research will find systematic disjunctions in everyday circumstances between what motivates us and what gives us pleasure. Undoubtedly this would be leveraged into new sorts of paternalistic arguments... I'm guessing we'll need another decade or two before we have the neuropaternalist's equivalent of Nudge, however.

Comment author: Sebastian_Hagen 04 January 2010 03:18:14PM *  9 points [-]

The first category, "things you do even though you don't like them very much" sounds like many drug addictions.

It's not limited to drugs or even similar physical stimuli like tasty food; in my personal experience you can get the same effect with computer games. There are games that can be plenty of fun in the beginning (while you're learning what works), but stop being so once you abstract that into a simple set of rules by which you can (usually) win, yet nevertheless stay quite addictive in that latter phase. Whenever I play Dungeon Crawl Stone Soup for more than a few hours, I inevitably end up at a point where I don't even need to verbally think about what I'm doing for 95%+ of the wall-clock time spent playing, but that doesn't make it much easier to quit.

Popular vocabulary suggests that this is a fairly common effect.

Comment author: k3nt 05 January 2010 05:21:48AM 4 points [-]

Agree 100%. I just played a flash game last night and then again this morning, because I "just wanted to finish it." The challenge was gone, I had it all figured out, and there was nothing left but the mopping up ... which took three hours of my life. At the end of it, I told myself, "Well, that was a waste of time." But I was also glad to have completed the task.

It's probably a very good thing that I've never tried any drug stronger than alcohol.

Comment author: Utilitarian 04 January 2010 12:01:31AM *  4 points [-]

Great post! I completely agree with the criticism of revealed preferences in economics.

As a hedonistic utilitarian, I can't quite understand why we would favor anything other than the "liking" response. Converting the universe to utilitronium producing real pleasure is my preferred outcome. (And fortunately, there's enough of a connection between my "wanting" and "liking" systems that I want this to happen!)

Comment author: Pablo_Stafforini 04 January 2010 12:39:52AM 1 point [-]

I agree that this is a great post. (I'm sorry I didn't make that clear in my previous comment.)

I can't quite understand your parenthetical remark. I thought your position was that you wanted, rather than liked, experiences of liking to be maximized. Since you can want this regardless of whether you like it, I don't see why the connection you note between your 'wanting' and 'liking' systems is actually relevant.

Comment author: Utilitarian 04 January 2010 01:02:30AM *  2 points [-]

Actually, you're right -- thanks for the correction! Indeed, in general, I want altruistic equal consideration of the pleasure and pain of all sentient organisms, but this need have little connection with what I like.

As it so happens, I do often feel pleasure in taking utilitarian actions, but from a utilitarian perspective, whether that's the case is basically trivial. A miserable hard-core utilitarian would be much better for the suffering masses than a more happy only-sometimes-utilitarian (like myself).

Comment author: Pablo_Stafforini 04 January 2010 01:17:20AM 0 points [-]

Thanks for the clarification. :-)

Comment author: EvelynM 03 January 2010 11:29:15PM 8 points [-]

I noticed the distinction between wanting and liking as a result of my meditation practice. I began to derive great pleasure from very simple things, like the quality of an intake of breath, or the color combination of trees and sky.

And, I began to notice a significant decrease in compulsive wanting, such as for excess food, and for any amount of alcohol.

I also noticed a significant decrease in my startle reflex.

Similar results have been reported from Davidson's lab at the University of Wisconsin. http://psyphz.psych.wisc.edu/

Comment author: ThufirHawat 07 April 2013 04:37:40PM 0 points [-]

How do you meditate? I've tried to get into meditation before, but never found a variant I was comfortable with.

Comment author: EvelynM 08 April 2013 11:20:37PM 1 point [-]

I follow my breath. A meditation teacher can offer you constructive suggestions, specific to the difficulties you are having.

Comment author: bbleeker 28 January 2011 02:07:21PM 2 points [-]

You just convinced me to take up meditation again. :-)

Comment author: EvelynM 01 February 2011 08:17:37PM 1 point [-]

Thank you! Meditation continues to be a great benefit for me.

Comment author: k3nt 05 January 2010 05:23:39AM *  0 points [-]

Thanks for the link. I live in Madison and had no idea that this interesting stuff was being done here.

Comment author: EvelynM 07 January 2010 12:01:17AM 0 points [-]

You're welcome!

Comment author: CronoDAS 02 January 2010 11:59:11PM 3 points [-]

It's hard to track down specific things from that wireheading.com site, but this seems to be a good overview. Of particular note are a couple of excerpts:

The results of these experiments indicate that reinforcing brain stimulation may have two distinct effects: (a) it activates pathways related to natural drives, and (b) it stimulates reinforcement pathways normally activated by natural rewards. The empirical observations seem to contradict classic "drive-reduction" theories of reinforcement (reinforcement appears to be associated with increased drive in the EBS paradigm). However, it is not difficult to construct a plausible alternate hypothesis: Animals may self-stimulate because the stimulation provides the experience of an intense drive that is instantly reduced due to the concurrent activation of related reward neurons. This interpretation accounts neatly for many of the apparent paradoxes we have already encountered. Priming is necessary, according to this interpretation, because EBS reinforcement not only activates reward pathways but also provides the reason why that should be pleasurable (Deutsch, 1976). (This also accounts for rapid extinction, as well as the decreased efficacy of intermittent reinforcement.) The hypothesis assumes that the reinforcing properties of EBS are determined by the degree of activation of related motivational systems. It therefore accounts readily for the observed interactions between the reinforcing properties of a stimulus and various experimental conditions that affect related primary drives such as hunger. When there is little endogenous activity, for instance immediately after a meal, the stimulation elicits only a small amount of drive-related activity. Concurrent activation of related reward circuits therefore can produce only a small reinforcement effect. When hunger-related neural pathways are already active because of deprivation, the same stimulation elicits more drive and hence more reinforcement. Indeed, it may arouse the drive system sufficiently to elicit consumatory behavior that further potentiates the reinforcing effects of the electrical stimulation.

...

It is interesting to note that while the animal literature suggests that brain stimulation has positive, reinforcing effects, the human literature indicates that relief of anxiety, depression and other unpleasant affective conditions may be the most common "reward" of electrical brain stimulation in humans. Patients with electrodes in the septum, thalamus, and periventricular gray of the midbrain often express euphoria because the stimulation seems to reduce existing negative affective reactions (even intractable pain appears to lose its affective impact). However, many psychiatrists caution that this may not reflect an activation of a basic reward mechanism (Delgado, 1976; Heath et al., 1968). Relief from chronic anxiety has been reported during and even long after stimulation of frontal cortex. Again, the experiential response appears to be relief rather than reward per se (Crow & Cooper, 1972).

In general, it seems as though electrical brain stimulation isn't quite as effective at producing bliss as one might wish (or fear).

Comment author: MatthewB 02 January 2010 03:12:56PM 7 points [-]

I will need to go back through this again, but as a DD person, I know that my ability to motivate myself to learn new things was astronomical compared to after I destroyed most of the dopaminergic systems in my head through drug abuse.

The largest effect I have noticed is in painting and sculpting, two areas where I used to spend inordinate amounts of time. I used to have the vast majority of my work spaces covered with miniatures and sculptures that I was working on. Now... I have a hard time getting motivated even to get them out (which I think is most of the problem).

I do know that it is possible for me to mechanically activate the motivation to perform these tasks (and I am on medication that is supposed to help, but I get the feeling it isn't), just like the rats were lacking motivation to eat when their "wanting" circuits were knocked out.

Thanks for the article. I will need to dig through some posts on another forum where I recently posted a link to a paper about modifying the brains of people with obsessive-compulsions (Drug Addicts mostly) who were able to knock out the wanting to do drugs part of their brain... I'll post the title and a link as soon as I can find the name of it. It talks about some of the same things (I think it is a U of Mich. study as well)

Comment author: spamham 02 January 2010 09:11:27PM 1 point [-]

Sorry to hear about the drug problems, but how can you be sure they "destroyed" your dopamine neurons? Not all drugs that increase these neurons' activity kill them. Psychological changes might be a simpler explanation IMHO (but I don't know you, so that might be far off the mark).

[...] knock out the wanting to do drugs part of their brain...

Sounds draconian. That part isn't just there for drugs...

Comment author: MatthewB 02 January 2010 09:36:46PM *  4 points [-]

I don't think that they destroyed the dopamine neurons, just destroyed their ability to function properly. From the various scans that have been done of my brain, not only do I have decreased production of dopamine, but I also have an increased number of receptor sites (I cannot recall from which area they sampled). Thus, I have a major portion of dopamine sites demanding dopamine, and a shortage of dopamine to go around to satisfy the demand. I've been in so many MRI and NMR machines that I no longer even get claustrophobic.

As for the studies about creating lesions on the brain (knocking out the part of their brain that demands to do drugs)... Obviously it isn't there to want to do drugs.

It is there because it controls various aspects of our survival drives, yet these have been hijacked and malfunction due to the use/abuse (differentiation between the two) of various chemicals. The study is about the human trials of a procedure that was first done on rats and monkeys (macaques, I think) where they ablated a portion of the amygdala and thalamus (I cannot recall how they located it, as it was in the days before high-resolution fMRI or NMRI), and the rats and monkeys went from being junkies (with either single- or poly-substance dependence) to being relatively normal rats and monkeys. In the human trials, they found the same things as in the rat/monkey study, but with changes in some other behaviors in some of the participants (altered motivational drives, for instance). I know that one of the doctors is hoping to begin using this method on sexual predators, and also hopes to create a chemical method for altering the location of the brain that is ablated or abraded.

Anyway, I have made it through about six months of posts, and I am pretty sure that it was this year that I posted the link (in another forum... I could have sworn that I bookmarked it as well, but that might have been on my old laptop - I have a new laptop that was for "Christmas" even though I got it in November)

edit: found it:

The Neurosurgical Treatment of Addiction

Comment author: loqi 02 January 2010 11:55:32PM 1 point [-]

From the various scans that have been done of my brain; not only do I have a decreased production of Dopamine, but I have an increase in the number of receptor sites (I cannot recall from which area they sampled ). Thus, I have a major portion of dopamine sites that are demanding dopamine, and a shortage of dopamine to go around to satisfy the demand.

If you're comfortable sharing, what drugs led to this? Cocaine? Amphetamine? Did alcohol tend to be involved?

Comment author: MatthewB 03 January 2010 07:01:22AM 10 points [-]

Mostly it was Heroin, but there was a modest amount of Amphetamine usage involved as well (for completely patriotic reasons as well - /rolls eyes), and Cocaine became a problem for a few years, but strangely, I just stopped doing it one day like I would decide to throw out an old pair of underwear.

No alcohol was involved, which was mostly how I managed to get my brain into so many fMRI tunnels. I have never had any impairment from alcohol use, nor any dysfunctional use or abuse of alcohol either. Then, when several doctors found out about my anomalous cessation of cocaine, I got even more attention. That attention helped to free me from heroin without the usual entanglement with a 12-step group or AA/NA (of which, at this point in time, I have rather low opinions).

I often wonder if I would still be alive if I hadn't started using these drugs though (which is contrary to what most people expect to hear). They do give a person a certain cognitive augmentation for each different drug, each of which can be highly useful depending upon the situation. I happened to be in a situation, during the 80s where amphetamines were indicated. I began to use the heroin because the amphetamines made me a little too shaky, and I liked the calm that the two drugs together gave me when having to do things... eventually though, all hell broke loose when I was no longer in that environment and still had the drug use (which rapidly turned into abuse). Fortunately, I am still alive and past that (well, the drug part of it. My brain still has some getting adjusted to life to do).

Comment author: Kevin 03 January 2010 11:01:17AM 0 points [-]

Is the current medication you refer to an anti-depressant? Does it do anything for you at all?

Comment author: MatthewB 03 January 2010 12:20:10PM 1 point [-]

Ugh.. I just made a huge post addressing an issue that I realized was not the one to which you are probably referring.

I don't think I referred to any current medications in the prior post. I made a reference to the use of the drugs I began to abuse, and how these allowed me to live through situations which would probably have resulted in a poor outcome otherwise (not that I could qualify the outcome as good either, save for the fact that I am alive instead of dead)...

Are you referring to the beginning of the third paragraph???

I often wonder if I would still be alive...

Comment author: spamham 03 January 2010 01:52:53PM 1 point [-]

Kevin means this I suppose?

I do know that it is possible for me to mechanically activate the motivation to perform these tasks (and I am on medication that is supposed to help, but I get the feeling it isn't)

Comment author: Kevin 04 January 2010 06:01:17AM 1 point [-]

Yes, thank you.

Comment author: MatthewB 03 January 2010 03:06:22PM 2 points [-]

Ah... That... Yes... from the previous post...

I am referring mostly to anti-depressants and drugs to control ADD, which, ironically, are very much like amphetamines (Provigil, Adderall or Ritalin; probably Provigil or Ritalin). I did two weeks on Provigil, and I will be doing two weeks on Ritalin to compare the two. It is unlikely that my Dr would prescribe Adderall, but she said it isn't totally out of the question depending upon how I respond to the others (and the fact that I haven't shown any signs that I would be likely to abuse it at this point).

The current medications I am on work to a degree. I can tell when I am off my anti-depressants, for instance, yet my anti-anxiety drugs do absolutely nothing.

The drugs to control ADD are kinda a fudge by the Dr. as I have not been diagnosed explicitly as having ADD (it is something that she suspects, yet for which I haven't displayed many of the more common symptoms. If my mother had not been a Christian Scientist when I was a kid, we might have clinical records that could help out in this case a bit more), yet she feels that they will help out with some of the motivational and concentration problems I have been having with school (and life).

Comment author: wedrifid 02 January 2010 04:26:10AM *  8 points [-]

A University of Michigan study analyzed the brains of rats eating a favorite food. They found separate circuits for "wanting" and "liking", and were able to knock out either circuit without affecting the other (it was actually kind of cute - they measured the number of times the rats licked their lips as a proxy for "liking", though of course they had a highly technical rationale behind it).

One could come up with a story about how people are motivated to act selfishly but enjoy acting morally, which allows them to tell others a story about how virtuous they are while still pursuing their own selfish gain.

Rats! The neuroscientists were studying rats. It is troubling how easy it is to come up with these signalling stories to explain whatever observations we encounter.

What explanation can be suggested for a different mechanism for enjoying a food than the one for motivation to get food that doesn't rely on impressing our little rat friends with our culinary sophistication?

Comment author: Blueberry 03 January 2010 08:14:40AM *  1 point [-]

I'm not sure why this comment got upvoted so much. If I understand what you're saying correctly, you have it exactly backwards. The signaling story wasn't intended to explain the two different mechanisms, which evolved long before humans. The signaling story is just one way that the two different mechanisms affect our lives today.

Comment deleted 02 January 2010 04:25:25AM [-]
Comment author: spamham 02 January 2010 03:23:46AM 1 point [-]

Seems like a pretty large leap from certain simple behaviours of rats to the natural-language meaning of "wanting" and "liking". Far-reaching claims such as this one should have strong evidence. Why not give humans drugs selective for either system and ask them? (Incidentally, at least with the dopamine system, this has been done millions of times ;) The opioids are a bit trickier because activating mu receptors (e.g. by means of opiates) will in turn cause a dopamine surge, too)

(Yes, I should just read the paper for their rationale, but can't be bothered right now...)

Comment author: Larks 01 January 2010 11:32:29PM 5 points [-]

On a related note, it seems people do not use 'happy' and 'unhappy' as opposites, at least when they're referring to a whole life. Rather, happiness involves normative notions (a good life), whereas unhappiness is simply about endorphins.

http://experimentalphilosophy.typepad.com/experimental_philosophy/2009/12/can-.html

Comment author: FrankAdamek 02 January 2010 09:08:20PM 2 points [-]

Culturally, it may be considered too much of a blow to ever say someone is unhappy in general, or has an unhappy life. Or it may be too depressing for people themselves to think that the other person was unhappy, is unhappy, and will continue to be unhappy, rather than just happening to be unhappy now.

Comment author: Larks 03 January 2010 12:22:42AM 1 point [-]

If that were true, we’d expect to see people more willing to pronounce a ‘happy’ verdict than a ‘sad’ verdict, but the link I posted suggests that people are willing to agree that the wholesome woman is unhappy if she thinks she is, but unwilling to say the superficial woman is happy, even though she thinks she is.

Comment author: SilasBarta 01 January 2010 09:07:49PM *  24 points [-]

Interesting article, but these really bugged me:

1) Using the environment as an example of false revealed preference. One person's pollution never ruins "the environment", at least not "their environment". The environment is only ruined by the aggregate effects of many people's pollution; or, the person is massively polluting a different environment.

Environmental solutions require collective agreement and enforcement, not unilateral disarmament. So polluting while claiming to value the environment is not hypocrisy even in the conventional sense of the term (that you criticize here).

And this is at least the second time I've explained this to you. Please stop using it as an example.

2) This phrasing:

There are enough connections between them that there's a big correlation in their activity, but the correlation isn't one ...

That initially reads like you're saying "the correlation isn't a correlation" so I had to re-read it. I recommend using any of the following terms as a replacement for the bolded word: perfect, unity, 1.0, 1, or "equal to one", any of which would have been clearer.

(Btw, I agree with your disrecommendation of Landsburg!)

Comment author: Alexei 11 July 2011 05:55:29AM 2 points [-]

I really like your point about the environment. I am wondering if you can make a broader post discussing that kind of reasoning. For example, could one argue using this logic that an individual voter makes no difference, therefore voting, on the individual level, is pointless? (The solution would be to organize massive groups of people that would vote the same way.) What other examples fall under this reasoning? And what are some examples that seem like they should fall under this reasoning, but don't?

Comment author: SilasBarta 11 July 2011 04:39:39PM *  5 points [-]

Thanks for bringing that up. I've actually argued the opposite in the case of voting. Using timeless decision theory, you can justify voting (even without causing a bunch of people to go along with you) on the grounds that, if you would make this decision, the like-minded would reason the same way. (See my post "real-world newcomb-like problems".)

I think a crucial difference between the two cases is that non-pollution makes it even more profitable for others to pollute, which would make collective non-pollution (in the absence of a collective agreement) an unstable node. (For example, using less oil bids down the price and extends the scope of profitable uses.)
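The instability point can be made concrete with a toy payoff model (all numbers are made up; this is just the standard many-player prisoner's dilemma shape, not anything from the original comment):

```python
# Toy model of the pollution game (hypothetical payoffs, illustrative only).
# Polluting yields a private gain that grows as more others abstain
# (abstention bids down the resource price), while the collective harm
# is shared by everyone regardless of their own choice.

N = 10  # number of agents

def pollute_payoff(others_polluting):
    # Private gain rises as fewer others pollute; harm is split among all.
    private_gain = 2.0 + (N - 1 - others_polluting) / (N - 1)
    shared_harm = 0.5 * (others_polluting + 1) / N
    return private_gain - shared_harm

def abstain_payoff(others_polluting):
    # An abstainer forgoes the gain but still bears a share of the harm.
    return -0.5 * others_polluting / N

# Whatever everyone else does, polluting pays more than abstaining, so
# universal non-pollution is an unstable node absent collective enforcement.
print(all(pollute_payoff(k) > abstain_payoff(k) for k in range(N)))  # True
```

With these (arbitrary) numbers, defecting from universal abstention is profitable, and remains profitable at every level of pollution, which is exactly why a collective agreement rather than unilateral disarmament is needed.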

Comment author: Randaly 21 January 2012 04:55:08AM *  1 point [-]

Using timeless decision theory, you can justify voting (even without causing a bunch of people to go along with you) on the grounds that, if you would make this decision, the like-minded would reason the same way.

Given that probably only ~2,000 people know of TDT at all, only ~500 would think of it in this context, these people aren't geographically concentrated, these people aren't overwhelmingly concentrated in any one political party, at least some of the people considering TDT don't believe that it is a strong argument in favor of voting (example: me), and the harms from voting scale up linearly with the number of people voting, it's exceedingly unlikely that TDT serves as a significant justification for voting. (As a bit of context: in 2000, Bush won Florida by over 500 votes.)

Comment author: agrajag 14 November 2011 10:24:26AM 4 points [-]

Getting this point across is difficult, and it's a common problem. For example, I'm from Norway and favor the system we have here, with comparatively high taxes on high earners and high benefits. When I discuss economics with people from other political systems, say Americans, invariably I get a version of the same response:

If I'm happy to pay higher taxes, then I can do that in USA too -- I can just donate to charities of my choice. As an added bonus, this would let me pick which charities I care most about.

The problem is the same as with polluting, though: by donating to charities, I reduce the need for government intervention, which in turn reduces the need for taxes, which mostly benefits those paying the most taxes.

That is, by donating to charities, I reward those people who earn well and (imho) "should" contribute more to society (by donating themselves) but don't.

So the situation is unstable: the higher the fraction of needed support that is paid for through charitable giving, the larger the reward for not giving.

Comment author: phob 20 December 2013 09:37:24PM 0 points [-]

This is a really good point. On the other hand, it is a more convincing argument for stronger interventionist policy than it is against charity.

Comment author: SilasBarta 14 November 2011 03:20:24PM 1 point [-]

Glad to hear your take on the issue and know that I'm not alone in having to explain this. Coincidentally, I just recently put up a blog post discussing the unilateral disarmament issue in the context of taxes, making similar points to you (though not endorsing higher tax rates).

Comment author: Alexei 11 July 2011 11:48:30PM 2 points [-]

Oh, I see! I missed the key factor that by playing strategy NOT X (not polluting) you make strategy X (polluting) more favorable for others. And, of course, that doesn't apply to voting. This helps draw the line for what kind of problems you can use this reasoning. Thanks for clarifying!

Comment author: asr 21 January 2012 03:47:24AM 2 points [-]

It does apply to voting. The fewer the number of voters, the more valuable an individual vote is....

Comment author: Vladimir_Nesov 01 January 2010 07:19:31PM *  16 points [-]

Thus wanting (motivation) is near, liking (enjoyment) is far (dopamine is near, opioids are far!). If liking doesn't have the power to make you actually do things, its role is primarily in forming your beliefs about what you want, which leads to presenting good images of yourself to others with sincerity.

So far, this is not a disagreement with "revealed preferences" thought. The disagreement would come in value judgment, where instead of taking the side of wanting (as economists seem to), or the side of liking (naive view, or one of the many varieties of moral ideologies), one carefully considers the virtues on case-to-case basis, being open to discard parts from either category. True preference is neither revealed nor felt.

Comment author: Blueberry 01 January 2010 06:30:39PM 8 points [-]

The other problem here is distinguishing pleasure, fun, and happiness.

As I understand wireheading, it's equivalent to experiencing a lengthy orgasm. I would describe an orgasm as pleasurable, but it seems inaccurate to call it "fun", or to call the state of experiencing an orgasm "happiness".

Comment author: dclayh 01 January 2010 06:24:07PM *  8 points [-]

For the record, the actual Landsburg quote is

In one recent survey, 39 percent of New Yorkers said they would leave the city "if they could"! Every one of them was in New York on the day of the interview, so we know that at a minimum, 39 percent of New Yorkers lie to pollsters.

page 63 of his latest book The Big Questions.

Although I'm generally a big fan of Landsburg, this seems much more a case of confusion over what "leave the city" and "if you can" mean than one of lying.

Comment author: Yvain 01 January 2010 08:06:12PM 3 points [-]

I was reading More Sex is Safer Sex, so he must like using this anecdote a lot.

Comment author: cousin_it 01 January 2010 06:07:39PM 16 points [-]

Great post. It raised a question for me: why did evolution give us the pleasure mechanism at all, if the urge mechanism is sufficient to make us do stuff?

Comment author: timtyler 01 January 2010 06:34:40PM 17 points [-]

The "urge" mechanism does not help us learn to do rewarding things.

Comment author: Furcas 01 January 2010 09:10:14PM *  13 points [-]

I agree that pleasure has something to do with learning, but I don't see why the "urge" or "desire" mechanism couldn't help us learn to do rewarding things without the existence of pleasure.

Without pleasure, things could work like this: If X is good for the animal, make the animal do X more often.

With pleasure, like this: If X is good for the animal, make the animal feel pleasure. Make the animal seek pleasure. (Therefore the animal will do X more often.)

So pleasure would seem to be a kind of buffer. My guess is that its purpose is to reduce the number of modifications to the animal's desires, thereby reducing the likelihood of mistaken modifications, which would be impossible to override.
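The two schemes above can be sketched as a toy update rule (purely illustrative; the function names, learning rate, and numbers are all invented here, not taken from the studies discussed):

```python
# Toy contrast between the two learning schemes (hypothetical, illustrative only).

def learn_directly(desires, action, benefit, rate=0.5):
    """Scheme 1 (no pleasure): the benefit of doing X modifies the desire for X directly."""
    desires[action] += rate * benefit
    return desires

def learn_via_pleasure(desires, pleasures, action, benefit, rate=0.5):
    """Scheme 2 (with pleasure): the benefit updates a pleasure estimate,
    and desire only chases that estimate -- the 'buffer' described above."""
    pleasures[action] += rate * benefit
    # A mistaken update lands in the pleasure table first, where it can
    # still be revised before it fully propagates into motivation.
    desires[action] += rate * (pleasures[action] - desires[action])
    return desires, pleasures

desires = {"eat": 0.0}
print(learn_directly(desires, "eat", benefit=1.0))         # {'eat': 0.5}

desires, pleasures = {"eat": 0.0}, {"eat": 0.0}
print(learn_via_pleasure(desires, pleasures, "eat", 1.0))  # ({'eat': 0.25}, {'eat': 0.5})
```

In the second scheme a single good or bad experience moves motivation only partway, so a wrong update is easier to walk back, which is the buffering intuition.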

Comment author: cousin_it 01 January 2010 06:40:23PM *  2 points [-]

Awesome answer, thanks! So the "urge" mechanism is for things we know how to do, and the "pleasure" mechanism is for things we don't? Now I wonder how to test this idea.

Comment author: whpearson 01 January 2010 07:11:15PM 6 points [-]

If dopamine = urge, you can make dopamine-deficient mice. They don't learn so well...

Comment author: timtyler 01 January 2010 08:22:01PM 4 points [-]

I would say the study supports the thesis that they learn, but then aren't motivated to act:

"A retest after 24 h showed that DD mice can learn and remember in the absence of dopamine, leading to the inference that the lack of dopamine results in a performance/motivational decrement that masks their learning competence for this relatively simple task."

Comment author: whpearson 01 January 2010 09:01:48PM 0 points [-]

I really wish I could get into that paper. I'd like to know whether a dopamine precursor was given to the mice before the retest, to enable eating. If so, the learning may have been buffered and consolidated through sleep, or there is a different method for learning in sleep. I'll see if I can get to it in the next few days.

I'd agree that some learning did occur without dopamine: the knowledge of where to go was learnt. The brain is too complex to mediate all learning with direct feedback. What we are interested in is learning what should be done. That is, the behaviour was learnt, but that the behaviour should be performed wasn't immediately learnt. Or in other words, it didn't know it should be motivated.

There is lots of work on dopamine and learning. I'm currently watching another interesting video on the subject.

Do you know of any related to opioids? All I can find is some stuff on fear-response learning.

Comment author: Alicorn 01 January 2010 10:21:54PM 5 points [-]

My school let me at the full text of the paper; here 'tis.

Comment author: whpearson 02 January 2010 01:18:11AM 0 points [-]

Thanks. I'll read it later.

Comment author: Vladimir_Nesov 01 January 2010 07:42:14PM *  3 points [-]

A bad experiment specification: it only tests that brains don't work so well after you damage them. (That is, more detail is absolutely necessary in this case.)

Comment author: whpearson 01 January 2010 07:58:01PM *  4 points [-]

From the link (pretty much the whole content unless you have access)

Dopamine-deficient (DD) mice have selective inactivation of the tyrosine hydroxylase gene in dopaminergic neurons, and they die of starvation and dehydration at 3-4 weeks of age. Daily injections of L-DOPA (50 mg/kg, i.p.) starting approximately 2 weeks after birth allow these animals to eat and drink enough for survival and growth. They are hyperactive for 6-9 h after receiving L-DOPA and become hypoactive thereafter. Because these animals can be tested in the presence or absence of DA, they were used to determine whether DA is necessary for learning to occur. DD mice were tested for learning to swim to an escape platform in a straight alley in the presence (30 min after an L-DOPA injection) or absence (22-24 h after an L-DOPA injection) of dopamine. The groups were split 24 h later and retested 30 min or 22-24 h after their last L-DOPA injection. In the initial test, DD mice without dopamine showed no evidence of learning, whereas those with dopamine had a learning curve similar in slope to controls but significantly slower. A retest after 24 h showed that DD mice can learn and remember in the absence of dopamine, leading to the inference that the lack of dopamine results in a performance/motivational decrement that masks their learning competence for this relatively simple task.

That is: the mice were engineered so they couldn't manufacture dopamine naturally, due to an inability to make a precursor. Some were then given a supplement of the precursor they couldn't make, which enabled them to produce dopamine. These learnt almost as well as mice that could manufacture dopamine by themselves. So the study showed that the mice could learn almost as well as undamaged mice once the missing substance was replaced.

If this doesn't narrow down things enough, what more do you want?

Comment author: timtyler 01 January 2010 08:18:32PM 2 points [-]

The dopamine and opiate mechanisms are rather tangled together in practice:

The following study tests the hypothesis that dopamine is an essential mediator of various opiate-induced responses:

http://www.nature.com/nature/journal/v438/n7069/full/nature04172.html

Comment author: Matt_Simpson 01 January 2010 06:27:57PM 1 point [-]

My first thought: redundancy. Having multiple circuits for the same task means there is a higher probability that at least one of them is working. However, this doesn't explain the differences between the two circuits.

Comment author: Pablo_Stafforini 01 January 2010 05:24:12PM *  11 points [-]

By definition, if you choose X over Y, then X is a higher utility option than Y. That means utility represents wanting and not liking. But good utilitarians (and, presumably, artificial intelligences) try to maximize utility. This correlates contingently with maximizing happiness, but not necessarily

You are equivocating on the term 'utility' here, as have so many other commenters before in this forum. In the first sentence above, 'utility' is used in the sense given to that term by axiomatic utility theory. When the preferences of an individual conform to a set of axioms, they can be represented by a 'utility function'. The 'utilities' of this individual are the values of that function. By contrast, when ethicists discuss utilitarianism, what they mean by 'utility' is either pleasure or good. The empirical studies you cite, therefore, do not pose problems for utility theory or utilitarianism. They only pose problems for the muddled view on which utility functions represent that which hedonistic utilitarians think we ought to maximize.

Comment author: Tyrrell_McAllister 01 January 2010 05:45:18PM 1 point [-]

You are equivocating on the term 'utility' here, as have so many other commenters before in this forum.

That seems to me to be an unfair reading. Nowhere does Yvain say that he's using the axiomatic theory of utility. It's true that he writes, "By definition, if you choose X over Y, then X is a higher utility option than Y." However, this definition can hold in other theoretical frameworks besides axiomatic utility theory. In particular, the definition plausibly holds in the framework used by some ethical utilitarians. Yvain can therefore be read as using the same definition for utility throughout.

Comment author: Yvain 01 January 2010 05:56:48PM 8 points [-]

I accept Benthamite's criticism as valid. It may not be obvious from the text, but in my mind I was definitely equivocating.

If we can't use preference to determine ethical utility, it makes ethical utilitarianism a lot harder, but that might be something we have to live with. I don't remember very much about Coherent Extrapolated Volition, but my vague memories say it makes that a lot harder too.

Comment author: Eliezer_Yudkowsky 01 January 2010 08:12:12PM 11 points [-]

I observe that you might have caught this mistake earlier via this heuristic: "Using the phrase "by definition", anywhere outside of math, is among the most alarming signals of flawed argument I've ever found. It's right up there with "Hitler", "God", "absolutely certain" and "can't prove that"." I should probably rewrite "math" as "pure math" just to make this clearer.

Comment author: Vladimir_Nesov 01 January 2010 07:27:14PM *  0 points [-]

If we can't use preference to determine ethical utility, it makes ethical utilitarianism a lot harder [...]

The way "preference" tends to be used in this community (as a more general word for "utility", communicating the same idea without explicit reference to expected utility maximization), this isn't right either. The actual decisions should be higher in utility than their alternatives; it is preferable if they are higher utility, but the correspondence is far from factual, let alone true "by definition" (Re: "By definition, if you choose X over Y, then X is a higher utility option than Y"). One can get a fair way from actions to revealed preference, but only modulo human craziness and stupidity.