We're probably going to develop the technology to directly produce pleasure in the brain with electrical stimulation. We already do this to some extent, though with the goal of restoring people to normal function. This poses a question similar to the one raised by drugs, but potentially without the primary downsides: wireheading may produce intense, undiminishing pleasure. [1]

Like many people, my first reaction to the idea is negative. Joy from wireheading strikes me as much less valuable than joy from social interaction, flow, warm fuzzies, or music. But perhaps we discount pleasure from drugs only because their overall effect on people's lives tends to be negative, and there's not actually anything lesser about that kind of happiness. If there's some activity in the brain that corresponds to the joy I value, direct stimulation should if anything be better: we can probably get much purer, much more intense pleasure with careful application of electricity.

Maybe wireheading would grow into a role as a kind of special retirement for rich people: once you've saved enough money to pay other people to take care of your physical needs, you plug in and spend the rest of your years euphorically. Perhaps there's a cycle of increasing popularity and decreasing price. If it became cheap enough and made people happy enough, a charity to provide this for people who couldn't otherwise afford it might be more cost-effective at increasing happiness than even the best ones trying to reduce suffering elsewhere.

Even then, there's something dangerous and strange about a technology that could make people happy even if they didn't want it to. If what matters is happiness, it's nearly unimportant whether someone wants to wirehead; even if they hate the idea, the harm of forcing it on them would be much less than the benefit of them being really happy for the rest of their life. Imagine it becomes a pretty standard thing to do at age 30, after fifteen years of hard work to save up the money, but a few holdouts reject wireheading and want to stay unstimulated. A government program to put wires in people's brains and pleasurably stimulate them against their will sounds like dystopian science fiction, but could it be the right thing to do? Morally obligatory, perhaps? Even after accounting for the side effect where other people are unhappy and upset about it?

Even if I accept the pleasure of wireheading as legitimate, I find the idea of forcing it upon people over their objections repellent. Maybe there's something essentially important about preferences? Instead of trying to maximize joy and minimize suffering, perhaps I should be trying to best satisfy people's preferences? [2] In most cases valuing preference satisfaction is indistinguishable from valuing happiness: I would prefer to eat chocolate over candy because I think the chocolate would make me happier. I prefer outcomes where I am happier in general, but because I don't like the idea of being forced to do things even if I agree they would make me happier, valuing preferences seems reasonable.

Preferences and happiness don't always align, however. Consider a small child sitting in front of a fire. They point at a coal and say "Want! Want!", being very insistent. They have a strong preference for you to give them the coal, but of course if you do they will experience a lot of pain. Clearly you shouldn't give it to them. Parenting is full of situations where you need to put the child's long-term best interest over their current preferences.

Or consider fetuses too young to have preferences. I visit a society where it is common to drink a lot, even when pregnant, and everyone sees it as normal. Say they believe me when I describe the effects of large quantities of alcohol on fetal development but reject my suggestion that they reduce their consumption: "the baby doesn't care." A fetus early enough in its development not to be capable of preferring anything can still be harmed by alcohol. It seems wrong to me to ignore the future suffering of the child on the grounds that it currently has no preferences. [3]

Preferences and happiness also disagree about death. Previously it seemed to me that death was bad in that it can be painful to the person, sorrowful for those left behind, and tragic in cutting short a potentially joyful life. But if preferences are what matter then death is much worse: many people have very strong preferences not to die.

I'm not sure how to reconcile this. Neither preferences nor joy/suffering seem to give answers consistent with my intuitions. (Not that I trust them much.) Both come close, though I think the latter comes closer. Treating wireheading-style pleasure as a lesser kind than real well-being might do it, but if people have strong preferences against something that would give them true happiness there's still an opening for a very ugly paternalism, and I don't see any real reason to discount pleasure from electrical stimulation. Another answer would be that value is more complex than either preferences or happiness and that I just don't fully understand it yet. But when happiness comes so close to fitting I have to consider that it may be right, and that the ways a value system grounded in happiness differs from my intuitions are problems with my intuitions.

(I also posted this on my blog)

[1] Yvain points out that wireheading experiments may have been stimulating desire instead of pleasure. This would mean you'd really want to get more stimulation but wouldn't actually be enjoying it. Still, it's not a stretch to assume that we can figure out what we're stimulating and go for pleasure, or perhaps both pleasure and desire.

[2] Specifically, current preferences. If I include future preferences then we could just say that while someone currently doesn't want to be wireheaded, after it is forced upon them and they get a taste of it they may have an even stronger preference not to have the stimulation stop.

[3] Future people make this even stronger. The difference between a future with 10 billion simultaneous happy people and one with several orders of magnitude more seems very important, even though they don't currently exist to have preferences.

Comments (53)

This is a perfect demonstration of what I wrote recently about moral theories that try to accomplish too much.

Moral/ethical theories are attempts to formalize our moral intuitions. Preference utilitarianism extrapolates the heuristic "give people what they want", and eventually hits the question "but what if they want something that's bad for them?" Happiness utilitarianism extrapolates the heuristic "make people happy", and eventually hits the question "but what if they don't want to be happy?" Thus, they conflict.

This conflict is unavoidable. If extrapolation of any one moral heuristic was easy and satisfactory in all cases, we wouldn't have multiple heuristics to begin with! Satisfying a single simple human desire, and not others, is undesirable because humans have independent and fundamentally conflicting desires which can't be mapped to a single linear scale to be maximized.

IMO, attempts to define a sufficiently ingenious and complex scale which could be maximized in all cases have the wrong goal. They disregard the fundamental complexity of human value.

People like simple, general theories. They want a formal model of morality because it would tell them what to do, absolve them of responsibility for moral mistakes, and guarantee agreement with others even when moral intuitions disagree (ha). This isn't just a hard goal to reach, it may be the wrong goal entirely. Moral heuristics with only local validity (as described in the comment I linked above) may not just be easier but better, precisely because they avoid repugnant conclusions and don't require difficult, improbable behavior in following them.

Worrying about or valuing the pain, pleasure or whatever of people who don't yet exist or who might never exist is something that has always struck me as ludicrous around here. Don't get me wrong, I'm not saying it doesn't make sense, I am saying it just has no resonance with me.

Seeing all the other posts over the months that led me to believe/realize that morality IS based on moral intuitions, and that moral systems are an attempt to come up with something more internally consistent than the intuitions on which they are based, I feel comfortable (relatively) saying: I don't care about people who don't exist. I am not creating less good by not creating people who could then have some net good to multiply by their numbers and thus earn me brownie points with the god which does not exist anyway.

So my comment on this post: I don't value a billion wireheaders any more than I value a billion non-wireheaders, and I don't value 10 billion wireheaders that do not yet exist at all. IF wireheading sucks your brains out like a zombie (that is, if it renders you incapable of doing anything other than finding that wire to reconnect to) then my evolutionary sensibilities suggest to me that the societies the future will care about, because they have outcompeted other societies and still exist, are the ones that have one way or another kept wireheading from being an acceptable choice. Whether through banning, death penalties, or some remarkable insertion of memes into the culture that are powerful enough to overpower the wire, it barely matters how. Societies that lose much of their brain talent to wireheading will be competed away by societies that don't.

I'm not sure if in my intuitions that makes it right or not. I'm not sure my intuitions, or morality qua morality for that matter, are the important question anyway.

Yvain points out that wireheading experiments may have been stimulating desire instead of pleasure. This would mean you'd really want to get more stimulation but wouldn't actually be enjoying it.

Huh, exactly what randomly browsing the Web when I'm tired does to me.

An upload could be even easier to keep happy for many billions of years, via some software equivalent of providing pleasure.

Semi-eternal happiness could be achieved this way.

maia:

It seems to me that there are multiple types of utility, and wireheading doesn't satisfy all of them. Things like "life satisfaction" and "momentary pleasure" are empirically separable (as described in e.g. Thinking, Fast and Slow), and seem to point to entirely different kinds of preferences, different terms in the utility function... such that maximizing one at the complete expense of all others is just not going to give you what you want.

How can there be multiple types of utility? There's going to be a trade-off, so you have to have some system for how much you value each kind. This gives you some sort of meta-utility. Why not throw away the individual utilities and stick with the meta-utility?

maia:

That could work... if you take into account the behavior where, if you don't get enough of one kind of utility, your meta-utility might actually go down.

Utility is whatever you're trying to maximize the expected value of. If you act in a way that maximizes the expected value of log(happiness) + log(preference fulfillment), for example, this doesn't mean that you're risk averse and you have two different kinds of utility. It means that your utility function is log(happiness) + log(preference fulfillment).
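To make that concrete, here's a minimal sketch in Python with entirely made-up numbers (the outcomes, probabilities, and the log-sum form are all assumptions for illustration): the two "kinds" of value still collapse into a single scalar whose expected value gets maximized.

    import math

    # One scalar utility function combining both terms (illustrative form only).
    def utility(happiness, preference_fulfillment):
        return math.log(happiness) + math.log(preference_fulfillment)

    # Hypothetical actions, each a list of (probability, happiness, fulfillment) outcomes.
    actions = {
        "wirehead": [(1.0, 100.0, 0.1)],   # lots of pleasure, almost no preference fulfillment
        "ordinary": [(1.0, 10.0, 10.0)],   # balanced amounts of both
    }

    def expected_utility(outcomes):
        return sum(p * utility(h, f) for p, h, f in outcomes)

    print({name: round(expected_utility(o), 2) for name, o in actions.items()})
    # {'wirehead': 2.3, 'ordinary': 4.61} -- neglecting one term drags the single score down

Nothing here requires two separate utilities; the trade-off between the terms is already encoded in the one function.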

maia:

That's true. I've been using the term "utility" in a way that is probably wrong. What I really meant is that humans have different kinds of things they want to maximize, and get unhappy if some of them are fulfilled and others aren't... so their utility functions are complicated.

The obvious problem with standard wireheading is that it only maximizes one of the things humans want.

True, but it seems to me that there's no reason you can't have good wireheading which stimulates all the different types of fuzzies.

maia:

Also true, and I'm not sure that's not ethically sound.

It seems wrong to me to ignore the future suffering of the child on the grounds that it currently has no preferences

I don't see why you would ignore it. They will have preferences, and many of their preferences will go unfulfilled due to fetal alcohol syndrome.

Also, it seems like desire fulfillment just alters the kind of wireheading you do. Rather than modifying people to make them happy, you modify them to desire what currently is true.

Am I missing something about wireheading?

If you consider both current and future preferences then forcing wireheading on people could still be obligatory. If you believe we shouldn't force someone to accept a wire even if they agree that after you do it they will be very glad you did, then you value current preferences to the exclusion of future ones. But if you base your moral system entirely on current preferences then it's unclear what to do with people who don't yet have preferences because they're too young (or won't even be born for decades).

it's unclear what to do with people who don't yet have preferences

Ignore them. More generally, consider them as a threat, since they are currently-amoral threats to future resource allocation.

As with all moral intuitions, it feels odd when people disagree with me about this :-)

If you believe we shouldn't force someone to accept a wire even if they agree that after you do it they will be very glad you did, then you value current preferences to the exclusion of future ones. But if you base your moral system entirely on current preferences then it's unclear what to do with people who don't yet have preferences because they're too young (or won't even be born for decades).

The cause of this dilemma is that you've detached the preferences from the agents that have them. If you remember that what you actually want to do is improve people's welfare, and that their welfare is roughly synonymous with satisfying their preferences, this can be resolved.

- In the case of forcing wireheading on a person, you are harming their welfare because they would prefer not to be wireheaded. It doesn't matter if they would prefer differently after they are modified by the wire. Modifying someone that heavily is almost equivalent to killing them and replacing them with another brand new person, and therefore carries the same approximate level of moral wrongness.

- In the case of fetal alcohol syndrome you are harming a future person's welfare because brain damage makes it hard for them to pursue whatever preferences they'll end up having.

Again, you aren't trying to satisfy some huge glob of preferences. You're trying to improve people's welfare, and therefore their preference satisfaction.

In my response to DanielLC I'm arguing against a kind of preference utilitarianism. In the main post I'm talking about how I'm not happy with either preference or hedonic utilitarianism. It sounds to me like you're proposing a new kind, "welfare utilitarianism", but while it's relatively clear to me how to evaluate "preference satisfaction" or "amount of joy and suffering" I don't fully see what "welfare" entails. I think I would understand you better if you could break down the details of how forcing wireheading on a person harms their welfare.

I think I would understand you better if you could break down the details of how forcing wireheading on a person harms their welfare.

Radically changing someone's preferences in such a manner is essentially the same as killing them and replacing them with someone else. Doing this is generally frowned upon. For instance, it is generally considered a good thing to abort a fetus to save the life of the mother, even by otherwise ardent pro-lifers. While people are expected to make some sacrifices to ensure the next generation of people is born, generally killing one person to create another is regarded as too steep a price to pay.

The major difference between the wireheading scenario, and the fetal alcohol syndrome scenario, is that the future disabled person is pretty much guaranteed to exist. By contrast, in the wireheading scenario, the wireheaded person will exist if and only if the current person is forced to be wireheaded.

So in the case of the pregnant mother who is drinking, she is valuing current preferences to the exclusion of future ones. You are correct to identify this as wrong. Thwarting preferences that are separated in time is no different from thwarting ones separated in space.

However, in the case of not wireheading the person who refuses to be wireheaded, you are not valuing current preferences to the exclusion of future ones. If you choose to not force wireheading on a person you are not thwarting preferences that will exist in the future, because the wireheaded person will not exist in the future, as long as you don't forcibly wirehead the original person.

To go back to analogies, choosing to not wirehead someone who doesn't want to be wireheaded isn't equivalent to choosing to drink while pregnant. It is equivalent to choosing to not get pregnant in the first place because you want to drink, and know that drinking while pregnant is bad.

The wireheading scenario that would be equivalent to drinking while pregnant would be to forcibly modify a person who doesn't want to be wireheaded into someone who desperately wants to be wireheaded, and then refusing to wirehead them.

The whole thing about "welfare" wasn't totally essential to my point. I was just trying to emphasize that goodness comes from helping people (by satisfying their preferences). If you erase someone's original preferences and replace them with new ones you aren't helping them, you're killing them.

Only worrying about the desires of current people seems, for lack of a better word, prejudiced.

Also, it opens the question of people at spacelike separation. Do they count as past, or future? For that matter, what about people who you control acausally? If you count them, should you try to desire something people used to desire, on the basis that they're similar to you, which makes it more likely that they desired it? If you don't, didn't you just eliminate everyone?

Also, it seems like desire fulfillment just alters the kind of wireheading you do. Rather than modifying people to make them happy, you modify them to desire what currently is true.

Most people would strongly desire to not be modified in such a fashion. It's really no different from wire-heading them to be happy, you're destroying their terminal values, essentially killing a part of them.

Of course, you could take this further by agreeing to leave existing people's preferences alone, but from now on only create people who desire what is currently true. This seems rather horrible as well, what it suggests to me is that there are some people with preference sets that it is morally better to create than others. It is probably morally better to create human beings with complex desires than wireheaded creatures that desire only what is true.

This in turn suggests to me that, in the field of population ethics, it is ideal utilitarianism that is the correct theory. That is, there are certain ideals it is morally good to promote (love, friendship, beauty, etc.) and that therefore it is morally good to create people with preferences for those things (i.e. creatures with human-like preferences).

Most people would strongly desire to not be modified in such a fashion.

Yes, but only until they're modified. The desire fulfillment of their future selves will outweigh the desire unfulfillment of their present selves, resulting in a net increase in desire fulfillment.

Okay, I've been reading a bit more on this and I think I have found an answer from Derek Parfit's classic "Reasons and Persons." Parfit considers this idea in the section "What Makes Someone's Life Go Best," which can be found online here, with the relevant stuff starting on page 3.

In Parfit's example he considers a person who argues that he is going to make your life better by getting you addicted to a drug which creates an overwhelming desire to take it. The drug has no other effects; it does not cause you to feel high or low or anything like that. All it does is make you desire to take it. After getting you addicted this person will give you a lifetime supply of the drug, so you can always satisfy your desire for it.

Parfit argues that this does not make your life better, even though you have more satisfied desires than you used to. He defends this claim by arguing that, in addition to our basic desires, we also have what he calls "Global Preferences." These are 2nd level meta-preferences, "desires about desires." Adding or changing someone's desires is only good if it is in accordance with their "Global Preferences." Otherwise, adding to or changing their desires is bad, not good, even if the new desires are more satisfied than their old ones.

I find this account very plausible. It reminds me of Yvain's posts on Wanting, Liking, and Approving.

Parfit doesn't seem to realize it, but this theory also provides a way to reject his Mere Addition Paradox. In the same way that we have global preferences about what our values are, we can also have global moral rules about what amount and type of people it is good to create. This allows us to avoid both the traditional Repugnant Conclusion, and the far more repugnant conclusion that we ought to kill the human race and replace them with creatures whose preferences are easier to satisfy.

Now, you might ask, what if, when we wirehead someone, we change their Global Preferences as well, so they now globally prefer to be wireheaded? Well, for that we can invoke our global moral rules about population ethics. Creating a creature with such global preferences, under such circumstances, is always bad, even if it lives a very satisfied life.

Are you saying that preferences only matter if they're in line with Global Preferences?

Before there was life, there were no Global Preferences, which means that no new life had preferences in accordance with these Global Preferences; therefore no preferences matter.

I'm saying creating new preferences can be bad if they violate Global Preferences. Since there were no Global Preferences before life began, the emergence of life did not violate any Global Preferences. For this reason the first reasoning creatures to develop essentially got a "free pass."

Furthermore, even if a preference is bad to create in the first place because it violates a Global Preference, that does not mean satisfying that newly created preference is bad. Parfit uses the following example to illustrate this: If I am tortured this will create a preference in me for the torture to stop. I have a strong Global Preference to never have this preference for the torture to stop come to exist in the first place. But once that desire is created, it would obviously be a good thing if someone satisfied it by ceasing to torture me.

Similarly, it would be a bad thing if the guy in Parfit's other example got you addicted to the drug, and then gave you the drugs to satisfy your addiction. But it would be an even worse thing if he got you addicted, and then didn't give you any drugs at all.

The idea that there are some preferences that it is bad to create, but also bad to thwart if they are created, also fits neatly with our intuitions about population ethics. Most people believe that it is bad for unwanted children to be born, but also bad to kill them if we fail to prevent them from being born (providing, of course, that their lifetime utility will be a net positive).

Furthermore, even if a preference is bad to create in the first place because it violates a Global Preference, that does not mean satisfying that newly created preference is bad.

Doesn't that mean that if you satisfy it enough it's a net good?

If you give someone an addicting drug, this gives them a Global Preference-violating preference, causing x units of disutility. Once they're addicted, each dose of the drug creates y units of utility. If you give them more than x/y doses, it will be net good.
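As a toy arithmetic check of that threshold claim (every number here is invented for illustration):

    # Toy numbers for the argument above: x units of disutility from creating the
    # Global Preference-violating addiction, y units of utility per satisfied dose.
    x = 100.0
    y = 1.0

    def net_utility(doses):
        return doses * y - x

    print(net_utility(50), net_utility(150))  # -50.0 50.0 -- past x/y = 100 doses it flips "net good"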

I have a strong Global Preference to never have this preference for the torture to stop come to exist in the first place.

What's so bad about being against torture? I can see why you'd dislike the events leading up to this preference, but the preference itself seems like an odd thing to dislike.

Doesn't that mean that if you satisfy it enough it's a net good?

No, in Parfit's initial example with the highly addictive drug your preference is 100% satisfied. You have a lifetime supply of the drug. But it still hasn't made your life any better.

This is like Peter Singer's "debit" model of preferences where all preferences are "debts" incurred in a "moral ledger." Singer rejected this view because if it is applied to all preferences it leads to antinatalism. Parfit, however, has essentially "patched" the idea by introducing Global Preferences. In his theory we use the "debit" model when a preference is not in line with a global preference, but do not use it if the preference is in line with a global preference.

What's so bad about being against torture? I can see why you'd dislike the events leading up to this preference, but the preference itself seems like an odd thing to dislike.

It's not that I dislike the preference, it's that I would prefer to never have it in the first place (since I have to be tortured in order to develop it). I have a Global Preference that the sorts of events that would bring this preference into being never occur, but if they occur in spite of this I would want this preference to be satisfied.

If you dislike that example, however, would you still agree that if someone forcibly addicted you to Parfit's hypothetical drug, it would be better if they gave you a lifetime supply of the drug than if they did not? (Assuming, of course, that taking the drug has no bad side effects, and getting rid of the addiction is not possible).

No, in Parfit's initial example with the highly addictive drug your preference is 100% satisfied.

What if it's a preference that doesn't have a maximum amount of satisfaction? For example, if you get a drug that makes you into a paperclip maximizer, you can always add more paperclips. Does that mean that your preference is always 0% satisfied?

If you dislike that example, however, would you still agree that if someone forcibly addicted you to Parfit's hypothetical drug, it would be better if they gave you a lifetime supply of the drug than if they did not?

Only if it makes me happy. I'm not a preference utilitarian.

Me being addicted to a drug and getting it is no higher on my current preference ranking than being addicted to a drug and not getting it.

What if it's a preference that doesn't have a maximum amount of satisfaction? For example, if you get a drug that makes you into a paperclip maximizer, you can always add more paperclips. Does that mean that your preference is always 0% satisfied?

That opens up the question of infinities in ethics, which is a whole other can of worms. There's still considerable debate about how to deal with it and it creates lots of problems for both preference utilitarianism and hedonic utilitarianism.

For instance, let's imagine an immortal who will live an infinite number of days. We have a choice of letting them have one happy experience per day or twenty happy experiences per day (and he would prefer to have these happy experiences, so both hedonic and preference utilitarians can address this question).

Intuitively, we believe it is much better for him to have twenty happy experiences per day than one. But since he lives an infinite number of days, the total number of happy experiences he has is the same: Infinity.

I'm not sure quite how to factor infinite preferences or infinite happiness. We may have to treat it as finite in order to avoid such problems. But it seems like there should be some intuitive way to do so, in the same way we know that twenty happy experiences per day is better for the immortal than one.

Only if it makes me happy. I'm not a preference utilitarian.

It won't, according to Parfit's stipulations. Of course, if we get out of weird hypotheticals where this guy is the only person on Earth possessing the drug, it would probably make you unhappy to be addicted because you would end up devoting time towards the pursuit of the drug instead of happiness.

I personally place only moderate value on happiness. There are many preferences I have that I want to have satisfied, even if it makes me unhappy. For instance, I usually prefer knowing a somewhat depressing truth to believing a comforting falsehood. And there are times when I deliberately watch a bad, unenjoyable movie because it is part of a series I want to complete, even if I have access to another stand-alone movie that I would be much happier watching (yes, I am one of the reasons crappy sequels exist, but I try to mitigate the problem by waiting until I can rent them).

That opens up the question of infinities in ethics, which is a whole other can of worms. There's still considerable debate about how to deal with it and it creates lots of problems for both preference utilitarianism and hedonic utilitarianism.

With hedonic utilitarianism, you can run into problems with infinite utility, or unbounded utility if it's a distribution that has infinite expected utility. This is just a case of someone else having an unbounded utility function. It seems pretty pathetic to get a paradox because of that.

You're right, thinking more on it, it seems like it's not that hard to avoid a paradox with the following principles.

  • Creating creatures whose utility functions are unbounded, but whose creation would violate the Global Moral Rules of population ethics, is always bad, no matter how satisfied their unbounded utility function is. It is always bad to create paper-clip maximizers, sociopaths, and other such creatures. There is no amount of preference satisfaction that could ever make their creation good.

  • This is true of creating individual desires that violate Global Preferences as well. Imagine if the addict in Parfit's drug example was immortal. I would still consider making him an addict to have made his life worse, not better, even though his preference for drugs is fulfilled infinity times.

  • However, that does not mean that creating an unbounded utility function is infinitely bad in the sense that we should devote infinite resources towards preventing it from occurring. I'm not yet sure how to measure how bad its creation would be, but "how fulfilled it is" would not be the only consideration. This was what messed me up in my previous post.

The point is that creating new people or preferences that violate Global Preferences always makes the world worse, not better, and should always be avoided. An immortal drug addict is less desirable than an immortal non-addict, and a society of humans is always better than an expanding wave of paperclips.

Yes, but only until they're modified. The desire fulfillment of their future selves will outweigh the desire unfulfillment of their present selves, resulting in a net increase in desire fulfillment.

One way this is typically resolved is something called the "prior existence view." This view considers it good to increase the desire fulfillment of those who already exist, and those who will definitely exist in the future, but does not necessarily grant extra points for creating tons of new desires and then fulfilling them. The prior-existence view would therefore hold that it is wrong to create or severely modify a person if doing so would inflict an unduly large disutility on those who would exist prior to that person's creation.

The prior-existence view captures many strong moral intuitions, such as that it is morally acceptable to abort a fetus to save the life of the mother, and that it is wrong to wirehead someone against their will. It does raise some questions, like what to do if raising the utility of future people will change their identity, but I do not think these are unresolvable.

fare:

Either way, this sounds like the pixie dust theory of happiness: happiness as some magic chemical (one with very short shelf life, though), that you have to synthesize as much as possible of before it decays. I bet you one gazillion dollar the stereo-structure of that chemical is paper-clip shaped.

fare:

This reminds me of similar pixie dust theories of freedom: see my essay at http://fare.tunes.org/liberty/fmftcl.html

In the end, happiness, freedom, etc., are functional sub-phenomena of life, i.e. self-sustaining behavior. Trying to isolate these phenomena from the rest of living behavior, let alone to "maximize" them, is absurd on its face - even more so than trying to isolate magnetic monopoles and maximize their intensity.

fare:

What's more, massive crime in the name of such a theory is massively criminal. That your theories lead you to consider such massive crime should tip you off that your theories are wrong, not that you should deplore your inability to conduct large-scale crime. You remind me of those communist activist cells who casually discuss their dream of slaughtering millions of innocents in concentration camps for the greatness of their social theories. http://www.infowars.com/obama-mentor-wanted-americans-put-in-re-education-camps/

fare:

Happily, the criminal rapture of the overintelligent nerd has little chance of being implemented in our current world, unlike the criminal rapture of the ignorant and stupid masses (see socialism, islamism, etc.). That's why your proposed mass crimes won't happen - though God forbid you convince early AIs of that model of happiness to maximize.

The advantages of invasive methods (electricity) over non-invasive ones (drugs) may have been overstated here.

But it's not important to the subject of the post. If we happen to invent a drug that makes people extremely happy with no side effects except that they are left with no desires or motivations and have to hire others to care for them, happiness utilitarians would still want to force this drug on people, and preference utilitarians would still oppose them.

Probably not - unless they are very keen on short-term thinking.

My usual attitude is that our brains are not unified coherent structures, our minds still less so, and that just because I want X doesn't mean I don't also want Y where Y is incompatible with X.

So the search for some single thing in my brain that I can maximize in order to obtain full satisfaction of everything I want is basically doomed to failure, and the search for something analogous in my mind still more so, and the idea that the former might also be the latter strikes me as pure fantasy.

So I approach these sorts of thought experiments from two different perspectives. The first is "do I live in a world where this is possible?" to which my answer is "probably not." The second is "supposing I'm wrong, and this is possible, is it good?"

That's harder to answer, but if I take seriously the idea that everything I value turns out to be entirely about states of my brain that can be jointly maximized via good enough wireheading, then sure, in that world good enough wireheading is a fine thing and I endorse it.

just because I want X doesn't mean I don't also want Y where Y is incompatible with X

In real life you are still forced to choose between X and Y, and through wireheading you can still cycle between X and Y at different times.

In the example of the child and the coal, what you are trying to do is maximize the child's expected utility.

On topic - I have developed a simple answer to most of these questions: use your own dang brain. If you would prefer poor people to be fed, then work towards that. If you would prefer it if people had wires put in their brains against their will, desire that. If you want people to control their own lives, program it into an AI. But nobody can do the deciding for you - your own preferences are all you can use.

Well, firstly, we do not actually know how it is that an algorithm can feel pleasure (or to be more exact, we have zero clue whatsoever how the subjective experience happens), and secondly, there's no reason to think that all the pleasure an algorithm can feel is reachable via stimulation of some pleasure pathway; the pleasure from watching an interesting movie may have a different component entirely, pleasure arising from complex interaction of neurons. From what I recall, people fitted with a wire in the head and a reward button still didn't behave like rodents do.

Edit:

Or consider fetuses too young to have preferences. I visit a society where it is common to drink a lot, even when pregnant, and everyone sees it as normal.

What the hell, man. It's the trashiest trashiness. But an awesome point on how either utilitarianism fails, in different ways, to consider this situation correctly. I've a strong suspicion that people come up with some pseudo-logical principles like 'baby does not care' to short-circuit built-in morals in the name of immediate pleasure, getting conditioned to do this kind of un-reasoning by the rewards from the wrongdoings in the absence of internal punishment (guilt), with a multitude of systems of (a-)morality sort of evolving via internal and external conditioning via rewards and punishments.

It's the trashiest trashiness

huh?

either utilitarianism fails, in different ways, to consider this situation correctly

Utilitarianism that considers future utility does fine with it. So if for me utility=happiness I can still say that it would be better not to give the baby fetal alcohol syndrome because the happiness from drinking now would be much less than the later FAS-induced suffering.

huh?

Okay. 'Complete trash' is the instrumentally rational way of referring to those people (who drink while pregnant and see nothing wrong with it), so that you don't care a whole lot about them, because the caring time is best spent helping those people in the third world who don't frigging oppose being helped or sabotage your efforts to help their own children.

Utilitarianism that considers future utility does fine with it. So if for me utility=happiness I can still say that it would be better not to give the baby fetal alcohol syndrome because the happiness from drinking now would be much less than the later FAS-induced suffering.

Can you do abortions, though? What about just enough FAS so that the life of the sufferer still has positive utility? People with Down syndrome are generally very happy; what about inducing it?

The best thing about utilitarianism is that you can always find a version justifying whatever you want to do anyway, and can always find how this utility would be maximized by doing something that is grossly wrong, if you don't like it any more.

People with Down syndrome are generally very happy; what about inducing it?

Don't quite follow - you mean, 'Would it be ethical to induce Down syndrome, given that people with Down syndrome are often very happy?'

Well, maybe. On the other hand, my impression is that as much as caregivers may want to deny it, a Down child imposes major costs on everyone around them. Inducing high IQ would not be obviously worse even in the cases where they flame out, would be a lot cheaper, and would pay for itself in inventions and that sort of thing. So there are lots of better alternatives to Down's, and given a limited population, the optimal number of Down syndrome may be zero.

What about just enough FAS so that the life of the sufferer still has positive utility?

Aside from all this being hard to measure, you don't usually care about absolute levels of utility so much as differences in predicted utility between choices. Say you're choosing between:

a) Avoid drinking while pregnant: baby doesn't get FAS, you don't get to drink

b) Drink some during pregnancy: baby probably gets mild FAS, you don't have to give up drinking

c) Drink lots during pregnancy: baby probably gets FAS, you get to keep drinking as much as you like

In saying "still has positive utility" you're comparing (b) with a different choice "(d) have an abortion". The right comparison is with (a) and (c): the things you were considering doing instead of (b). I suspect (a) has the highest utility because the benefit of drinking is probably much lower than the harm of FAS.

you can always find a version justifying whatever you want to do

Only if you change your moral system before each act.

drink while pregnant and see nothing wrong with it: complete trash

I brought up this hypothetical group as an illustration of the failure of a current-preferences utilitarianism.

In saying "still has positive utility" you're comparing (b) with a different choice "(d) have an abortion". The right comparison is with (a) and (c): the things you were considering doing instead of (b). I suspect (a) has the highest utility because the benefit of drinking is probably much lower than the harm of FAS.

then you can't have abortion either, and not only that, but you've got to go and rally against abortions. While you're at it, against contraceptives also.

Utilitarianism is unusable for determining actions. It is only usable for justifications. Got an action, pick a utilitarianism, pick a good-sounding utility, and you have yourself a very noble goal towards which your decision works.

then you can't have abortion either, and not only that, but you've got to go and rally against abortions. While you're at it, against contraceptives also.

Let's assume you're a total hedonistic utilitarian: you want to maximize happiness over all people over all time. Naively, yes, abortions and contraception would decrease total all-time happiness because they decrease the number of future people. But prohibiting abortions and contraception also has negative effects on other people; lots of people get pregnant and really don't want to have a baby. When abortion is illegal you have back-alley abortions which have a high chance of killing the woman. Even if we assume that the harm of requiring people to go through with unwanted pregnancies is minor and that no women will get abortions anyway, there's still the question of whether an additional person will increase total happiness. It's possible that the happiness of that particular extra person would be less than the distributed unhappiness caused by adding another person to a highly populated world.

Whether prohibiting abortion or contraception is good public policy depends on its effects, which are not entirely known. They look negative to me, but I'm not that sure. We can increase our chances of making the right choice with more research, but I don't see this as understood enough for pro-life or pro-choice advocacy to be a good use of my time.

Got an action, pick a utilitarianism, pick a good-sounding utility, and you have yourself a very noble goal towards which your decision works.

This isn't how to use a moral system. Any moral system can be abused if you're acting in bad faith.

Re: the unknown effects, yes, whatever you want to do you can always argue for in utilitarianism, because partial sums.

This isn't how to use a moral system. Any moral system can be abused if you're acting in bad faith.

Some moral systems are impossible to use for anything but rationalization. Utilitarianism is a perfect example of such.

the luddites whom would take my wire away should be put in holodecks while asleep where they can live out their sadistic fantasies of denial of pleasure without affecting me.

nit: use archaic case forms correctly or not at all. There you should have 'who'.

That and a T.

I'm a narcissist, so I'm actually the subject of the above sentence :p

Maybe there's a joke I'm not getting here, but it should be "who", not "whom", because it's the grammatical subject of "would take".

Edit: Bonus fun link -- "whom" used as a subject on a protest sign.