We're probably going to develop the technology to directly produce pleasure in the brain with electrical stimulation. We already do this to some extent, though with the goal of restoring people to normal function. This poses a question similar to the one drugs pose, but potentially without the primary downsides: wireheading may cause intense, undiminishing pleasure. [1]
Like many people, my first reaction to the idea is negative. Joy from wireheading strikes me as much less valuable than joy from social interaction, flow, warm fuzzies, or music. But perhaps we discount pleasure from drugs only because their overall effect on people's lives tends to be negative, and there's not actually anything lesser about that kind of happiness. If there's some activity in the brain that corresponds to the joy I value, direct stimulation should if anything be better: we can probably get much more pure, much more intense pleasure with careful application of electricity.
Maybe wireheading would grow into a role as a kind of special retirement for rich people: once you've saved enough money to pay other people to take care of your physical needs, you plug in and spend the rest of your years euphorically. Perhaps there's a cycle of increasing popularity and decreasing price. If it became cheap enough and made people happy enough, a charity to provide this for people who couldn't otherwise afford it might be more cost-effective at increasing happiness than even the best ones trying to reduce suffering elsewhere.
Even then, there's something dangerous and strange about a technology that could make people happy even if they didn't want it to. If what matters is happiness, it's nearly unimportant whether someone wants to wirehead; even if they hate the idea, the harm of forcing it on them would be much less than the benefit of their being really happy for the rest of their life. Imagine it becomes a pretty standard thing to do at age 30, after fifteen years of hard work to save up the money, but a few holdouts reject wireheading and want to stay unstimulated. A government program to put wires in people's brains and pleasurably stimulate them against their will sounds like dystopian science fiction, but could it be the right thing to do? Morally obligatory, perhaps? Even after accounting for the side effect where other people are unhappy and upset about it?
Even if I accept the pleasure of wireheading as legitimate, I find the idea of forcing it upon people over their objections repellent. Maybe there's something essentially important about preferences? Instead of trying to maximize joy and minimize suffering, perhaps I should be trying to best satisfy people's preferences? [2] In most cases valuing preference satisfaction is indistinguishable from valuing happiness: I would prefer to eat chocolate over candy because I think the chocolate would make me happier. I prefer outcomes where I am happier in general, but because I don't like the idea of being forced to do things even when I agree they would make me happier, valuing preferences seems reasonable.
Preferences and happiness don't always align, however. Consider a small child sitting in front of a fire. They point at a coal and say "Want! Want!", being very insistent. They have a strong preference for you to give them the coal, but of course if you do they will experience a lot of pain. Clearly you shouldn't give it to them. Parenting is full of cases where you need to weigh the child's long-term best interest over their current preferences.
Or consider fetuses too young to have preferences. I visit a society where it is common to drink a lot, even when pregnant, and everyone sees it as normal. Say they believe me when I describe the effects of large quantities of alcohol on fetal development but reject my suggestion that they reduce their consumption: "the baby doesn't care." A fetus early enough in its development not to be capable of preferring anything can still be changed by alcohol. It seems wrong to me to ignore the future suffering of the child on the grounds that it currently has no preferences. [3]
The two views also disagree about death. Previously it seemed to me that death was bad in that it can be painful to the person, sorrowful for those left behind, and tragic in cutting short a potentially joyful life. But if preferences are what matter then death is much worse: many people have very strong preferences not to die.
I'm not sure how to reconcile this. Neither preferences nor joy/suffering seem to give answers consistent with my intuitions. (Not that I trust them much.) Both come close, though I think the latter comes closer. Treating wireheading-style pleasure as a lesser kind than real well-being might do it, but if people have strong preferences against something that would give them true happiness there's still an opening for a very ugly paternalism, and I don't see any real reason to discount pleasure from electrical stimulation. Another answer would be that value is more complex than either preferences or happiness and that I just don't fully understand it yet. But when happiness comes so close to fitting, I have to consider that it may be right, and that the ways a value system grounded in happiness differs from my intuitions are problems with my intuitions.
(I also posted this on my blog)
[1] Yvain points out that wireheading experiments may have been stimulating desire instead of pleasure. This would mean you'd really want to get more stimulation but wouldn't actually be enjoying it. Still, it's not a stretch to assume that we can figure out what we're stimulating and go for pleasure, or perhaps both pleasure and desire.
[2] Specifically, current preferences. If I include future preferences, then we could just say that while someone currently doesn't want to be wireheaded, after it is forced upon them and they get a taste of it, they may have an even stronger preference not to have the stimulation stop.
[3] Future people make this even stronger. The difference between a future with 10 billion simultaneous happy people and one with several orders of magnitude more seems very important, even though they don't currently exist to have preferences.
Okay, I've been reading a bit more on this and I think I have found an answer from Derek Parfit's classic "Reasons and Persons." Parfit considers this idea in the section "What Makes Someone's Life Go Best," which can be found online here, with the relevant stuff starting on page 3.
In Parfit's example, he considers a person who argues that he is going to make your life better by getting you addicted to a drug which creates an overwhelming desire to take it. The drug has no other effects; it does not cause you to feel high or low or anything like that. All it does is make you desire to take it. After getting you addicted, this person will give you a lifetime supply of the drug, so you can always satisfy your desire for it.
Parfit argues that this does not make your life better, even though you have more satisfied desires than you used to. He defends this claim by arguing that, in addition to our basic desires, we also have what he calls "Global Preferences." These are second-level meta-preferences, "desires about desires." Adding to or changing someone's desires is only good if it is in accordance with their Global Preferences. Otherwise, adding to or changing their desires is bad, not good, even if the new desires are more satisfied than their old ones.
I find this account very plausible. It reminds me of Yvain's posts on Wanting, Liking, and Approving.
Parfit doesn't seem to realize it, but this theory also provides a way to reject his Mere Addition Paradox. In the same way that we have global preferences about what our values are, we can also have global moral rules about what amount and type of people it is good to create. This allows us to avoid both the traditional Repugnant Conclusion, and the far more repugnant conclusion that we ought to kill the human race and replace them with creatures whose preferences are easier to satisfy.
Now, you might ask, what if, when we wirehead someone, we change their Global Preferences as well, so that they now globally prefer to be wireheaded? Well, for that we can invoke our global moral rules about population ethics. Creating a creature with such global preferences, under such circumstances, is always bad, even if it lives a very satisfied life.
Are you saying that preferences only matter if they're in line with Global Preferences?
Before there was life, there were no Global Preferences, which means that no new life had preferences in accordance with any Global Preferences; therefore no preferences matter.