I think you are missing the point.
First, throw out the FAI part of this argument; we can consider an FAI just as a tool to help us achieve our goals. Any AI which does not do at least this is insufficiently friendly (and thus possibly counts as a paperclipper).
Thus, the actual question is: what are our goals? I don't know about you, but I value understanding and exploration. If you value pleasure, good! Have fun being a wirehead.
It comes down to the fact that a world where everyone is a wirehead is not valued by me, or probably by many other people. Even though this world would maximize pleasure, it wouldn't maximize the utility of the people designing the world (I think this is the util/hedon distinction, but I am not sure). If we don't value that world, why should we create it, even if we would value it after we create it?
What I'm observing in the various FAI debates is a tendency of people to shy away from wire-heading as something the FAI should do. This reluctance is generally not substantiated or clarified with anything other than "clearly, this isn't what we want". This is not, however, clear to me at all.
I don't want that. There, did I make it clear?
If you are a utilitarian, and you believe in shut-up-and-multiply, then the correct thing for the FAI to do is to use up all available resources so as to maximize the number of beings, and then induce a state of permanent and ultimate enjoyment in every one of them.
Since when does shut-up-and-multiply mean "multiply utility by number of beings"?
If you don't want to be "reduced" to an eternal state of bliss, that's tough luck.
Heh heh.
denis, most utilitarians here are preference utilitarians, who believe in satisfying people's preferences, rather than maximizing happiness or pleasure.
To those who say they don't want to be wireheaded, how do you really know that, when you haven't tried wireheading? An FAI might reason the same way, and try to extrapolate what your preferences would be if you knew what it felt like to be wireheaded, in which case it might conclude that your true preferences are in favor of being wireheaded.
The experience is as good as can possibly be.
You don't know how good "as good as can possibly be" is yet.
I want to continue to be someone who thinks things and does stuff, even at a cost in happiness.
But surely the cost in happiness that you're willing to accept isn't infinite. For example, presumably you're not willing to be tortured for a year in exchange for a year of thinking and doing stuff. Someone who has never experienced much pain might think that torture is no big deal, and accept this exchange, but he would be mistaken, right?
How do you know you're not similarly mistaken about wireheading?
How do you know you're not similarly mistaken about wireheading?
I'm a bit skeptical of how well you can use the term "mistaken" when talking about technology that would allow us to modify our minds to an arbitrary degree. One could easily fathom a mind that (say) wants to be wireheaded for as long as the wireheading goes on, but ceases to want it the moment the wireheading stops. (I.e. both versions prefer their current state of wireheadedness/non-wireheadedness and wouldn't want to change it.) Can we really say that one of them is "mistaken", or wouldn't it be more accurate to say that they simply have different preferences?
EDIT: Expanded this to a top-level post.
One reason I might not want to become a ball of ecstasy is that I don't really feel it's ME. I'm not even sure it's sentient, since it doesn't learn, communicate, or reason, it just enjoys.
I just so happened to read Coherent Extrapolated Volition today. Insofar as this post is supposed to be about "what an FAI should do" (rather than just about your general feeling that objections to wire-heading are irrational), it seems to me that it all boils down to navel-gazing once you take CEV into account. Or in other words, this post isn't really about FAI at all.
A number of people mention this one way or another, but an explicit search for "local maximum" doesn't match any specific comment - so I wanted to throw it out here.
Wireheading is very likely to put one in a local maximum of bliss. Though a wirehead may not care, or even ponder, whether or not there exist greater maxima, it's a consideration I'd take into account prior to wiring up.
Unless one is omniscient, entering a permanent(-ish) state of wireheading means foregoing the possibility of discovering a greater peak of wireheaded happiness.
I guess the very definition of wireheadedness bakes in the notion that you wouldn't care about that anymore - good for those taking the plunge and hooking up, I suppose. Personally, the universe would have to throw me an above-average amount of negative derivatives before I'd say enough is enough, screw the potential for higher maxima, I'll take this one...
If you are a utilitarian, and you believe in shut-up-and-multiply, then the correct thing for the FAI to do is to use up all available resources so as to maximize the number of beings, and then induce a state of permanent and ultimate enjoyment in every one of them.
Rewriting that: if you are an altruistic, total utilitarian whose utility function includes only hedonistic pleasure, with no birth-death asymmetry, then the correct thing for the FAI to do is...
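To spell out the aggregation that reading implies, here is a minimal sketch in my own notation (none of these symbols appear in the thread): a total hedonistic utilitarian sums hedonic levels over however many beings the available resources can sustain.

% Illustrative notation only: N = number of beings, h_i = hedonic level of
% being i, c(h) = resources needed to sustain one being at level h, R = total
% available resources.
\[
\max_{N,\;h_1,\dots,h_N} \;\; \sum_{i=1}^{N} h_i
\qquad \text{subject to} \qquad \sum_{i=1}^{N} c(h_i) \le R
\]
% Under these assumptions, if direct stimulation gives the lowest c per hedon,
% the maximum is reached by spending the whole budget R on wireheaded minds.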
If you're considering the ability to rewire one's utility function, why simplify the function rather than build cognitive tools to help people better satisfy the function? What you've proposed is that an AI destroys human intelligence, then pursues some abstraction of what it thinks humans wanted.
Your suggestion is that an AI might assume that the best way to reach its goal of making humans happy (maximizing their utility) is to attain the ends of humans' utility functions faster and better than we could, and then rewire us to be satisfied. I see two problems here.
First, the means are an end. Much of what we value isn't the goal we claim as our objective, but the process of striving for the goal. So here you have an AI that doesn't really understand what humans want.
Second, most humans aren't interested in every avenue of exploration, creation and pleasure. Our interests are distinct. They also do change over time (or some limited set of parameters does, anyhow). We don't always notice them change, and when they do, we like to track down the decisions we made that led us to our new preferences. People value the (usually illusory) notion that they control changes to their utility functions...
I think people shy away from wireheading because a future full of wireheads would be very boring indeed. People like to think there's more to existence than that. They want to experience something more interesting than eternal pleasure. And that's exactly what an FAI should allow.
If all the AI cares about is the utility of each being times the number of beings, and is willing to change utility functions to get there, why should it bother with humans? Humans have all sorts of "extra" mental circuitry associated with being unhappy, which is just taking up space (or computer time in a simulator). Instead, it makes new beings, with easily satisfied utility functions and as little extra complexity as possible.
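To illustrate that arithmetic with a toy model (my symbols, not the commenter's): with a fixed resource budget, the number of beings scales inversely with the per-being cost, so stripping out "extra" mental circuitry directly buys more beings.

% Toy model, illustrative only: R = resource budget, c = resources per being,
% h_max = hedonic level each being is held at.
\[
U_{\text{total}} = N \cdot h_{\max} = \frac{R}{c}\,h_{\max}
\]
% U_total increases as c decreases, so an aggregator that only counts hedons
% times beings favors the simplest satisfiable minds over (costly) human ones.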
The end result is just as unFriendly, from a human perspective, as the naive "smile maximizer".
Utility is usefulness to people. If there are no humans there is no utility.
Utility is goodness measured according to some standard of goodness; that standard doesn't have to reference human beings. In my most optimistic visions of a far future, human values outlive the human race.
Even from a hedonistic perspective, 'Shut up and multiply' wouldn't necessarily equate to many beings experiencing pleasure.
It could come out to one superbeing experiencing maximal pleasure.
Actually, I think that thinking this out (how big are the entities, and who are they?) will lead to good reasons why wireheading is not the answer.
Example: If I'm concerned about my personal pleasure, then maximizing the number of agents isn't a big issue. If my personal identity is less important than maximizing total pleasure, then I get killed and converted to orgasmium (be it one being or many). If my personal identity is more important... well, then we're not just multiplying hedons anymore.
This is a very relevant post for me because I've been asking these questions in one form or another for several months. A framework of objective value (FOV) seems to be precluded by physical materialism. However, without it, I cannot see any coherent difference between being happy (or satisfied) because of what is going on in a simulation and being happy because of what is going on in reality. Since value (that is, our personal, subjective value) isn't tied to any actual objective good in the universe, it doesn't matter to our subjective fulfillment if the universe is modified ...
This is one of the most horrifying things I have ever read. Most of the commenters have done a good job of poking holes in it, but I thought I'd add my take on a few things.
This reluctance is generally not substantiated or clarified with anything other than "clearly, this isn't what we want".
Some good and detailed explanations are here, here, here, here, and here.
...If you are a utilitarian, and you believe in shut-up-and-multiply, then the correct thing for the FAI to do is to use up all available resources so as to maximize the number of beings, and then induce a state of permanent and ultimate enjoyment in every one of them...
If we take for granted that an AI that is friendly to all potential creatures is out of the question - that the only type of FAI we really want is one that's just friendly to us - then the following is the next issue I see.
If we all think it's so great to be autonomous, to feel like we're doing all of our own work, all of our own thinking, all of our own exploration - then why does anyone want to build an AI in the first place?
Isn't the world as it is - lacking an all-powerful AI - perfectly suited to our desires for control and autonomy?
Suppose an AI-friend...
It is worth noting that in Christian theology, heaven is only reached after death, and both going there early and sending people there early are explicitly forbidden.
While an infinite duration of bliss has very high utility, that utility must be finite, since infinite utility anywhere causes things to go awry when handling small probabilities of getting that utility. It is also not the only term in a human utility function; living as a non-wirehead for awhile to collect other types of utilons and then getting wireheaded is better than getting wireheaded immediately. Therefore, it seems like the sensible thing for a FAI to do is to offer wireheading as an option, but not to force the issue except in cases of imminent death.
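For what it's worth, here is a sketch of the standard reason infinite utility "goes awry" (my formulation, not the commenter's): under expected-utility reasoning, an infinite payoff swamps every finite consideration no matter how small its probability.

% Sketch, assuming U(bliss) is infinite and every other payoff is finite.
\[
\mathbb{E}[U] \;=\; p \cdot U(\text{bliss}) + (1-p)\cdot U(\text{other}) \;=\; \infty
\qquad \text{for any } p > 0,
\]
% so any action with the slightest chance of infinite bliss dominates every
% action with only finite payoffs, and two such actions cannot be ranked at
% all (infinity equals infinity). Hence the requirement that the utility of
% eternal bliss be very large but finite.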
This reminds me of a thought I had recently - whether or not God exists, God is coming - as long as humans continue to make technological progress. Although we may regret it (for one brief instant) when he gets here. Of course, our God will be bound by the laws of the universe, unlike the Theist God.
The Christian God is an interesting God. He's something of a utilitarian. He values joy and created humans in a joyful state. But he values freedom over joy. He wanted humans to be like himself, living in joy but having free will. Joy is beautiful to him, but...
So this begs(?) the question: is our brain's pleasure circuitry the ultimate arbiter of whether an action is "good" [y/N]?
I would say that our pleasure centre, like our words-feel-like-meaningful, our map-feels-like-territory, our world-feels-agent-driven, our consciousness-feels-special, etc., is a good-enough evolutionary heuristic that made our ancestors survive to breed us.
I am at this point tempted to shout "Stop the presses, pleasure isn't the ultimate good!" Yes, wireheading is of course the best way to fulfil that little feeling-good part of your brain. Is it constructive? Meh.
I would trade my pleasure centre for intuitive multiplication any day.
What if we could create a wirehead that made us feel as though we were doing 1 or 2? Would that be satisfactory to more people?
The idea of wireheading violates my aesthetic sensibilities. I'd rather keep experiencing some suffering on my quest for increasing happiness, even if my final subjective destination were the same as that of a wirehead, which I doubt, because I consider my present path as important as the end goal. I doubt value and morality can be fully deconstructed through reason.
How is wireheading different from this http://i.imgur.com/wKpLx.jpg ? I think James Hughes makes a very good case for what is wrong with current transhumanist thought in his 'Problems of Transh...
I'll just comment on what most people are missing, because most reactions seem to be missing a similar thing.
Wei explains that most of the readership are preference utilitarians, who believe in satisfying people's preferences, not maximizing pleasure.
That's fine enough, but if you think that we should take into account the preferences of creatures that could exist, then I find it hard to imagine that a creature would prefer not existing to existing in a state where it permanently experiences amazing pleasure.
Given that potential creatures outnumber existing ones...
I can conceive of the following 3 main types of meaning we can pursue in life.
1. Exploring existing complexity: the natural complexity of the universe, or complexities that others created for us to explore.
2. Creating new complexity for others and ourselves to explore.
3. Hedonic pleasure: more or less direct stimulation of our pleasure centers, with wire-heading as the ultimate form.
What I'm observing in the various FAI debates is a tendency of people to shy away from wire-heading as something the FAI should do. This reluctance is generally not substantiated or clarified with anything other than "clearly, this isn't what we want". This is not, however, clear to me at all.
The utility we get from exploration and creation is an enjoyable mental process that comes with these activities. Once an FAI can rewire our brains at will, we do not need to perform actual exploration or creation to experience this enjoyment. Instead, the enjoyment we get from exploration and creation becomes just another form of pleasure that can be stimulated directly.
If you are a utilitarian, and you believe in shut-up-and-multiply, then the correct thing for the FAI to do is to use up all available resources so as to maximize the number of beings, and then induce a state of permanent and ultimate enjoyment in every one of them. This enjoyment could be of any type - it could be explorative or creative or hedonic enjoyment as we know it. The most energy efficient way to create any kind of enjoyment, however, is to stimulate the brain-equivalent directly. Therefore, the greatest utility will be achieved by wire-heading. Everything else falls short of that.
What I don't quite understand is why everyone thinks that this would be such a horrible outcome. As far as I can tell, these seem to be cached emotions that are suitable for our world, but not for the world of FAI. In our world, we truly do need to constantly explore and create, or else we will suffer the consequences of not mastering our environment. In a world where FAI exists, there is no longer a point, nor even a possibility, of mastering our environment. The FAI masters our environment for us, and there is no longer a reason to avoid hedonic pleasure. It is no longer a trap.
Since the FAI can sustain us in safety until the universe goes poof, there is no reason for everyone not to experience ultimate enjoyment in the meanwhile. In fact, I can hardly tell this apart from the concept of a Christian Heaven, which appears to be a place where Christians very much want to go.
If you don't want to be "reduced" to an eternal state of bliss, that's tough luck. The alternative would be for the FAI to create an environment for you to play in, consuming precious resources that could sustain more creatures in a permanently blissful state. But don't worry; you won't need to feel bad for long. The FAI can simply modify your preferences so you want an eternally blissful state.
Welcome to Heaven.