I can conceive of the following three main types of meaning we can pursue in life:
1. Exploring existing complexity: the natural complexity of the universe, or complexities that others created for us to explore.
2. Creating new complexity for others and ourselves to explore.
3. Hedonic pleasure: more or less direct stimulation of our pleasure centers, with wireheading as the ultimate form.
What I'm observing in the various FAI debates is a tendency for people to shy away from wireheading as something the FAI should do. This reluctance is generally not substantiated or clarified with anything beyond "clearly, this isn't what we want". It is not, however, clear to me at all.
The utility we get from exploration and creation is an enjoyable mental process that comes with these activities. Once an FAI can rewire our brains at will, we do not need to perform actual exploration or creation to experience this enjoyment. Instead, the enjoyment we get from exploration and creation becomes just another form of pleasure that can be stimulated directly.
If you are a utilitarian, and you believe in shut-up-and-multiply, then the correct thing for the FAI to do is to use up all available resources to maximize the number of beings, and then induce a state of permanent and ultimate enjoyment in every one of them. This enjoyment could be of any type - explorative, creative, or hedonic enjoyment as we know it. The most energy-efficient way to create any kind of enjoyment, however, is to stimulate the brain-equivalent directly. Therefore, the greatest utility will be achieved by wireheading. Everything else falls short of that.
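To make the multiplication concrete, here is a toy sketch of the comparison. Every number in it is made up: the energy budget, the per-being running costs, and the assumption that the FAI induces the same subjective enjoyment either way.

```python
# A toy "shut up and multiply" comparison. All numbers are hypothetical.

ENERGY_BUDGET = 1e30      # joules available to the FAI (arbitrary)

# Assumed per-being running costs: simulating a rich explorable world
# is taken to be far more expensive than direct stimulation.
COST_EXPLORATION = 1e12   # joules per being in a simulated playground
COST_WIREHEADING = 1e6    # joules per being under direct stimulation

ENJOYMENT_PER_BEING = 1.0 # identical by assumption: the FAI induces
                          # the same subjective state either way

def total_utility(cost_per_being: float) -> float:
    """Beings the budget can support, times enjoyment per being."""
    beings = ENERGY_BUDGET / cost_per_being
    return beings * ENJOYMENT_PER_BEING

print(total_utility(COST_EXPLORATION))  # 1e18
print(total_utility(COST_WIREHEADING))  # 1e24 - six orders of magnitude more
```

The exact figures don't matter; as long as direct stimulation is cheaper per being, it dominates the sum.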
What I don't quite understand is why everyone thinks this would be such a horrible outcome. As far as I can tell, these seem to be cached emotions suitable for our world, but not for the world of FAI. In our world, we truly do need to constantly explore and create, or else we suffer the consequences of failing to master our environment. In a world where an FAI exists, there is no longer any point in, or even any possibility of, mastering our environment ourselves. The FAI masters our environment for us, and there is no longer a reason to avoid hedonic pleasure. It is no longer a trap.
Since the FAI can sustain us in safety until the universe goes poof, there is no reason for everyone not to experience ultimate enjoyment in the meanwhile. In fact, I can hardly tell this apart from the concept of a Christian Heaven, which appears to be a place where Christians very much want to go.
If you don't want to be "reduced" to an eternal state of bliss, that's tough luck. The alternative would be for the FAI to create an environment for you to play in, consuming precious resources that could sustain more creatures in a permanently blissful state. But don't worry; you won't need to feel bad for long. The FAI can simply modify your preferences so you want an eternally blissful state.
Welcome to Heaven.
Most reactions seem to be missing the same thing, so I'll just comment on that.
Wei explains that most of the readership are preference utilitarians, who believe in satisfying people's preferences, not maximizing pleasure.
That's fair enough, but if you think we should take into account the preferences of creatures that could exist, then I find it hard to imagine a creature that would prefer not existing at all to existing in a state of permanent, amazing pleasure.
Given that potential creatures outnumber existing creatures many times over, the preferences of existing creatures - to selfishly keep the universe's resources to ourselves, so we can explore and think and harbor lofty notions about ourselves - count for little against the preferences of the far more numerous creatures that would prefer existing as wireheads to not existing at all.
The only way preference utilitarianism can avoid the global maximum of Heaven is to ignore the preferences of potential creatures. But that is selfish.
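Here is the same point as a toy preference tally. All counts and preference strengths are hypothetical; the only load-bearing assumption is that potential creatures vastly outnumber existing ones.

```python
# A toy preference-utilitarian tally. All numbers are hypothetical.

EXISTING = 1e10    # creatures that exist today (assumed count)
POTENTIAL = 1e24   # creatures the universe's resources could support (assumed)

# Preference strengths on an arbitrary common scale (assumed equal):
PREF_PLAYGROUND = 1.0  # an existing creature's wish to keep exploring
PREF_EXISTENCE = 1.0   # a potential creature's wish to exist, wireheaded

# Option A: preserve playgrounds, satisfying only existing creatures.
playground_total = EXISTING * PREF_PLAYGROUND

# Option B: Heaven, satisfying everyone who could exist
# (existing creatures count as satisfied too, since the FAI
# rewrites their playground preference, as described above).
heaven_total = (EXISTING + POTENTIAL) * PREF_EXISTENCE

print(f"{playground_total:.1e} vs {heaven_total:.1e}")  # 1.0e+10 vs 1.0e+24
```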
If you don't want Heaven, then you don't want a universally friendly AI. What you really want is an AI that is friendly just to you.
I doubt anyone here acts in a manner remotely similar to what utilitarianism recommends. Utilitarianism is an unbiological conception of how to behave - and consequently is extremely difficult for real organisms to adhere to. Real organisms frequently engage in activities such as nepotism. Some people pay lip service to utilitarianism because it sounds nice and signals a moral nature - but they don't actually adhere to it.