(Response to: You cannot be mistaken about (not) wanting to wirehead, Welcome to Heaven)
The Omega Corporation
Internal Memorandum
To: Omega, CEO
From: Gamma, Vice President, Hedonic Maximization
Sir, this concerns the newest product of our Hedonic Maximization Department, the Much-Better-Life Simulator. This revolutionary device allows our customers to essentially plug into the Matrix, except that instead of providing robots with power in flagrant disregard for the basic laws of thermodynamics, they experience a life that has been determined by rigorously tested algorithms to be the most enjoyable life they could ever experience. The MBLS even eliminates all memories of being placed in a simulator, generating a seamless transition into a life of realistic perfection.
Our department is baffled. Orders for the MBLS are significantly lower than estimated. We cannot fathom why every customer who could afford one has not already bought it. It is simply impossible to have a better life otherwise. Literally. Our customers' best possible real life has already been modeled and improved upon many times over by our programming. Yet, many customers have failed to make the transition. Some are even expressing shock and outrage over this product, and condemning its purchasers.
Extensive market research has succeeded only in baffling our researchers. People have even refused free trials of the device. Our researchers explained to them in perfectly clear terms that their current position is misinformed, and that once they tried the MBLS, they would never want to return to their own lives again. Several survey takers went so far as to specify that statement as their reason for refusing the free trial! They know that the MBLS will make their life so much better that they won't want to live without it, and they refuse to try it for that reason! Some cited their "utility" and claimed that they valued "reality" and "actually accomplishing something" over "mere hedonic experience." Somehow these organisms are incapable of comprehending that, inside the MBLS simulator, they will be able to experience the feeling of actually accomplishing feats far greater than they could ever accomplish in real life. Frankly, it's remarkable such people amassed enough credits to be able to afford our products in the first place!
You may recall that a Beta version had an off switch, enabling users to deactivate the simulation after a specified amount of time; the simulation could also be terminated externally with an appropriate code. These features received somewhat positive reviews from early focus groups, but were ultimately eliminated. No agent could reasonably want a device that could allow for the interruption of its perfect life. Accounting has suggested we respond to slack demand by releasing the earlier version at a discount; we await your input on this idea.
Profits aside, the greater good is at stake here. We feel that we should find every customer with sufficient credit to purchase this device, forcibly install them in it, and bill their accounts. They will immediately forget our coercion, and they will be many, many times happier. To do anything less than this seems criminal. Indeed, our ethics department is currently determining if we can justify delaying putting such a plan into action. Again, your input would be invaluable.
I can't help but worry there's something we're just not getting.
I think most people agree about the importance of "the substrate universe," whether that universe is this one or one actually higher than our own. But suppose we argued against a more compelling reconstruction of the proposal, modifying the experience machine in various ways? The original post did the opposite, of course - removing the off button in a gratuitous way that highlights the loss (rather than extension) of autonomy. Maybe if we repair the experience box too much it stops functioning as the same puzzle, but I don't see how an obviously broken box is that helpful an intuition pump.
For example, rather than just giving me plain old physics inside the machine, the Matrix experience of those who knew they were in the Matrix seemed nice: astonishing physical grace, the ability to fly and walk on walls, and access to tools and environments of one's choosing. Then you could graft on the good parts from Diaspora, so going into the box automatically comes with effective immortality, faster subjective thinking processes, real-time access to all the digitally accessible data of human civilization, and the ability to examine and cautiously optimize the algorithms of one's own mind using an "exoself" to adjust your "endoself" (so that you could, for example, edit addictions out of your psychological makeup except when you wanted to go on a "psychosis vacation").
And I'd also want to have a say in how human civilization progressed. If there were environmental/astronomical catastrophes I'd want to make sure they were either prevented or at least that people's simulators were safely evacuated. If we could build the kinds of simulators I'm talking about then people in simulators could probably build and teleoperate all kinds of neat machinery for emergencies, repair of the experience machines, space exploration, and so on.
Another argument sometimes made against experience machines is that they wouldn't be as "challenging" as the real world, because you'd be in a "merely man made" world... but the proper response is simply to augment the machine so that it offers more challenges, and more meaningful challenges, than mere reality - for example, the environments you could call up to give you arbitrary levels of challenge might be calibrated to be "slightly beyond your abilities about 50% of the time but always educational and fun".
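(As a toy sketch of what I mean by that calibration rule - every name and number here is my own invention, not anything from the original post - a simple "staircase" adjustment that nudges difficulty up after each success and down after each failure will naturally settle at the point where the user succeeds about half the time:)

```python
import random

def adjust_difficulty(difficulty, succeeded, step=0.05):
    """One step of a simple staircase rule: raise the difficulty after a
    success, lower it after a failure, so the long-run success rate
    settles near 50% - challenges land slightly beyond current ability."""
    return difficulty + step if succeeded else max(0.0, difficulty - step)

# Toy simulation: a user with a fixed skill of 0.6 attempts challenges,
# succeeding when their skill beats the current difficulty plus noise.
random.seed(0)
difficulty = 0.0
for _ in range(1000):
    succeeded = 0.6 > difficulty + random.uniform(-0.1, 0.1)
    difficulty = adjust_difficulty(difficulty, succeeded)

# After many rounds, difficulty hovers near the user's skill level,
# i.e. tasks are "slightly beyond your abilities about 50% of the time".
```

This is the same equal-step staircase idea used in psychophysics experiments to find a 50% threshold; a real simulator would presumably track skill per domain and adapt far more cleverly, but the equilibrium logic is the same.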
Spending time in one of these improved experience machines would be way better than, say, spending the equivalent time in college, because mere college graduates would pale in comparison to people who'd spent the same four years gaining subjective centuries of hands-on experience dealing with issues whose "challenge modes" were vastly more complex puzzles than most of the learning opportunities on our boring planet. Even for equivalent subjective time, I think the experience machines would be better, because they'd be calibrated precisely to the person with no worries about educational economies of scale... instead of lectures, conversations... instead of case studies, simulations... and so on.
The only intelligible arguments against the original "straw man" experience machine that remain compelling to me after repairing the design of the machine (though perhaps there are others I'm not clever enough to notice) are focused on social relationships.
First, one of the greatest challenges in the human environment is other humans. If you're setting up an experience machine scenario with a sliding scale of challenge, where do you get the characters from? Do you just "fabricate" the facade of someone who presents a particular kind of coordination challenge due to their difficult personal quirks? If you're going to simulate conflict, do you just "fabricate" enemies? And hurt them? Where do all these people come from, and what is the moral significance of their existence? Not being distressed by this is probably a character defect, but the alternative seems to involve inevitable distress.
And then on the other side of the coin, there are many people who I love as friends or family, even though they are not physically gorgeous, fully self-actualized, passionately moral, polymath "Greek gods". Which is probably a lucky thing, because neither am I :-P
But if they refused to enter repaired experience machines (networked, of course, so we could hang out anytime we wanted) the only way I could interact with them would be through an avatar in the substrate world where they were plodding along without the same growth opportunities. Would I eventually see them as grossly incapacitated caricatures of what humans are truly capable of? How much distress would that cause? Or suppose they opted in and then got vastly more out of their experience machine than I got out of mine? Would I feel inferior? Would I need to be protected from the awareness of my inferiority for my own good? Would they feel sorry for me? Would they need to be protected from my disappointing-ness? Would we all just drift apart, putting "facade interfaces" between each other, so everyone's understanding of other people drifted farther and farther out of calibration - me appearing better than actual to them and them worse than actual to me?
And then if something in the external universe supporting our experience machines posed a real challenge involving actual choices, we'd be back to the political challenges around coordinating with other people where the stakes are authentic and substantial. We'd probably debate from inside the experience boxes about what the world-manipulation machines should do, and the arguments would inevitably carry some measure of distress for any "losing factions".
It is precisely the existence of morally significant "non-me entities" that creates challenges that I don't see how to avoid under any variety of experience machine. It's not that I particularly care whether my desk is real or not - it's that I care that my family is real.
Given the state of human technology, one could argue that human civilization (especially in the developed world, and hopefully for everyone within a few decades) is already in something reasonably close to an optimal experience machine. We have video games. We have reasonable material comfort. We have raw NASA data online. We can cross our fingers and somewhat reasonably imagine technology improving medical care to cure death and stupidity... But the thing we may never have a solution to is the existence of people we care about, who are not exactly as they would be if their primary concern was our own happiness, while recognizing we are constrained in similar ways, especially when we care about multiple people who want different things for us.
Perhaps this is where we cue Sartre's version of a "three body problem"?
Unless... what if many of the challenges in politics and social interactions happen because people in general are so defective? If my blindnesses and failures compound against those of others, it sounds like a recipe for unhappiness to me. But if experience machines could really help us to become more the kind of people we wanted to be, perhaps other people would be less hellish after we got the hang of self-improvement?
I think you missed the bit where the machine gives you a version of your life that's provably the best you could experience. If that includes NASA and vast libraries then you get those.