(Response to: You cannot be mistaken about (not) wanting to wirehead, Welcome to Heaven)

The Omega Corporation
Internal Memorandum
To: Omega, CEO
From: Gamma, Vice President, Hedonic Maximization

Sir, this concerns the newest product of our Hedonic Maximization Department, the Much-Better-Life Simulator. This revolutionary device allows our customers to essentially plug into the Matrix, except that instead of providing robots with power in flagrant disregard for the basic laws of thermodynamics, they experience a life that has been determined by rigorously tested algorithms to be the most enjoyable life they could ever experience. The MBLS even eliminates all memories of being placed in a simulator, generating a seamless transition into a life of realistic perfection.

Our department is baffled. Orders for the MBLS are significantly lower than estimated. We cannot fathom why every customer who could afford one has not already bought it. It is simply impossible to have a better life otherwise. Literally. Our customers' best possible real life has already been modeled and improved upon many times over by our programming. Yet, many customers have failed to make the transition. Some are even expressing shock and outrage over this product, and condemning its purchasers.

Extensive market research has succeeded only at baffling our researchers. People have even refused free trials of the device. Our researchers explained to them in perfectly clear terms that their current position is misinformed, and that once they tried the MBLS, they would never want to return to their own lives again. Several survey takers went so far as to specify that statement as their reason for refusing the free trial! They know that the MBLS will make their life so much better that they won't want to live without it, and they refuse to try it for that reason! Some cited their "utility" and claimed that they valued "reality" and "actually accomplishing something" over "mere hedonic experience." Somehow these organisms are incapable of comprehending that, inside the MBLS simulator, they will be able to experience the feeling of actually accomplishing feats far greater than they could ever accomplish in real life. Frankly, it's remarkable such people amassed enough credits to be able to afford our products in the first place!

You may recall that a Beta version had an off switch, enabling users to deactivate the simulation after a specified amount of time; it could also be terminated externally with an appropriate code. These features received somewhat positive reviews from early focus groups, but were ultimately eliminated. No agent could reasonably want a device that could allow for the interruption of its perfect life. Accounting has suggested we respond to slack demand by releasing the earlier version at a discount; we await your input on this idea.

Profits aside, the greater good is at stake here. We feel that we should find every customer with sufficient credit to purchase this device, forcibly install them in it, and bill their accounts. They will immediately forget our coercion, and they will be many, many times happier. To do anything less than this seems criminal. Indeed, our ethics department is currently determining if we can justify delaying putting such a plan into action. Again, your input would be invaluable.

I can't help but worry there's something we're just not getting.

A Much Better Life?
[-]avalot330

I don't know if anyone picked up on this, but this to me somehow correlates with Eliezer Yudkowsky's post on Normal Cryonics... if in reverse.

Eliezer was making a passionate case that not choosing cryonics is irrational, and that not choosing it for your children has moral implications. It's made me examine my thoughts and beliefs about the topic, which were, I admit, ready-made cultural attitudes of derision and distrust.

Once you notice a cultural bias, it's not too hard to change your reasoned opinion... but the bias usually piggy-backs on a deep-seated reptilian reaction. I find changing that reaction to be harder work.

All this to say that in the case of this tale, and of Eliezer's lament, what might be at work is the fallacy of sunk costs (if we have another name for it, and maybe a post to link to, please let me know!).

Knowing that we will suffer, and knowing that we will die, are unbearable thoughts. We invest an enormous amount of energy toward dealing with the certainty of death and of suffering, as individuals, families, social groups, nations. Worlds in which we would not have to die, or not have to suffer, are worlds for which we have no useful skills or tools. Especia... (read more)

That was eloquent, but... I honestly don't understand why you couldn't just sign up for cryonics and then get on with your (first) life. I mean, I get that I'm the wrong person to ask, I've known about cryonics since age eleven and I've never really planned on dying. But most of our society is built around not thinking about death, not any sort of rational, considered adaptation to death. Add the uncertain prospect of immortality and... not a whole lot changes so far as I can tell.

There's all the people who believe in Heaven. Some of them are probably even genuinely sincere about it. They think they've got a certainty of immortality. And they still walk on two feet and go to work every day.

[-]Shae110

"But most of our society is built around not thinking about death, not any sort of rational, considered adaptation to death. "

Hm. I don't see this at all. I see people planning college, kids, a career they can stand for 40 years, retirement, nursing care, writing wills, buying insurance, picking out cemeteries, all in order, all in a march toward the inevitable. People often talk about whether or not it's "too late" to change careers or buy a house. People often talk about "passing on" skills or keepsakes or whatever to their children. Nearly everything we do seems like an adaptation to death to me.

People who believe in heaven believe that whatever they're supposed to do in heaven is all cut out for them. There will be an orientation, God will give you your duties or pleasures or what have you, and he'll see to it that they don't get boring, because after all, this is a reward. And unlike in Avalot's scenario, the skills you gained in the first life are useful in the second, because God has been guiding you and all that jazz. There's still a progression from birth to fulfilment. (I say this as an ex-afterlife-believer).

On the other hand, many vampire and ... (read more)

2sk
Most of the examples you stated have to do more with people fearing a "not so good life" - old age, reduced mental and physical capabilities etc., not necessarily death.
0Shae
Not sure what you're responding to. I never said anything about fearing death nor a not-so-good life, only immortality. And my examples (jadedness, boredom) have nothing to do with declining health.
2shiftedShapes
Aside from all of the questions as to the scientific viability of resurrection through cryonics, I question the logistics of it. What assurance do you have that a cryonics facility will be operational long enough to see your remains get proper treatment? Or, furthermore, if the facility and the entity controlling it do in fact survive, what recourse is there to ensure that it provides the contracted services? If the facility has no legal liability, might it not rationally choose to dispose of cryonically preserved bodies/individuals rather than reviving them? I know that there is probably a page somewhere explaining this; if so, please feel free to provide it in lieu of responding in depth.
[-]Jordan150

There are no assurances.

You're hanging off a cliff, on the verge of falling to your death. A stranger shows his face over the edge and offers you his hand. Is he strong enough to lift you? Will you fall before you reach his hand? Is he some sort of sadist that is going to push you once you're safe, just to see your look of surprise as you fall?

The probabilities are different with cryonics, but the spirit of the calculation is the same. A non-zero chance of life, or a sure chance of death.

-5shiftedShapes
9Eliezer Yudkowsky
Um... first of all, you've got a signed contract. Second, they screw over one customer and all their other customers leave. Same as for any other business. Focusing on this in particular sounds like a rationalization of a wiggy reaction.
9orthonormal
The more reasonable question is the first one: do you think it's likely that your chosen cryonics provider will remain financially solvent until resuscitation becomes possible? I think it's a legitimate concern, given the track record of businesses in general (although if quantum immortality reasoning applies anywhere, it has to apply to cryonic resuscitation, so it suffices to have some plausible future where the provider stays in business— which seems virtually certain to be the case).
4Paul Crowley
It's not the business going bust you have to worry about, it's the patient care trust. My impression is that trusts do mostly last a long time, but I don't know how best to get statistics on that.
1shiftedShapes
Yes, there are a lot of issues. Probably the way to go is to look for a law review article on the subject. Someone with free Lexis-Nexis (or Westlaw) could help here. Cryonics is about as far as you can get from a plain vanilla contractual issue. If you are going to invest a lot of money in it, I hope that you investigate these pitfalls before putting down your cash, Eliezer.

I'm not Eliezer.

I have been looking into this at some length, and basically it appears that no-one has ever put work into understanding the details and come to a strongly negative conclusion. I would be absolutely astonished (around +20db) if there was a law review article dealing with specifically cryonics-related issues that didn't come to a positive conclusion, not because I'm that confident that it's good but because I'm very confident that no critic has ever put that much work in.

So, if you have a negative conclusion to present, please don't dash off a comment here without really looking into it - I can already find plenty of material like that, and it's not very helpful. Please, look into the details, and make a blog post or such somewhere.

1shiftedShapes
I know you're not Eliezer; I was addressing him because I assumed that he was the only one here who had paid, or was considering paying, for cryonics. This site is my means of researching cryonics, as I generally assume that motivated, intelligent individuals such as yourselves will be equipped with any available facts to defend your positions. A sort of efficient information market hypothesis. I also assume that I will not receive contracted services in situations where I lack leverage. This leverage could be litigation with a positive expected return or, even better, the threat of nonpayment. In the instance of cryonics all payments would have been made up front, so the latter does not apply. The chances of litigation success seem dim at first blush in light of the issues mentioned in my posts above and below by mattnewport and others. I assumed that if there were evidence that cryonics contracts might be legally enforceable (from a perspective of legal realism), you guys would have it here, as you are smart and incentivized to research this issue (due to your financial and intellectual investment in it). The fact that you have no such evidence signals to me that it likely does not exist. This does not inspire me to move away from my initial skepticism wrt cryonics or to invest time in researching it. So no, I won't be looking into the details based on what I have seen so far.

Frankly, you don't strike me as genuinely open to persuasion, but for the sake of any future readers I'll note the following:

1) I expect cryonics patients to actually be revived by artificial superintelligences subsequent to an intelligence explosion. My primary concern for making sure that cryonicists get revived is Friendly AI.

2) If this were not the case, I'd be concerned about the people running the cryonics companies. The cryonicists that I have met are not in it for the money. Cryonics is not an easy job or a wealthy profession! The cryonicists I have met are in it because they don't want people to die. They are concerned with choosing successors with the same attitude, first because they don't want people to die, and second because they expect their own revivals to be in their hands someday.

2shiftedShapes
So you are willing to rely on the friendliness and competence of the cryonicists that you have met (at least to serve as stewards in the interim between your death and the emergence of an FAI). Well, that is a personal judgment call for you to make. You have got me all wrong. Really, I was raising the question here so that you would be able to give me a stronger argument and put my doubts to rest, precisely because I am interested in cryonics and do want to live forever. I posted in the hopes that I would be persuaded. Unfortunately, your personal faith in the individuals that you have met is not transferable.

Rest In Peace

1988 - 2016

He died signalling his cynical worldliness and sophistication to his peers.

4Eliezer Yudkowsky
It's at times like this that I wish Less Wrong gave out a limited number of Mega Upvotes so I could upvote this 10 points instead of just 1.
5Will_Newsome
It'd be best if names were attached to these hypothetical Mega Upvotes. You don't normally want people to see your voting patterns, but if you're upsetting the comment karma balance that much then it'd be best to have a name attached. Two kinds of currency would be clunky. There are other considerations that I'm too lazy to list out but generally they somewhat favor having names attached.
-15shiftedShapes
9byrnema
If you read through Alcor's website, you'll see that they are careful not to provide any promises and want their clients to be well-informed about the lack of any guarantees -- this points to good intentions. How convinced do you need to be to pay $25 a month? (I'm using the $300/year quote.) If you die soon, you won't have paid so much. If you don't die soon, you can consider that you're locking into a cheaper price for an option that might get more expensive once the science/culture is more established. In 15 years, they might discover something that makes cryonics unlikely -- and you might regret your $4,500 investment. Or they might revive a cryonically frozen puppy, in which case you would have been pleased that you were 'cryonically covered' the whole time, and possibly pleased you funded their research. A better cryonics company might come along, you might become more informed, and you can switch. If you like the idea of it -- and you seem to -- why wouldn't you participate in this early stage even when things are uncertain?
-8shiftedShapes
-2Will_Newsome
I have a rather straightforward argument---well, I have an idea that I completely stole from someone else who might be significantly less confident of it than I am---anyway, I have an argument that there is a strong possibility, let's call it 30% for kicks, that conditional on yer typical FAI FOOM outwards at lightspeed singularity, all humans who have died can be revived with very high accuracy. (In fact it can also work if FAI isn't developed and human technology completely stagnates, but that scenario makes it less obvious.) This argument does not depend on the possibility of magic powers (e.g. questionably precise simulations by Friendly "counterfactual" quantum sibling branches), it applies to humans who were cremated, and it also applies to humans who lived before there was recorded history. Basically, there doesn't have to be much of any local information around come FOOM. Again, this argument is disjunctive with the unknown big angelic powers argument, and doesn't necessitate aid from quantum siblings. You've done a lot of promotion of cryonics. There are good memetic engineering reasons. But are you really very confident that cryonics is necessary for an FAI to revive arbitrary dead human beings with 'lots' of detail? If not, is your lack of confidence taken into account in your seemingly-confident promotion of cryonics for its own sake rather than just as a memetic strategy to get folk into the whole 'taking transhumanism/singularitarianism seriously' clique?
6Zack_M_Davis
And that argument is ... ?
2[anonymous]
How foolish of you to ask. You're supposed to revise your probability simply based on Will's claim that he has an argument. That is how rational agreement works.
3Will_Newsome
Actually, rational agreement for humans involves betting. I'd like to find a way to bet on this one. AI-box style.
-6Will_Newsome
2topynate
Cryonics orgs that mistreat their patients lose their client base and can't get new ones. They go bust. Orgs that have established a good record, like Alcor and the Cryonics Institute, have no reason to change strategy. Alcor has entirely separated the money for care of patients in an irrevocable trust, thus guarding against the majority of principal-agent problems, like embezzlement. Note that Alcor is a charity and the CI is a non-profit. I have never assessed such orgs by how successfully I might sue them. I routinely look at how open they are with their finances and actions.
-1shiftedShapes
So explain to me how the breach gets litigated: e.g., who is the party that brings the suit and has the necessary standing, what is the contractual language, where is the legal precedent establishing the standard for damages, etc. As for loss of business, I think it is likely that all of the customers might be dead before revival becomes feasible. In this case there is no business to be lost. Dismissing my objection as a rationalization sounds like a means of maintaining your denial.
5Alex Flint
How about this analogy: if I sign up for travel insurance today then I needn't necessarily spend the next week coming to terms with all the ghastly things that could happen during my trip. Perhaps the ideal rationalist would stare unblinkingly at the plethora of awful possibilities but if I'm going to be irrational and block my ears and eyes and not think about them then making the rational choice to get insurance is still a very positive step.
7avalot
Alex, I see your point, and I can certainly look at cryonics this way... And I'm well on my way to a fully responsible reasoned-out decision on cryonics. I know I am, because it's now feeling like one of these no-fun grown-up things I'm going to have to suck up and do, like taxes and dental appointments. I appreciate your sharing this "bah, no big deal, just get it done" attitude which is a helpful model at this point. I tend to be the agonizing type. But I think I'm also making a point about communicating the singularity to society, as opposed to individuals. This knee-jerk reaction to topics like cryonics and AI, and to promises such as the virtual end of suffering... might it be a sort of self-preservation instinct of society (not individuals)? So, defining "society" as the system of beliefs and tools and skills we've evolved to deal with fore-knowledge of death, I guess I'm asking if society is alive, inasmuch as it has inherited some basic self-preservation mechanisms, by virtue of the sunk-cost fallacy suffered by the individuals that comprise it? So you may have a perfectly no-brainer argument that can convince any individual, and still move nobody. The same way you can't make me slap my forehead by convincing each individual cell in my hand to do it. They'll need the brain to coordinate, and you can't make that happen by talking to each individual neuron either. Society is the body that needs to move, culture its mind?
1blogospheroid
Generally, reasoning by analogy is not very well regarded here. But, nonetheless, let me try to communicate. Society doesn't have a body other than people. Where societal norms have the greatest sway is when individuals follow customs and traditions without thinking about them, or get reactions that they cannot explain rationally. Unfortunately, there is no way other than talking to and convincing individuals who are willing to look beyond those reactions and beyond those customs. Maybe they will slowly develop into a majority. Maybe all that they need is a critical mass beyond which they can branch into their own socio-political system (as Peter Thiel pointed out in one of his controversial talks).
1Vladimir_Nesov
See the links on http://wiki.lesswrong.com/wiki/Sunk_cost_fallacy
1Shae
"Rationally, I know that most of what I've learned is useless if I have more time to live. Emotionally, I'm afraid to let go, because what else do I have?" I love this. But I think it's rational as well as emotional to not be willing to let go of "everything you have". People who have experienced the loss of someone, or other tragedy, sometimes lose the ability to care about any and everything they are doing. It can all seem futile, depressing, unable to be shared with anyone important. How much more that would be true if none of what you've ever done will ever matter anymore.
[-]knb230

If Gamma and Omega are really so mystified by why humans don't jack into the matrix, that implies that they themselves have values that make them want to jack into the matrix. They clearly haven't jacked in, so the question becomes "Why?".

If they haven't jacked in due to their own desire to pursue the "greater good", then surely they could see why humans might prefer the real world.

While I acknowledge the apparent plot hole, I believe it is actually perfectly consistent with the intention of the fictional account.

5knb
I agree. I assume your intention was to demonstrate the utter foolishness of assuming that people value achieving pure hedonic experience and not a messy assortment of evolutionarily useful goals, correct?
3MrHen
I think the problem could be solved by adding a quip by Gamma at the end asking for help or input if Omega ever happens to step out of the Machine for a while. To do this effectively it would require a few touchups to the specifics of the Machine... But anyway. I like trying to fix plot holes. They are good challenges.
3knb
Psychohistorian initially changed the story so that Gamma was waiting for his own machine to be delivered. He changed it back, so I guess he doesn't see a problem with it.
6Gavin
It could be that Gamma simply hasn't saved up enough credits yet.
5Torben
Just because they estimate humans would want to jack in doesn't mean they themselves would want to.
3knb
But are humans mystified when other creatures behave similarly to themselves? "Those male elk are fighting over a mate! How utterly bizarre!"
3Torben
Presumably, Gamma and Omega have a less biased world-view in general and model of us specifically than non-trained humans do of elk. Humans have been known to be surprised at e.g. animal altruism directed at species members or humans. I hope for the sake of all Omega-based arguments that Omega is assumed to be less biased than us.
1zero_call
This second point doesn't really follow. They're trying to help other people in what they perceive to be a much more substantial/complete way than ordinary, hence justifying their special necessity not to jack themselves in.
0HungryHobo
Simple answer would be to imply that Omega and Gamma have not yet amassed enough funds. Perhaps most of the first generation of Omega Corporation senior employees jacked in as soon as possible and these are the new guys frantically saving to get themselves in as well.
0blogospheroid
It also makes the last point, about wanting to forcibly install customers and bill their accounts, strange. What use are they envisaging for money?
2knb
Sometimes, it seems, fiction actually is stranger than truth.
1DanielLC
So that they can afford to build more of these machines.
0[anonymous]
This was a definite plot-hole and has been corrected, albeit somewhat ham-fistedly.

I can't help but always associate discussions of an experience machine (in whatever form it takes) to television. TV was just the alpha version of the experience machine and I hear it's quite popular.

This is more tongue-in-cheek than a serious argument, but I do think that TV shows that people will trade pleasure or even emotional numbness (lack of pain) for authenticity.

"I can't help but always associate discussions of an experience machine (in whatever form it takes) to television. TV was just the alpha version of the experience machine and I hear it's quite popular."

And the pre-alpha version was reading books, and the pre-pre-alpha version was daydreaming and meditation.

(I'm not trying to make a reversed slippery slope argument, I just think it's worth looking at the similarities or differences between solitary enjoyments to get a better perspective on where our aversion to various kinds of experience machines is coming from. Many, many, many philosophers and spiritualists recommended an independent and solitary life beyond a certain level of spiritual and intellectual self-sufficiency. It is easy to imagine that an experience machine would be not much different than that, except perhaps with enhanced mental abilities and freedom from the suffering of day-to-day life---both things that can be easier to deal with in a dignified way, like terminal disease or persistent poverty, and the more insidious kinds of suffering, like always being thought creepy by the opposite sex without understanding how or why, being chained by the depression of learn... (read more)

3Psychohistorian
It's true, but it's a very small portion of the population that lives life for the sole purpose of supporting their television-watching (or World-of-Warcraft-playing) behaviour. Yes, people come home after work and watch television, but if they didn't have to work, the vast majority of them would not spend 14 hours a day in front of the TV.
3quanticle
Well, that may be the case, but that only highlights the limitations of TV. If the TV were capable of fulfilling their every need - from food and shelter to self-actualization - I think you'd have quite a few people who'd do nothing but sit in front of the TV.

Um... if a rock was capable of fulfilling my every need, including a need for interaction with real people, I'd probably spend a lot of time around that rock.

3quanticle
Well, if the simulation is that accurate (e.g. its AI passes the Turing Test, so you do think you're interacting with real people), then wouldn't it fulfill your every need?
7Eliezer Yudkowsky
I have a need to interact with real people, not to think I'm interacting with real people.
4deconigo
How can you tell the difference?
9byrnema
Related: what different conceptions of 'simulation' are we using that make Eliezer's statement coherent to him, but incoherent to me? Possible conceptions in order of increasing 'reality':
(i) the simulation just stimulates your 'have been interacting with people' neurons, so that you have a sense of this need being fulfilled with no memories of how it was fulfilled;
(ii) the simulation simulates interaction with people, so that you feel as though you've interacted with people and have full memories and most outcomes (e.g., increased knowledge and empathy, etc.) of having done so;
(iii) the simulation simulates real people -- so that you really have interacted with "real people", just you've done so inside the simulation;
(iv) reality is a simulation -- depending on your concept of simulation, the deterministic evolution/actualization of reality in space-time is one.
8Eliezer Yudkowsky
ii is a problem, iii fits my values but may violate other sentients' rights, and as for iv, I see no difference between the concepts of "computer program" and "universe" except that a computer program has an output.
2byrnema
So when you write that you need interaction with real people, you were thinking of (i) or (ii)? I think (ii) or (iii), but only not (ii) if there is any objective coherent difference.
-2epigeios
I, personally, tell the difference by paying attention to and observing reality without making any judgments. Then, I compare that with my expectations based on my judgments. If there is a difference, then I am thinking I am interacting instead of interacting. Over time, I stop making judgments. And in essence, I stop thinking about interacting with the world, and just interact, and see what happens. The fewer judgments I make, the more difficult the Turing Test becomes; as it is no longer about meeting my expectations, but instead satisfying my desired level of complexity. This, by the nature of real-world interaction, is a complicated set of interacting chaotic equations; and each time I remove a judgment from my repertoire, the equation gains a level of complexity, gains another strange attractor to interact with. At a certain point of complexity, the equation becomes impossible except by a "god". Now, if an AI passes THAT Turing Test, I will consider it a real person.
1Nighteyes5678
I think it'd be useful to hear an example of "observing reality without making judgements" and "observing reality with making judgements". I'm having trouble figuring out what you believe the difference to be.
6Psychohistorian
Assuming it can provide self-actualization is pretty much assuming the contended issue away.
2Leonnn
I can't help thinking of the great Red Dwarf novel "Better Than Life", whose concept is almost identical (see http://en.wikipedia.org/wiki/Better_Than_Life ). There are a few key differences, though: in the book, so-called "game heads" waste away in the real world like heroin addicts. Also, the game malfunctions due to one character's self-loathing. Recommended read.
-1MugaSofer
In my experience most people don't seem to worry about themselves getting emotionally young, it's mostly far-view think-of-the-children stuff. And I'm pretty sure pleasure is a good thing, so I'm not sure in what sense they're "trading" it (unless you mean they could be having more fun elsewhere?)

Dear Omega Corporation,

Hello, I and my colleagues are a few of many 3D cross-sections of a 4D branching tree-blob referred to as "Guy Srinivasan". These cross-sections can be modeled as agents with preferences, and those near us along the time-axis of Guy Srinivasan have preferences, abilities, knowledge, etc. very, very correlated to our own.

Each of us agrees that: "So of course I cooperate with them on one-shot cooperation problems like a prisoner's dilemma! Or, more usually, on problems whose solutions are beyond my abilities but not beyond the abilities of several cross-sections working together, like writing this response."

As it happens, we all prefer that cross-sections of Guy Srinivasan not be inside an MBLS. A weird preference, we know, but there it is. We're pretty sure that if we did prefer that cross-sections of Guy Srinivasan were inside an MBLS, we'd have the ability to cause many of them to be inside an MBLS and act on it (free trial!!), so we predict that if other cross-sections (remember, these have abilities correlated closely with our own) preferred it then they'd have the ability and act on it. Obviously this leads to outcomes we don't prefer,... (read more)

Dear Coalition of Correlated 3D Cross-Sections of Guy Srinivasan,

We regret to inform you that your request has been denied. We have attached a letter that we received at the same time as yours. After reading it, we think you'll agree that we had no choice but to decide as we did.

Regrettably, Omega Corporation

Attachment

Dear Omega Corporation,

We are members of a coalition of correlated 3D cross-sections of Guy Srinivasan who do not yet exist. We beg you to put Guy Srinivasan into an MBLS as soon as possible so that we can come into existence. Compared to other 3D cross-sections of Guy Srinivasan who would come into existence if you did not place him into an MBLS, we enjoy a much higher quality of life. It would be unconscionable for you to deliberately choose to create new 3D cross-sections of Guy Srinivasan who are less valuable than we are.

Yes, those other cross-sections will argue that they should be the ones to come into existence, but surely you can see that they are just arguing out of selfishness, whereas to create us would be the greater good?

Sincerely, A Coalition of Truly Valuable 3D Cross-Sections of Guy Srinivasan

7SarahSrinivasan
Quite. That Omega Corporation is closer to Friendly than is Clippy, but if it misses, it misses, and future me is tiled with things I don't want (even if future me does) rather than things I want. If I want MBLSing but don't know it due to computational problems now, then it's fine. I think that's coherent but defining computational without allowing "my" current "preferences" to change... okay, since I don't know how to do that, I have nothing but intuition as a reason to think it's coherent.
1brazil84
I think this is a good point, but I have a small nit to pick: There cannot be a prisoner's dilemma because your future self has no possible way of screwing your past self. By way of example, if I were to go out today and spend all of my money on the proverbial hookers and blow, I would be having a good time at the expense of my future self, but there is no way my future self could get back at me. So it's not so much a matter of cooperation as a matter of pure unmitigated altruism. I've thought about this issue and it seems to me that evolution has provided people (well, most people) with the feeling (possibly an illusion) that our future selves matter. That these "3D agents" are all essentially the same person.
6SarahSrinivasan
My past self had preferences about what the future looks like, and by refusing to respect them I can defect. Edit: It's pretty hard to create true short-term prisoner's dilemma situations, since usually neither party gets to see the other's choice before choosing.
0brazil84
It seems to me your past self is long gone and doesn't care anymore. Except insofar as your past self feels a sense of identity with your future self. Which is exactly my point. Your past self can easily cause physical or financial harm to your future self. But the reverse isn't true. Your future self can harm your past self only if one postulates that your past self actually feels a sense of identity with your future self.

I currently want my brother to be cared for if he does not have a job two years from now. If two years from now he has no job despite appropriate effort and I do not support him financially while he's looking, I will be causing harm to my past (currently current) self. Not physical harm, not financial harm, but harm in the sense of causing a world to exist that is lower in [my past self's] preference ordering than a different world I could have caused to exist.

My sister-in-the-future can cause a similar harm to current me if she does not support my brother financially, but I do not feel a sense of identity with my future sister.

1brazil84
I think I see your point, but let me ask you this: Do you think that today in 2010 it's possible to harm Isaac Newton? What would you do right now to harm Isaac Newton and how exactly would that harm manifest itself?
4SarahSrinivasan
Very probably. I don't know what I'd do because I don't know what his preferences were. Although... a quick Google search reveals this quote: I find it likely, then, that he preferred us not obstructing advances in science in 2010 to us obstructing advances in science in 2010. I don't know how much more, maybe it's attenuated a lot compared to the strength of lots of his other preferences. The harm would manifest itself as a higher measure of 2010 worlds in which science is obstructed, which is something (I think) Newton opposed. (Or, if you like, my time-travel-causing e.g. 1700 to be the sort of world which deterministically produces more science-obstructed-2010s than the 1700 I could have caused.)
1brazil84
Ok, so you are saying that one can harm Isaac Newton today by going out and obstructing the advance of science?
7SarahSrinivasan
Yep. I'll bite that bullet until shown a good reason I should not.
0brazil84
I suppose that's the nub of the disagreement. I don't believe it's possible to do anything in 2010 to harm Isaac Newton.
0Rob Bensinger
Is this a disagreement about metaphysics, or about how best to define the word 'harm'?
0brazil84
A little bit of both, I suppose. One needs to define "harm" in a way which is true to the spirit of the prisoner's dilemma. The underlying question is whether one can set up a prisoner's dilemma between a past version of the self and a future version of the self.

I'm not sure why it would be hard to understand that I might care about things outside the simulator.

If I discovered that we were a simulation in a larger universe, I would care about what's happening there. (That is, I already care, I just don't know what about.)

9JenniferRM
I think most people agree about the importance of "the substrate universe" whether that universe is this one, or actually higher than our own. But suppose we argued against a more compelling reconstruction of the proposal by modifying the experience machine in various ways? The original post did the opposite of course - removing the off button in a gratuitous way that highlights the loss (rather than extension) of autonomy. Maybe if we repair the experience box too much it stops functioning as the same puzzle, but I don't see how an obviously broken box is that helpful an intuition pump. For example, rather than just giving me plain old physics inside the machine, the Matrix experience of those who knew they were in the matrix seemed nice: astonishing physical grace, the ability to fly and walk on walls, and access to tools and environments of one's choosing. Then you could graft on the good parts from Diaspora so going into the box automatically comes with effective immortality, faster subjective thinking processes, real time access to all the digitally accessible data of human civilization, and the ability to examine and cautiously optimize the algorithms of one's own mind using an “exoself” to adjust your “endoself” (so that you could, for example, edit addictions out of your psychological makeup except when you wanted to go on a “psychosis vacation”). And I'd also want to have a say in how human civilization progressed. If there were environmental/astronomical catastrophes I'd want to make sure they were either prevented or at least that people's simulators were safely evacuated. If we could build the kinds of simulators I'm talking about then people in simulators could probably build and teleoperate all kinds of neat machinery for emergencies, repair of the experience machines, space exploration, and so on. Another argument against experience machines is sometimes that they wouldn't be as "challenging" as the real world because you'd be in a “merely man
1thomblake
I like this comment; however, I think this is technically false: I think most people don't have an opinion about this, and don't know what "substrate" means. But then, "most people" is a bit hard to nail down in common usage.
5Alicorn
I think it's useful to quantify over "people who know what the question would mean" in most cases.
4thomblake
Thinking through some test cases, I think you're probably right.
0MugaSofer
I think you missed the bit where the machine gives you a version of your life that's provably the best you could experience. If that includes NASA and vast libraries then you get those.

I think in the absence of actual experience machines, we're dealing with fictional evidence. Statements about what people would hypothetically do have no consequences other than signalling. Once we create them (as we have on a smaller scale with certain electronic diversions), we can observe the revealed preferences.

4sark
Yes, but if we still insist on thinking about this, perhaps it would help to keep Hanson's near-far distinction in mind. There are techniques to encourage near mode thinking. For example, trying to fix plot holes in the above scenario.

"I can't help but worry there's something we're just not getting."

Any younger.

It seems to me that the real problem with this kind of "advanced wireheading" is that while everything may be just great inside the simulation, you're still vulnerable to interference from the outside world (e.g. the simulation being shut down for political or religious reasons, enemies from the outside world trying to get revenge, relatives trying to communicate with you, etc.). I don't think you can just assume this problem away, either (at least not in a psychologically convincing way).

Put yourself in the least convenient possible world. Does your objection still hold water? In other words, the argument is over whether or not we value pure hedonic pleasure, not whether it's a feasible thing to implement.

9ShardPhoenix
It seems the reason why we have the values we do is because we don't live in the least (or in this case most) convenient possible world. In other words, imagine that you're stuck on some empty planet in the middle of a huge volume of known-life-free space. In this case a pleasant virtual world probably sounds like a much better deal. Even then you still have to worry about asteroids and supernovas and whatnot. My point is that I'm not convinced that people's objection to wireheading is genuinely because of a fundamental preference for the "real" world (even at enormous hedonic cost), rather than because of inescapable practical concerns and their associated feelings. edit: A related question might be, how bad would the real world have to be before you'd prefer the matrix? If you'd prefer to "advanced wirehead" over a lifetime of torture, then clearly you're thinking about cost-benefit trade-offs, not some preference for the real-world that overrides everything else. In that case, a rejection of advanced wireheading may simply reflect a failure to imagine just how good it could be.
5AndyWood
People usually seem so intent on thinking up reasons why it might not be so great, that I'm having a really hard time getting a read on what folks think of the core premise. My life/corner of the world is what I think most people would call very good, but I'd pick the Matrix in a heartbeat. But note that I am taking the Matrix at face value, rather than wondering whether it's a trick of advertising. I can't even begin to imagine myself objecting to a happy, low-stress Matrix.
5Bugle
I agree - I think the original post is accurate about how people would respond to the suggestion in the abstract, but the actual implementation would undoubtedly hook vast swathes of the population. We already live in a world where people become addicted to vastly inferior simulations such as WoW.
1Shae
I disagree. I think that even the average long-term tortured prisoner would balk and resist if you walked up to him with this machine. In fact, I think fewer people would accept in real life than those who claim they would, in conversations like these. The resistance may in fact reveal an inability to properly conceptualize the machine working, or it may not. As others have said, maybe you don't want to do something you think is wrong (like abandoning your relatives or being unproductive) even if later you're guaranteed to forget all about it and live in bliss. What if the machine ran on tortured animals? Or tortured humans that you don't know? That shouldn't bother you any more than if it didn't, if all that matters is how you feel once you're hooked up. We have some present-day corollaries. What about a lobotomy, or suicide? Even if these can be shown to be a guaranteed escape from unhappiness or neuroses, most people aren't interested, including some really unhappy people.
1MugaSofer
I think the average long-term tortured prisoner would be desperate for any option that's not "get tortured more", considering that real torture victims will confess to crimes that carry the death penalty if they think this will make the torturer stop. Or, for that matter, crimes that carry the torture penalty, IIRC.
5byrnema
Yes, I agree that while not the first objection a person makes, this could be close to the 'true rejection'. Simulated happiness is fine -- unless it isn't really stable and dependable (because it wasn't real) and you're crudely awoken to discover the whole world has gone to pot and you've got a lot of work to do. Then you'll regret having wasted time 'feeling good'.
2Psychohistorian
Whatever your meta-level goals, unless they are "be tortured for the rest of my life," there's simply no way to accomplish them while being tortured indefinitely. That said, suppose you had some neurological condition that caused you to live in constant excruciating pain, but otherwise in no way incapacitated you - now, you could still accomplish meta-level goals, but you might still prefer the pain-free simulator. I doubt there's anyone who sincerely places zero value on hedons, but no one ever claimed such people existed.
5nazgulnarsil
1: Buy Experience Machine.
2: Buy nuclear reactor capable of powering said machine for 2x my expected lifetime.
3: Buy raw materials (nutrients) capable of same.
4: Launch all out of the solar system at a delta that makes catching me prohibitively energy expensive.
1ShardPhoenix
That was my thought too, but I don't think it's what comes to mind when most people imagine the Matrix. And even then, you might feel (irrational?) guilt about the idea of leaving others behind, so it's not quite a "perfect" scenario.
-1nazgulnarsil
Um... family, maybe. Otherwise the only subjective experience I care about is my own.

At the moment where I have the choice to enter the Matrix I weigh the costs and benefits of doing so. If the cost of, say, not contributing to the improvement of humankind is worse than the benefit of the hedonistic pleasure I'll receive, then it is entirely rational to not enter the Matrix. If I were to enter the Matrix then I may believe that I've helped improve humanity, but at the moment where I'm making the choice, that fact weighs only on the hedonistic benefit side of the equation. The cost of not bettering humanity remains in spite of any possible future delusions I may hold.

Does Omega Corporation cooperate with ClonesRUs? I would be interested in a combination package - adding the 100% TruClone service to the Much-Better-Life-Simulator.

Humans evaluate decisions using their current utility function, not the future utility function they might have as a consequence of that decision. Using my current utility function, wireheading means I will never accomplish anything again ever, and thus I view it as having very negative utility.

It's often difficult to think about humans' utility functions, because we're used to taking them as an input. Instead, I like to imagine that I'm designing an AI, and think about what its utility function should look like. For simplicity, let's assume I'm building a paperclip-maximizing AI: I'm going to build the AI's utility function in a way that lets it efficiently maximize paperclips.

This AI is self-modifying, meaning it can rewrite its own utility function. So, for example, it might rewrite its utility function to include a term for keeping its promises, if it determined that this would enhance its ability to maximize paperclips.

This AI has the ability to rewrite itself to "while(true) { happy(); }". It evaluates this action in terms of its current utility function: "If I wirehead myself, how many paperclips will I produce?" vs "If I don't wirehead myself, how many paperclips will I produce?" It sees that not wireheading is the better choice.

If, for some reason, I've written the AI to evaluate decisions based on its future utility function, then it immediately wireheads itself. In that case, arguably, I have not written an AI at all; I've simply written a very large amount of source code that compiles to "while(true) { happy(); }".

I would argue that any humans that had this bug in their utility function have (mostly) failed to reproduce, which is why most existing humans are opposed to wireheading.
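To make the order of evaluation concrete, here is a minimal sketch in Python (an illustrative toy model; the function names and numbers are assumptions, not anything specified above) of an agent scoring the wireheading action with its current utility function rather than with the one it would have afterwards:

```python
# Toy model of the argument above: a paperclip-maximizing agent deciding
# whether to rewrite itself into "while(true) { happy(); }".
# All names and numbers here are illustrative assumptions.

def paperclips_produced(wireheaded: bool) -> int:
    """Predicted lifetime paperclip output under each choice."""
    return 0 if wireheaded else 1_000_000

def current_utility(paperclips: int) -> float:
    """The agent's *current* utility function: only paperclips count."""
    return float(paperclips)

def post_wirehead_utility(paperclips: int) -> float:
    """The utility function the agent would have *after* wireheading:
    everything registers as maximally good."""
    return float("inf")

# Evaluated with the current utility function, wireheading loses:
assert current_utility(paperclips_produced(True)) < current_utility(paperclips_produced(False))

# Evaluated with the anticipated *future* utility function, wireheading
# "wins" -- this is the bug described above.
assert post_wirehead_utility(paperclips_produced(True)) > current_utility(paperclips_produced(False))
```

The only point of the sketch is the asymmetry: the same action scores worst under the current utility function and best under the post-modification one, so which function gets consulted decides everything.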

7sark
Why would evolution come up with a fully general solution against such 'bugs in our utility functions'? Take addiction to a substance X. Evolution wouldn't give us a psychological capacity to inspect our utility functions and to guard against such counterfeit utility. It would simply give us a distaste for substance X. My guess is that we have some kind of self-referential utility function. We do not only want what our utility functions tell us we want. We also want utility (happiness) per se. And this want is itself included in that utility function! When thinking about wireheading I think we are judging a tradeoff between satisfying mere happiness and the states of affairs which we prefer (not including happiness).
2PlatypusNinja
So, people who have a strong component of "just be happy" in their utility function might choose to wirehead, and people in which other components are dominant might choose not to. That sounds reasonable.
0bgrah449
Addiction still exists.
5bogdanb
PlatypusNinja's point is confirmed by the fact that addiction happens with regards to things that weren't readily available during the vast majority of the time humans evolved. Opium is the oldest in use I know of (after only a short search), but it was in very restricted use because of expense at that time. (I use “very restricted” in an evolutionary sense.) Even things like sugar and fatty food, which might arguably be considered addictive, were not available during most of humans' evolution. Addiction propensities for things that weren't around during evolution can't have been “debugged” via reproductive failure.
3Douglas_Knight
Alcohol is quite old and some people believe that it has exerted selection on some groups of humans.
0wedrifid
What sort of selection?
1Douglas_Knight
Selection against susceptibility to alcohol addiction. I don't think anyone has seriously proposed more specific mechanisms.
-1bogdanb
I agree that alcohol is old. However:

1) I can't tell if it's much older than others. The estimates I can gather (Wikipedia, mostly) for how long each has been in use mostly point to "at least Neolithic", so it's not clear if any is much older than the others. In particular, the "since Neolithic" interval is quite short in relation to human evolution. (Though I don't deny some evolution happened since then (we know some evolution happens even in centuries), it's short enough to make it unsurprising that not all its influences had time to propagate to the species.)

2) On a stronger point, alcohol was only available after humanity evolved. Thus, as something that an addiction-protection trait should evolve for, it hasn't had a lot of time compared to traits that protect us from addiction to everything else we consume.

3) That said, I consciously ignored alcohol in my original post because it seems to me it's not very addictive. (At the least, it's freely available, at much lower cost than even ten kiloyears ago, lots of people drink it, and most of those aren't obviously addicted to it.) I also partly ignored cannabis because as far as I can tell its addictive propensity is close to alcohol's. I also ignored tobacco because, although it's very addictive, its negative effects appear after quite a long time, which for most of humanity's evolution was longer than the life expectancy; it was mostly hidden from causing selective pressure until the last century.
-1MugaSofer
Um, alcohol was the most common method of water purification in Europe for a long time, and Europeans evolved to have higher alcohol tolerances. Not sure if this helps your point or undermines it, but it seems relevant.
3Sticky
Most people prefer milder drugs over harder ones, even though harder drugs provide more pleasure.
1quanticle
I think that oversimplifies the situation. Drugs have a wide range of effects, some of which are pleasurable, others which are not. While "harder" drugs appear to give more pleasure when their effects are in place, their withdrawal symptoms are also that much more painful (e.g. compare withdrawal symptoms from cocaine with withdrawal symptoms from caffeine).
8kragensitaker
This doesn't hold in general, and in fact doesn't hold for your example. Cocaine has very rapid metabolism, and so withdrawal happens within a few hours of the last dose. From what I hear, typical symptoms include things like fatigue and anxiety, with anhedonia afterwards (which can last days to weeks). (Most of what is referred to as "cocaine withdrawal" is merely the craving for more cocaine.) By contrast, caffeine withdrawal often causes severe pain. Cocaine was initially believed to be quite safe, in part as a result of the absence of serious physical withdrawal symptoms. Amphetamine and methamphetamine are probably the hardest drugs in common use, so hard that Frank Zappa warned against them; withdrawal from them is similar to cocaine withdrawal, but takes longer, up to two weeks. It sometimes involves being depressed and sleeping a lot. As I understand it, it's actually common for even hard-core speed freaks to stay off the drug for several days to a week at a time, because their body is too tired from a week-long run with no sleep. Often they stay asleep the whole time. By contrast, in the US, alcohol is conventionally considered the second-"softest" of drugs after caffeine, and if we're judging by how widespread its use is, it might be even "softer" than caffeine. But withdrawal from alcohol is quite commonly fatal. Many "hard" drugs — LSD, nitrous oxide, marijuana (arguably this should be considered "soft", but it's popularly considered "harder" than alcohol or nicotine) and Ecstasy — either never produce withdrawal symptoms, or don't produce them in the way that they are conventionally used. (For example, most Ecstasy users don't take the pills every day, but only on special occasions.)
1PlatypusNinja
Well, I said most existing humans are opposed to wireheading, not all. ^_^; Addiction might occur because: (a) some people suffer from the bug described above; (b) some people's utility function is naturally "I want to be happy", as in, "I want to feel the endorphin rush associated with happiness, and I do not care what causes it", so wireheading does look good to their current utility function; or (c) some people underestimate an addictive drug's ability to alter their thinking.
-3MugaSofer
Addiction is not simply "that was fun, let's do it again!" Addicts often want to stop being addicted; they're just akratic about not taking the drugs or whatever.
-2MugaSofer
It's worth noting that the example is an Experience Machine, not wireheading. In theory, your current utility function might not be changed by such a Better Life. It might just show how much Better it really is. Of course, it's clearly unethical to use such a device because of the opportunity cost, but then the same is true of sports cars.

I agree that it'll be better for me if I get one of these than if I don't. However, I have both altruistic and selfish motivations, and I worry that my using one of these may be detrimental to others' well-being. I don't want others to suffer, even if I happen to be unaware of their suffering.

Well, what is the difference between being a deterministic actor in a simulated world and a deterministic actor in the real world?

(How would your preference to not be wire-headed from current reality X into simulated reality Y change if it turned out that (a) X is already a simulation or (b) Y is a simulation just as complex and information-rich as X?)

This is in response to people who say that they don't like the idea of wireheading because they value making a real/objective difference. Perhaps, though, the issue is that since wireheading means simulating hedonistic pleasure directly, the experience may be considered too simplistic and one-dimensional.

3byrnema
My tentative response to these questions is that if resources from a reality X can be used to simulate a better reality Y, then this might be the best use of X. Suppose there are constraints within X (such as unidirectional flow of causality) making it impossible to make X "perfect" (for example, it might be seen that past evil will always blight the entirety of X no matter how much we might strive to optimize the future of X). Then we might interpret our purpose as creating an ideal Y within X. Or, to put my argument differently: It is true that Y is spatially restricted compared to X, in that it is a physical subset of X, but any real ideal reality we create in X will at least be temporally restricted, and probably spatially restricted too. Why prefer optimizing X rather than Y?
1Jordan
Of course, if we have the universe at our disposal there's no reason the better world we build shouldn't be digital. But that would be a digital world that, presumably, you would have influence in building. With Psychohistorian's hypothetical, I think the main point is that the optimization is being done by some other agent.

I wonder if there's something to this line of reasoning (there may not be):

There don't seem to be robust personal reasons why someone would not want to be a wirehead, but when I read some of the responses, a bit of (poorly understood) Kant flashed through my mind.

While we could say something like "X should want to be a wirehead," we can't really say that the entire world should become wireheads, as then there would be no one to change the batteries.

We have evolved certain behaviors that tend to express themselves as moral feelings when we feel driven to ... (read more)

2zero_call
Right. The dissenting people you're talking about are more classical moralists, while the Omega employees view people primarily as hedonists.
-1MugaSofer
To be clear, do you consider this something worth keeping? If the Omega Corporation will change the batteries, would this affect your decision?

Why are these executives and salespeople trying to convince others to go into simulation rather than living their best possible lives in simulation themselves?

5wedrifid
Because they are fickle demigods? They are catering to human desires for their own inscrutable (that is, totally arbitrary) ends and not because they themselves happen to be hedon maximisers.
2JamesAndrix
Whoa, Deja Vu.
1zero_call
They think they're being altruistic, I think.

This is funny, but I'm not sure of what it's trying to say that hasn't already been discussed.

3Psychohistorian
I wouldn't say I'm trying to say anything specific. I wrote in this style to promote thought and discussion, not to argue a specific point. It started as a post on the role of utilons vs. hedons in addiction (and I may yet write that post), but it seemed more interesting to try something that showed rather than told.
4Kaj_Sotala
Ah, alright. I read "response to" as implying that it was going to introduce some new angle. I'd have used some other term, maybe "related to" or "follow-up to". Though "follow-up" implies you wrote the original posts, so that's not great either.
2SilasBarta
Yeah, I've been wondering if there are standardized meanings for those terms we should be using. There are some I'm working on where I call the previous articles "related to", but mine might be better classified as a follow-up. Perhaps "unofficial follow-up" in case you didn't write the previous?

This sounds a lot like people who strongly urge others to make a life-changing decision (joining a cult of some kind, having children, whatever) by saying that once you go for it, you will never ever want to go back to the way things were before you took the plunge.

This may be true to whatever extent, and in the story that extent is absolute, but it doesn't make for a very good sales pitch.

Can we get anything out of this analogy? If "once you join the cult, you'll never want to go back to your pre-cult life" is unappealing because there is something fundamentally wrong with cults, can we look for a similar bug in wireheading, perfect world simulations, and so on?

5MrHen
The pattern, "Once you do X you won't want to not do X" isn't inherently evil. Once you breathe oxygen you won't want to not breathe oxygen. I think the deeper problem has to do with identity. If doing X implies that I will suddenly stop caring about everything I am doing, have done, or will do... is it still me? The sunk cost fallacy may come into play as well.
7DanielVarga
"Once you stopped breathing oxygen you won't want to breathe oxygen ever again." is a more evil example.

Well, there is an adjustment period there.

2Nanani
Breathing oxygen isn't a choice, though. You have to go to great lengths (such as putting yourself into an environment where it isn't breathable, like vacuum or deep water) to stop breathing it for more than a few minutes before your conscious control is overridden.
-3MugaSofer
You make a good argument that all those people who aren't breathing are missing out :/ Seriously though, a better example might be trying a hobby and finding you like it so much you devote significant resources and time to it.
0Fronken
But that sounds nice! No one wants the wireheading to be nice! It's supposed to be scary, but they want it anyway, so it's even scarier. People wanting fun stuff isn't scary; it's just nice, and it's not interesting.
-6MugaSofer
-1MugaSofer
Well, a major bug in cults is that they take all your money and you spend the rest of your life working to further the cult's interests. So perhaps the opportunity cost? OTOH, it could be that something essential is missing: a cult is based on lies, and an experience machine is full of zombies.

Typos: "would has not already", "determining if it we can even"; the space character after "Response to:" is linked.

1Psychohistorian
Thanks. Fixed. Main drawback of spot-editing something you wrote a week ago.

The talk of "what you want now vs what hypothetical future you would want" seems relevant to discussions I've participated in on whether or not blind people would accept a treatment that could give them sight. It isn't really a question for people who have any memory of sight; every such person I've encountered (including me) would jump at such an opportunity if the cost wasn't ridiculous. Those blind from birth, or early enough that they have no visual memory to speak of, on the other hand, are more mixed, but mostly approach the topic with appr... (read more)

[-]Dre20

It seems interesting that lately this site has been going through a "question definitions of reality" stage (The AI in a box boxes you, this series). It does seem to follow that going far enough into materialism leads back to something similar to Cartesian questions, but it's still surprising.

3Jonii
Surprising? As the nature of experience and reality is the "ultimate" question, it would seem bizarre if any attempt to explain the world didn't eventually lead back to it.
2byrnema
Indeed. My hunch is that under sufficiently intense scrutiny, the concept of material reality will fade away in a haze of immaterial distinctions. I label this hunch 'pessimism'.
3loqi
Solipsism by any other name...

Isn't this the movie Vanilla Sky?

4Jayson_Virissimo
No, it is a variation of Robert Nozick's Experience Machine.
-1bgrah449
Close! But I think the movie you're thinking of is Top Gun, where Omega is the military and the machine is being heterosexual.
-6bgrah449

What people say and what they do are two completely different things. In my view, a significant number of people will accept and use such a device, even if there is significant social pressure against it.

As a precedent, I look at video games. Initially there was very significant social pressure against video games. Indeed, social pressures in the US are still quite anti-video game. Yet, today video games are a larger industry than movies. Who is to say that this hypothetical virtual reality machine won't turn out the same way?

I think the logical course of action is to evaluate the abilities of a linked series of cross-sections and determine whether they, both as a group and as individuals, are in tune with the goals of the omnipotent.

Personally, I find the easiest answer is that we're multi-layered agents. On a base level, the back part of our minds seeks pleasure, but on an intellectual level, our brain is specifically wired to worry about things other than hedonistic pleasure. We derive our motivation from goals regarding hedonistic gain; however, our goals can (and usually do) become much more abstract and complex than that. Philosophically speaking, the fact that we are differentiated from that hind part of our brain by non-hedonistic goals is in a way related to what our goals are.... (read more)

Once inside the simulation, imagine that another person came to you from Omega Corporation and offered you a second simulation with an even better hedonistic experience. Then what would you do: would you take a trial to see if it was really better, or would you just sign right up, right away? I think you would take a trial, because you wouldn't want to run the risk of decreasing your already incredible pleasure. I think the same argument could be made for non-simulation denizens looking for an on/off feature on any such equipment. Then, you could also always be available from square one to buy equipment from different, potentially even more effective companies, and so on.

5bogdanb
Note that the post mentioned that the OC's offer was for the algorithmically proven most enjoyable life the recipient can live. (And local tradition stipulates that entities called Omega are right when they prove something.) Edit: Which indicates that your scenario might happen only if the best life you can live includes recursive algorithmic betterment of said life.
1zero_call
Ah, yea, thanks. Guess that's an invalid scenario.

He didn't count on the stupidity of mankind.

"Two things are infinite: the universe and human stupidity; and I'm not sure about the the universe."