I don't know if anyone picked up on this, but this to me somehow correlates with Eliezer Yudkowsky's post on Normal Cryonics... if in reverse.
Eliezer was making a passionate case that not choosing cryonics is irrational, and that not choosing it for your children has moral implications. It's made me examine my thoughts and beliefs about the topic, which were, I admit, ready-made cultural attitudes of derision and distrust.
Once you notice a cultural bias, it's not too hard to change your reasoned opinion... but the bias usually piggy-backs on a deep-seated reptilian reaction. I find changing that reaction to be harder work.
All this to say that in the case of this tale, and of Eliezer's lament, what might be at work is the fallacy of sunk costs (if we have another name for it, and maybe a post to link to, please let me know!).
Knowing that we will suffer, and knowing that we will die, are unbearable thoughts. We invest an enormous amount of energy toward dealing with the certainty of death and of suffering, as individuals, families, social groups, nations. Worlds in which we would not have to die, or not have to suffer, are worlds for which we have no useful skills or tools. Especia...
That was eloquent, but... I honestly don't understand why you couldn't just sign up for cryonics and then get on with your (first) life. I mean, I get that I'm the wrong person to ask, I've known about cryonics since age eleven and I've never really planned on dying. But most of our society is built around not thinking about death, not any sort of rational, considered adaptation to death. Add the uncertain prospect of immortality and... not a whole lot changes so far as I can tell.
There's all the people who believe in Heaven. Some of them are probably even genuinely sincere about it. They think they've got a certainty of immortality. And they still walk on two feet and go to work every day.
"But most of our society is built around not thinking about death, not any sort of rational, considered adaptation to death. "
Hm. I don't see this at all. I see people planning college, kids, a career they can stand for 40 years, retirement, nursing care, writing wills, buying insurance, picking out cemeteries, all in order, all in a march toward the inevitable. People often talk about whether or not it's "too late" to change careers or buy a house. People often talk about "passing on" skills or keepsakes or whatever to their children. Nearly everything we do seems like an adaptation to death to me.
People who believe in heaven believe that whatever they're supposed to do in heaven is all cut out for them. There will be an orientation, God will give you your duties or pleasures or what have you, and he'll see to it that they don't get boring, because after all, this is a reward. And unlike in Avalot's scenario, the skills you gained in the first life are useful in the second, because God has been guiding you and all that jazz. There's still a progression of birth to fulfillment. (I say this as an ex-afterlife-believer).
On the other hand, many vampire and ...
There are no assurances.
You're hanging off a cliff, on the verge of falling to your death. A stranger shows his face over the edge and offers you his hand. Is he strong enough to lift you? Will you fall before you reach his hand? Is he some sort of sadist who will push you off again once you're safe, just to see your look of surprise as you fall?
The probabilities are different with cryonics, but the spirit of the calculation is the same. A non-zero chance of life, or a sure chance of death.
I'm not Eliezer.
I have been looking into this at some length, and basically it appears that no-one has ever put work into understanding the details and come to a strongly negative conclusion. I would be absolutely astonished (around +20db) if there was a law review article dealing with specifically cryonics-related issues that didn't come to a positive conclusion, not because I'm that confident that it's good but because I'm very confident that no critic has ever put that much work in.
So, if you have a negative conclusion to present, please don't dash off a comment here without really looking into it - I can already find plenty of material like that, and it's not very helpful. Please, look into the details, and make a blog post or such somewhere.
Frankly, you don't strike me as genuinely open to persuasion, but for the sake of any future readers I'll note the following:
1) I expect cryonics patients to actually be revived by artificial superintelligences subsequent to an intelligence explosion. My primary concern for making sure that cryonicists get revived is Friendly AI.
2) If this were not the case, I'd be concerned about the people running the cryonics companies. The cryonicists that I have met are not in it for the money. Cryonics is not an easy job or a wealthy profession! The cryonicists I have met are in it because they don't want people to die. They are concerned with choosing successors with the same attitude, first because they don't want people to die, and second because they expect their own revivals to be in their hands someday.
Rest In Peace
1988 - 2016
He died signalling his cynical worldliness and sophistication to his peers.
If Gamma and Omega are really so mystified by why humans don't jack into the matrix, that implies that they themselves have values that make them want to jack into the matrix. They clearly haven't jacked in, so the question becomes "Why?".
If they haven't jacked in due to their own desire to pursue the "greater good", then surely they could see why humans might prefer the real world.
While I acknowledge the apparent plothole, I believe it is actually perfectly consistent with the intention of the fictional account.
I can't help but always associate discussions of an experience machine (in whatever form it takes) to television. TV was just the alpha version of the experience machine and I hear it's quite popular.
This is more tongue-in-cheek than a serious argument, but I do think that TV shows that people will trade authenticity for pleasure, or even for emotional numbness (lack of pain).
"I can't help but always associate discussions of an experience machine (in whatever form it takes) to television. TV was just the alpha version of the experience machine and I hear it's quite popular."
And the pre-alpha version was reading books, and the pre-pre-alpha version was daydreaming and meditation.
(I'm not trying to make a reversed slippery-slope argument; I just think it's worth looking at the similarities and differences between solitary enjoyments to get a better perspective on where our aversion to various kinds of experience machines is coming from. Many, many, many philosophers and spiritualists have recommended an independent and solitary life beyond a certain level of spiritual and intellectual self-sufficiency. It is easy to imagine that an experience machine would not be much different from that, except perhaps with enhanced mental abilities and freedom from the suffering of day-to-day life: both the kinds that can be faced in a dignified way, like terminal disease or persistent poverty, and the more insidious kinds, like always being thought creepy by the opposite sex without understanding how or why, being chained by the depression of learn...
Um... if a rock was capable of fulfilling my every need, including a need for interaction with real people, I'd probably spend a lot of time around that rock.
Dear Omega Corporation,
Hello, I and my colleagues are a few of many 3D cross-sections of a 4D branching tree-blob referred to as "Guy Srinivasan". These cross-sections can be modeled as agents with preferences, and those near us along the time-axis of Guy Srinivasan have preferences, abilities, knowledge, etc. very, very correlated to our own.
Each of us agrees that: "So of course I cooperate with them on one-shot cooperation problems like a prisoner's dilemma! Or, more usually, on problems whose solutions are beyond my abilities but not beyond the abilities of several cross-sections working together, like writing this response."
As it happens, we all prefer that cross-sections of Guy Srinivasan not be inside an MBLS. A weird preference, we know, but there it is. We're pretty sure that if we did prefer that cross-sections of Guy Srinivasan were inside an MBLS, we'd have the ability to cause many of them to be inside an MBLS and act on it (free trial!!), so we predict that if other cross-sections (remember, these have abilities correlated closely with our own) preferred it then they'd have the ability and act on it. Obviously this leads to outcomes we don't prefer,...
Dear Coalition of Correlated 3D Cross-Sections of Guy Srinivasan,
We regret to inform you that your request has been denied. We have attached a letter that we received at the same time as yours. After reading it, we think you'll agree that we had no choice but to decide as we did.
Regrettably, Omega Corporation
Attachment
Dear Omega Corporation,
We are members of a coalition of correlated 3D cross-sections of Guy Srinivasan who do not yet exist. We beg you to put Guy Srinivasan into an MBLS as soon as possible so that we can come into existence. Compared to other 3D cross-sections of Guy Srinivasan who would come into existence if you did not place him into an MBLS, we enjoy a much higher quality of life. It would be unconscionable for you to deliberately choose to create new 3D cross-sections of Guy Srinivasan who are less valuable than we are.
Yes, those other cross-sections will argue that they should be the ones to come into existence, but surely you can see that they are just arguing out of selfishness, whereas to create us would be the greater good?
Sincerely, A Coalition of Truly Valuable 3D Cross-Sections of Guy Srinivasan
I currently want my brother to be cared for if he does not have a job two years from now. If two years from now he has no job despite appropriate effort and I do not support him financially while he's looking, I will be causing harm to my past (currently current) self. Not physical harm, not financial harm, but harm in the sense of causing a world to exist that is lower in [my past self's] preference ordering than a different world I could have caused to exist.
My sister-in-the-future can cause a similar harm to current me if she does not support my brother financially, but I do not feel a sense of identity with my future sister.
I'm not sure why it would be hard to understand that I might care about things outside the simulator.
If I discovered that we were a simulation in a larger universe, I would care about what's happening there. (That is, I already care; I just don't know about what.)
I think in the absence of actual experience machines, we're dealing with fictional evidence. Statements about what people would hypothetically do have no consequences other than signalling. Once we create them (as we have on a smaller scale with certain electronic diversions), we can observe the revealed preferences.
It seems to me that the real problem with this kind of "advanced wireheading" is that while everything may be just great inside the simulation, you're still vulnerable to interference from the outside world (e.g., the simulation being shut down for political or religious reasons, enemies from the outside world trying to get revenge, relatives trying to communicate with you, etc.). I don't think you can just assume this problem away, either (at least not in a psychologically convincing way).
Put yourself in the least convenient possible world. Does your objection still hold water? In other words, the argument is over whether or not we value pure hedonic pleasure, not whether it's a feasible thing to implement.
At the moment where I have the choice to enter the Matrix I weigh the costs and benefits of doing so. If the cost of, say, not contributing to the improvement of humankind is worse than the benefit of the hedonistic pleasure I'll receive, then it is entirely rational not to enter the Matrix. If I were to enter the Matrix then I may believe that I've helped improve humanity, but at the moment where I'm making the choice, that fact weighs only on the hedonistic benefit side of the equation. The cost of not bettering humanity remains in spite of any possible future delusions I may hold.
Does Omega Corporation cooperate with ClonesRUs? I would be interested in a combination package - adding the 100% TruClone service to the Much-Better-Life-Simulator.
Humans evaluate decisions using their current utility function, not their future utility function as a potential consequence of that decision. Using my current utility function, wireheading means I will never accomplish anything again ever, and thus I view it as having very negative utility.
It's often difficult to think about humans' utility functions, because we're used to taking them as an input. Instead, I like to imagine that I'm designing an AI, and think about what its utility function should look like. For simplicity, let's assume I'm building a paperclip-maximizing AI: I'm going to build the AI's utility function in a way that lets it efficiently maximize paperclips.
This AI is self-modifying, meaning it can rewrite its own utility function. So, for example, it might rewrite its utility function to include a term for keeping its promises, if it determined that this would enhance its ability to maximize paperclips.
This AI has the ability to rewrite itself to "while(true) { happy(); }". It evaluates this action in terms of its current utility function: "If I wirehead myself, how many paperclips will I produce?" vs "If I don't wirehead myself, how many paperclips will I produce?" It sees that not wireheading is the better choice.
If, for some reason, I've written the AI to evaluate decisions based on its future utility function, then it immediately wireheads itself. In that case, arguably, I have not written an AI at all; I've simply written a very large amount of source code that compiles to "while(true) { happy(); }".
I would argue that any humans that had this bug in their utility function have (mostly) failed to reproduce, which is why most existing humans are opposed to wireheading.
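A minimal sketch of that comparison in Python, with all names and numbers invented purely for illustration: the same two actions are scored once by the agent's current (paperclip-counting) utility function and once by the utility function it would hold after wireheading.

```python
# Toy model (hypothetical names): a paperclip maximizer deciding whether
# to rewrite itself into "while(true) { happy(); }".

def paperclips_produced(action: str) -> float:
    """Toy world model: paperclips resulting from each action."""
    return {"keep_building": 1_000_000, "wirehead": 0}[action]

def current_utility(action: str) -> float:
    """The AI's current utility function: it only values paperclips."""
    return paperclips_produced(action)

def utility_after(action: str) -> float:
    """Buggy rule: score each action with the utility function the AI
    would hold *after* taking it. The post-wirehead self rates its
    situation as maximally good."""
    if action == "wirehead":
        return float("inf")
    return paperclips_produced(action)

actions = ["keep_building", "wirehead"]

# Evaluated by the current utility function, wireheading produces zero
# paperclips, so the AI declines to wirehead.
print(max(actions, key=current_utility))   # -> keep_building

# Evaluated by the anticipated future utility function, the wireheaded
# self is infinitely happy, so the AI wireheads immediately.
print(max(actions, key=utility_after))     # -> wirehead
```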
I agree that it'll be better for me if I get one of these than if I don't. However, I have both altruistic and selfish motivations, and I worry that my using one of these may be detrimental to others' well-being. I don't want others to suffer, even if I happen to be unaware of their suffering.
Well, what is the difference between being a deterministic actor in a simulated world and a deterministic actor in the real world?
(How would your preference to not be wire-headed from current reality X into simulated reality Y change if it turned out that (a) X is already a simulation or (b) Y is a simulation just as complex and information-rich as X?)
This is in response to people who say that they don't like the idea of wire-heading because they value making a real/objective difference. Perhaps, though, the issue is that since wire-heading means simulating hedonistic pleasure directly, the experience may be considered too simplistic and one-dimensional.
I wonder if there's something to this line of reasoning (there may not be):
There don't seem to be robust personal reasons why someone would not want to be a wirehead, but when reading some of the responses a bit of (poorly understood) Kant flashed through my mind.
While we could say something like "X should want to be a wirehead," we can't really say that the entire world should become wireheads, as then there would be no one to change the batteries.
We have evolved certain behaviors that tend to express themselves as moral feelings when we feel driven to ...
Why are these executives and salespeople trying to convince others to go into simulation rather than living their best possible lives in simulation themselves?
This sounds a lot like people who strongly urge others to take on a life-changing decision (joining a cult of some kind, having children, whatever) by saying that once you go for it, you will never ever want to go back to the way things were before you took the plunge.
This may be true to whatever extent, and in the story that extent is absolute, but it doesn't make for a very good sales pitch.
Can we get anything out of this analogy? If "once you join the cult, you'll never want to go back to your pre-cult life" is unappealing because there is something fundamentally wrong with cults, can we look for a similar bug in wireheading, perfect world simulations, and so on?
Typos: "would has not already", "determining if it we can even"; the space character after "Response to:" is linked.
The talk of "what you want now vs what hypothetical future you would want" seems relevant to discussions I've participated in on whether or not blind people would accept a treatment that could give them sight. It isn't really a question for people who have any memory of sight; every such person I've encountered (including me) would jump at such an opportunity if the cost wasn't ridiculous. Those blind from birth, or early enough that they have no visual memory to speak of, on the other hand, are more mixed, but mostly approach the topic with appr...
It seems interesting that lately this site has been going through a "question definitions of reality" stage (The AI in a Box boxes you, this series). It does seem to follow that going far enough into materialism leads back to something similar to Cartesian questions, but it's still surprising.
What people say and what they do are two completely different things. In my view, a significant number of people will accept and use such a device, even if there is significant social pressure against it.
As a precedent, I look at video games. Initially there was very significant social pressure against video games. Indeed, social pressures in the US are still quite anti-video game. Yet, today video games are a larger industry than movies. Who is to say that this hypothetical virtual reality machine won't turn out the same way?
I think the logical course of action is to evaluate the abilities of a linked series of cross-sections and determine whether they, both as a group and as individuals, are in tune with the goals of the omnipotent.
Personally, I find the easiest answer is that we're multi-layered agents. On a base level, the back part of our minds seeks pleasure, but on an intellectual level, our brain is specifically wired to worry about things other than hedonistic pleasure. We derive our motivation from goals regarding hedonistic gain; however, our goals can (and usually do) become much more abstract and complex than that. Philosophically speaking, the fact that we are differentiated from that hind part of our brain by non-hedonistic goals is in a way related to what our goals are....
Once inside the simulation, imagine that another person came to you from Omega Corporation and offered you a second simulation with an even better hedonistic experience. Then what would you do: would you take a trial to see if it was really better, or would you just sign right up, right away? I think you would take a trial because you wouldn't want to run the risk of decreasing your already incredible pleasure. I think the same argument could be made for non-simulation denizens looking for an on/off feature on any such equipment. Then, you could also always be available from square one to buy equipment from different, potentially even more effective companies, and so on.
He didn't count on the stupidity of mankind.
"Two things are infinite: the universe and human stupidity; and I'm not sure about the the universe."
(Response to: You cannot be mistaken about (not) wanting to wirehead, Welcome to Heaven)
The Omega Corporation
Internal Memorandum
To: Omega, CEO
From: Gamma, Vice President, Hedonic Maximization
Sir, this concerns the newest product of our Hedonic Maximization Department, the Much-Better-Life Simulator. This revolutionary device allows our customers to essentially plug into the Matrix, except that instead of providing robots with power in flagrant disregard for the basic laws of thermodynamics, they experience a life that has been determined by rigorously tested algorithms to be the most enjoyable life they could ever experience. The MBLS even eliminates all memories of being placed in a simulator, generating a seamless transition into a life of realistic perfection.
Our department is baffled. Orders for the MBLS are significantly lower than estimated. We cannot fathom why every customer who could afford one has not already bought it. It is simply impossible to have a better life otherwise. Literally. Our customers' best possible real life has already been modeled and improved upon many times over by our programming. Yet, many customers have failed to make the transition. Some are even expressing shock and outrage over this product, and condemning its purchasers.
Extensive market research has succeeded only at baffling our researchers. People have even refused free trials of the device. Our researchers explained to them in perfectly clear terms that their current position is misinformed, and that once they tried the MBLS, they would never want to return to their own lives again. Several survey takers went so far as to specify that statement as their reason for refusing the free trial! They know that the MBLS will make their life so much better that they won't want to live without it, and they refuse to try it for that reason! Some cited their "utility" and claimed that they valued "reality" and "actually accomplishing something" over "mere hedonic experience." Somehow these organisms are incapable of comprehending that, inside the MBLS simulator, they will be able to experience the feeling of actually accomplishing feats far greater than they could ever accomplish in real life. Frankly, it's remarkable such people amassed enough credits to be able to afford our products in the first place!
You may recall that a Beta version had an off switch, which enabled users to deactivate the simulation after a specified amount of time; the simulation could also be terminated externally with an appropriate code. These features received somewhat positive reviews from early focus groups, but were ultimately eliminated. No agent could reasonably want a device that could allow for the interruption of its perfect life. Accounting has suggested we respond to slack demand by releasing the earlier version at a discount; we await your input on this idea.
Profits aside, the greater good is at stake here. We feel that we should find every customer with sufficient credit to purchase this device, forcibly install them in it, and bill their accounts. They will immediately forget our coercion, and they will be many, many times happier. To do anything less than this seems criminal. Indeed, our ethics department is currently determining if we can justify delaying putting such a plan into action. Again, your input would be invaluable.
I can't help but worry there's something we're just not getting.