A Much Better Life?
(Response to: You cannot be mistaken about (not) wanting to wirehead, Welcome to Heaven)
The Omega Corporation
Internal Memorandum
To: Omega, CEO
From: Gamma, Vice President, Hedonic Maximization
Sir, this concerns the newest product of our Hedonic Maximization Department, the Much-Better-Life Simulator. This revolutionary device allows our customers to essentially plug into the Matrix, except that instead of providing robots with power in flagrant disregard for the basic laws of thermodynamics, they experience a life that our rigorously tested algorithms have determined to be the most enjoyable one possible for them. The MBLS even eliminates all memories of being placed in a simulator, generating a seamless transition into a life of realistic perfection.
Our department is baffled. Orders for the MBLS are significantly lower than estimated. We cannot fathom why every customer who could afford one has not already bought it. It is simply impossible to have a better life otherwise. Literally. Our customers' best possible real life has already been modeled and improved upon many times over by our programming. Yet, many customers have failed to make the transition. Some are even expressing shock and outrage over this product, and condemning its purchasers.
Two Truths and a Lie
Response to: Man-with-a-hammer syndrome.
It's been claimed that there is no way to spot Affective Death Spirals, or cultish obsession with the One Big Idea of Everything. I'd like to posit a simple way to spot such errors, with the caveat that it may not work for every case.
There's an old game called Two Truths and a Lie. I'd bet almost everyone's heard of it, but I'll summarize it just in case. A person makes three statements, and the other players must guess which of those statements is false. The statement-maker gets points for fooling people, people get points for not being fooled. That's it. I'd like to propose a rationalist's version of this game that should serve as a nifty check on certain Affective Death Spirals, runaway Theory-Of-Everythings, and Perfectly General Explanations. It's almost as simple.
Say you have a theory about human behaviour. Get a friend to do a little research and assert three factual claims about how people behave that your theory would realistically apply to. At least one of these claims must be false. See if you can explain every claim using your theory before learning which one's false.
If you can come up with a convincing explanation for all three statements, you must be very cautious when using your One Theory. If it can explain falsehoods, there's a very high risk you're going to use it to justify whatever prior beliefs you have. Even worse, you may use it to infer facts about the world, even though it is clearly not consistent enough to do so reliably. You must exercise the utmost caution in applying your One Theory, if not abandon reliance on it altogether. If, on the other hand, you can't come up with a convincing way to explain some of the statements, and those turn out to be the false ones, then there's at least a chance you're on to something.
Come to think of it, this is an excellent challenge to any proponent of a Big Idea. Give them three facts, some of which are false, and see if their Idea can discriminate. Just remember to be ruthless when they get it wrong; it doesn't prove their idea is totally wrong, only that reliance upon it would be.
Edited to clarify: My argument is not that one should simply abandon a theory altogether. In some cases, this may be justified, if all the theory has going for it is its predictive power, and you show it lacks that, toss it. But in the case of broad, complex theories that actually can explain many divergent outcomes, this exercise should teach you not to rely on that theory as a means of inference. Yes, you should believe in evolution. No, you shouldn't make broad inferences about human behaviour without any data because they are consistent with evolution, unless your application of the theory of evolution is so precise and well-informed that you can consistently pass the Two-Truths-and-a-Lie Test.
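The test procedure above can be sketched as a toy function. Everything here is invented for illustration: `explains` is any predicate standing in for "my theory can produce a convincing explanation of this claim," and the claim strings are placeholders, not real data.

```python
def two_truths_and_a_lie(explains, claims, lie_index):
    """Toy verdict on whether a theory discriminates truths from the lie."""
    explained = [explains(c) for c in claims]
    if all(explained):
        # The theory explains the falsehood as readily as the truths:
        # don't rely on it for inference.
        return "unreliable"
    unexplained = [i for i, ok in enumerate(explained) if not ok]
    if unexplained == [lie_index]:
        # It failed to explain exactly the false claim: some evidence
        # of real predictive content.
        return "promising"
    return "inconclusive"

# A Perfectly General Explanation accepts everything...
print(two_truths_and_a_lie(lambda c: True, ["t1", "t2", "lie"], 2))
# ...while a theory that balks only at the lie earns some trust.
print(two_truths_and_a_lie(lambda c: c != "lie", ["t1", "t2", "lie"], 2))
```

The point the sketch makes concrete: a "theory" that returns an explanation for every input carries zero information, exactly as the post argues.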
Hypothetical Paradoxes
When we form hypotheticals, they must use entirely consistent and clear language, and avoid hiding complicated operations behind simple assumptions. In particular, with respect to decision theory, hypotheticals must employ a clear and consistent concept of free will, and they must make all information available to the theorizer available to the decider in the question. Failure to do either of these can make a hypothetical meaningless or self-contradictory if properly understood.
Newcomb's problem and the Smoking Lesion fail on both counts. I will argue that hidden assumptions in both problems imply internally contradictory concepts of free will, and thus both hypotheticals are incomprehensible and irrelevant when used to contradict decision theories.
And I'll do it without math or programming! Metatheory is fun.
Utilons vs. Hedons
Related to: Would Your Real Preferences Please Stand Up?
I have to admit, there are a lot of people I don't care about. Comfortably over six billion, I would bet. It's not that I'm a callous person; I simply don't know that many people, and even if I did I hardly have time to process that much information. Every day hundreds of millions of incredibly wonderful and terrible things happen to people out there, and if they didn't, I wouldn't even know it.
On the other hand, my professional goals deal with economics, policy, and improving decision making for the purpose of making millions of people I'll never meet happier. Their happiness does not affect my experience of life one bit, but I believe it's a good thing and I plan to work hard to figure out how to create more happiness.
This underscores an essential distinction in understanding any utilitarian viewpoint: the difference between experience and values. One can value unweighted total utility. One cannot experience unweighted total utility. It will always hurt more if a friend or loved one dies than if someone you never knew in a place you never heard of dies. I would be truly amazed to meet someone who is an exception to this rule and is not an absolute stoic. Your experiential utility function may have coefficients for other people's happiness (or at least your perception of such), but there's no way it has an identical coefficient for everyone everywhere, unless that coefficient is zero. On the other hand, you probably care in an abstract way about whether people you don't know die or are enslaved or imprisoned, and may even contribute some money or effort to prevent such from happening. I'm going to use "utilons" to refer to units of value utility and "hedons" to refer to units of experiential utility; I'll demonstrate shortly that this is a meaningful distinction, and that our valuing utilons over hedons explains much of why our moral reasoning appears to fail.
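The coefficient claim can be made concrete with a toy weighted sum. All names and weights below are invented for illustration; the only point is structural: hedons weight people unequally, while utilons can weight everyone identically.

```python
# Hypothetical happiness levels for three people (arbitrary units).
happiness = {"friend": 0.9, "stranger_a": 0.4, "stranger_b": 0.7}

# Experiential utility (hedons): coefficients fall off sharply for
# people you don't know; they are never identical for everyone.
hedon_weights = {"friend": 1.0, "stranger_a": 0.01, "stranger_b": 0.01}
hedons = sum(hedon_weights[p] * h for p, h in happiness.items())

# Value utility (utilons): one can value everyone's happiness equally,
# even though one cannot experience it that way.
utilons = sum(happiness.values())

print(round(hedons, 3), round(utilons, 3))  # strangers barely move hedons
```

Raising a stranger's happiness barely moves the hedon total but moves the utilon total just as much as a friend's would, which is the experience/values gap the post is pointing at.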
Not Technically Lying
I'm sorry I took so long to post this. My computer broke a little while ago. I promise this will be relevant later.
A surgeon has to perform emergency surgery on a patient. No painkillers of any kind are available. The surgeon takes an inert saline IV and hooks it up to the patient, hoping that the illusion of extra treatment will make the patient more comfortable. The patient asks, "What's in that?" The doctor has a few options:
- "It's a saline IV. It shouldn't do anything itself, but if you believe it's a painkiller, it'll make this less painful."
- "Morphine."
- "The strongest painkiller I have."
- The first explanation is not only true, but maximizes the patient's understanding of the world.
- The second is obviously a lie, though in this case it is a lie with a clear intended positive effect: if the patient thinks he's getting morphine, then, thanks to the placebo effect, there is a very real chance he will experience less subjective pain.
- The third is, in a sense, both true and a lie. It is technically true, but it's somewhat arbitrary; the doctor could just as easily have said "It's the weakest painkiller I have," or "It's the strongest sedative I have," or any number of other technically true but misleading statements. The statement is clearly intended to mislead the hearer into thinking the IV is a potent painkiller; it promotes false beliefs without quite being a false statement. It's Not Technically Lying. It deserves most, if not all, of the disapproval that actual lying does; the technical truth does not save it. Because language does not specify single, clear meanings, we can often use words whose obvious meaning is false and whose non-obvious meaning is true, intentionally promoting false beliefs without making false statements.
Religion, Mystery, and Warm, Soft Fuzzies
Reaction to: Yudkowsky and Frank on Religious Experience, Yudkowsky and Frank on Religious Experience Pt 2, A Parable On Obsolete Ideologies
Frank's point got rather lost in all this. It seems to be quite simple: there's a warm fuzziness to life that science just doesn't seem to get, and some religious artwork touches on and stimulates this warm fuzziness, and hence is of value.1 Moreover, understanding this point seems rather important to being able to spread an ideology.
The main problem is viewing this warm fuzziness as a "mystery." This warm fuzziness, as an experience, is a reality. It's part of that set of things that doesn't go away no matter what you say or think about them. Women (or men) will still be alluring, food will still be delicious, and Michelangelo's David will still be beautiful, no matter how well you describe these phenomena. The view that shattering mysteries reduces their value is very much a result of religion trying to protect itself. EY is probably correct that science will one day destroy this mystery as it has so many others, but because it is an "experience we can't clearly describe" rather than an actual "mystery," the experience will remain. The argument is with the description, not the experience; the experience is real, and experiences of its nature are totally desirable.
The second, sub-point: Frank thinks that certain religious stories and artwork may be of artistic value. The selection of the story of Job is unfortunate, but both speakers value it for the same reason: its truth. One sees it as true (and inspiring) and likes it, the other sees it as false (and insidious) and hates it. I think both agree that if you put it on the shelf next to Tolkien, and rational atheists still buy it and enjoy it, hey, good for Job. And if not, well, throw it out with the rest of the trash.
Masochism vs. Self-defeat
Follow up to: Is masochism necessary?, Stuck in the middle with Bruce
Masochism has two very different meanings: enjoyment of pain, and pursuit (not enjoyment) of suffering.
As a rather blunt example of this distinction, consider a sexual masochist. If his girlfriend ties him up and beats him, he'll experience pain, but he certainly won't suffer; he'll probably enjoy himself immensely. Put someone with vanilla sexual tastes in his place, and he would experience both pain and suffering.
Bruce-like behaviour is best understood as pursuit of suffering. People undermine themselves or set themselves up to lose. They may do it so that they have a comfortable excuse, or because they are used to failing and afraid of being happy, or for many other reasons. Most of us do this to some degree, however slight, and it's something we want to avoid.1 Pursuit of suffering, quite simply, gets in the way of winning, and, much like akratic behaviour, it is something that we should try desperately to find and destroy, because we should be happier without it.
This is very, very different from enjoyment of pain. If you like getting beaten up, or spicy foods, or running marathons, this has no effect on whether you win; these become a kind of winning. The fact that these activities cause suffering in some people is wholly irrelevant. For those who enjoy them, they create happiness, and obtaining them is, in a sense, a form of winning. Because of this, there's no reason to try to catch ourselves engaging in them or to worry about engaging in them less. It does not seem like people would be happier if they lost these preferences.2
Indeed, given that these tastes require some level of initial exposure, and that (in the sexual case) strong social taboos work against them, it seems quite likely that masochistic behaviour isn't engaged in enough, though I admit I may be going too far.
Edit: As a point of clarification, "Bruce-like" behaviour may be overbroad. Some people set themselves up to lose because, for whatever reason, they genuinely like losing. That isn't pursuit of suffering, because there's no suffering. However, we do sometimes undermine ourselves when we want to win. The precise cause of this is, for my purposes, immaterial. This is what I'm referring to by "pursuit of suffering," and my entire point is that it is quite distinct from enjoyment or pursuit of pain, and that this difference is worth noticing.
A proof of the utilitarian benefit of sadism is left to the reader, or as the topic for a follow-up post if people like this one.
1 - If we actually enjoy failure, such that presented with the simple choice of win/lose, we repeatedly chose lose, that's a separate subject and would fall under another description, like "enjoyment of failure." This is something that one might be happier without, but that's really another issue for another post.
2 - This is not to say that some people shouldn't engage in them less. There are people who engage in self-destructive behaviour. Some use sex as a means of escape. Some anorexics exercise compulsively. But the fact that these can be unhealthy in specific circumstances is of no relevance to the greater population that enjoys them responsibly.