Lotteries & MWI
I haven't been able to find the source of the idea, but I've recently been reminded of:
Lotteries are a way to funnel some money from many of you to a few of you.
This is, of course, based on the many-worlds interpretation: if the lottery has one-in-a-million odds, then for every million timelines in which you buy a lottery ticket, there is one in which you win. There's a certain amount of friction - it's not a perfect wealth transfer, since the operator keeps a cut of the ticket revenue. But, looked at from this perspective, the question of "should I buy a lottery ticket?" seems like it might be slightly more complicated than "it's a tax on idiots".
But I'm reminded of my current .sig: "Then again, I could be wrong." And even if this is, in fact, a valid viewpoint, it brings up further questions, such as: how can the friction be minimized, and the efficiency of the transfer be maximized? Does it matter where the randomness enters the process - does deliberately introducing quantum randomness ensure that at least some of your MWI-selves benefit, as opposed to buying a ticket after the numbers have already been chosen but before they've been revealed, in which case the branching happened at the draw?
How interesting can this idea be made to be?
How much to spend on a high-variance option?
So the jackpot in the Ohio lottery is around 25 million dollars, and the chance of winning it is roughly one in 14 million, with tickets at $1 apiece. It appears to me that roughly a quarter million tickets are sold each drawing; so, supposing you win, the probability of someone else also winning is 1 - (1 - 1/14e6)^250000 ≈ 2%, which does not significantly reduce the expected value of a ticket. So, unless I'm making a silly mistake somewhere, buying lottery tickets has positive expected value. (I find this counterintuitive; where are all the economists who should be picking up this free money? But I digress.)
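A quick sanity check of the arithmetic above. All figures are the post's assumptions (jackpot size, odds, tickets sold), not verified lottery data, and smaller prizes and taxes are ignored:

```python
jackpot = 25e6           # dollars (assumed, from the post)
p_win = 1 / 14e6         # chance per $1 ticket (assumed)
other_tickets = 250_000  # tickets sold per drawing (assumed)

# Chance that at least one other ticket also hits the jackpot,
# forcing a split.
p_split = 1 - (1 - p_win) ** other_tickets
print(f"P(split) = {p_split:.2%}")       # roughly 2%, as the post says

# Expected value of a $1 ticket, assuming a split halves the prize.
ev = p_win * (jackpot * (1 - p_split) + (jackpot / 2) * p_split)
print(f"EV of a $1 ticket = ${ev:.2f}")  # comfortably above $1
```

With these numbers the split risk shaves only about a penny off an expected value near $1.77, so the conclusion in the paragraph holds under its own assumptions.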
I pointed this out to my wife, and said that it might be worth putting a dollar into it; and she very cogently asked, "Then why not make it 100 dollars?" Why not, indeed! Is there any sensible way of deciding how much to put into an option that has a positive expected value, but very low chance of payoff?
Some scary life extension dilemmas
Let's imagine a life extension drug has been discovered. One dose of this drug extends a person's life by 49.99 years. The drug also has a mild cumulative effect: if it is given to someone who has been dosed with it before, it will extend their life by 50 years instead.
Under these constraints the most efficient way to maximize the amount of life extension this drug can produce is to give every dose to one individual. If there were seven billion doses available - one for each person alive on Earth - then giving every person one dose would result in a total of 349,930,000,000 years of life gained. If one person were given all the doses, a total of 349,999,999,999.99 years of life would be gained. Sharing the life extension drug equally would therefore result in a net loss of almost 70 million years of life. If you're concerned about people's reaction to this policy, then we could make it a big lottery, where every person on Earth gambles their dose for a chance at all of them.
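The totals above can be checked directly from the figures given in the paragraph:

```python
population = 7_000_000_000
first_dose = 49.99   # years from a first dose
later_dose = 50.0    # years from each subsequent dose (cumulative effect)

# Everyone gets one dose: every dose is a first dose.
shared = population * first_dose

# One person gets all the doses: one first dose, the rest cumulative.
hoarded = first_dose + (population - 1) * later_dose

print(f"{shared:,.2f}")   # about 349.93 billion years
print(f"{hoarded:,.2f}")  # about 350 billion years
print(hoarded - shared)   # just under 70 million years lost by sharing
```

The gap is 0.01 years per dose times seven billion doses (minus the one dose that is a first dose either way), which is where the "almost 70 million years" comes from.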
Now, one could make certain moral arguments in favor of sharing the drug; I'll get to those later. However, it seems to me that gambling your dose for a chance at all of them isn't rational from a purely self-interested point of view either. You will not win the lottery. Your chances of winning this particular lottery are almost 7,000 times worse than your chances of winning the Powerball jackpot. If someone gave me a dose of the drug, and then offered me a chance to gamble in this lottery, I'd accuse them of Pascal's mugging.
Here's an even scarier thought experiment. Imagine we invent the technology for whole brain emulation. Let "x" equal the amount of resources it takes to sustain a WBE through 100 years of life. Let's imagine that with this particular type of technology, it costs 10x to convert a human into a WBE and it costs 100x to sustain a biological human through the course of their natural life. Let's have the cost of making multiple copies of a WBE once they have been converted be close to 0.
Again, under these constraints it seems like the most effective way to maximize the amount of life extension done is to convert one person into a WBE, then kill everyone else and use the resources that were sustaining them to make more WBEs, or extend the life of more WBEs. Again, if we are concerned about people's reaction to this policy we could make it a lottery. And again, if I was given a chance to play in this lottery I would turn it down and consider it a form of Pascal's mugging.
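To see why the stated costs push so hard toward WBEs, it helps to compare life-years per unit of resource. This is a rough sketch; the ~80-year natural lifespan is my assumption, since the post only says that 100x sustains a biological human for life:

```python
SUSTAIN_WBE = 1        # x per 100 WBE-years (from the post)
CONVERT = 10           # x to convert a human into a WBE (from the post)
SUSTAIN_HUMAN = 100    # x for a full biological life (from the post)
NATURAL_LIFESPAN = 80  # years - my assumption, not in the post

years_per_x_human = NATURAL_LIFESPAN / SUSTAIN_HUMAN  # 0.8 years per x
years_per_x_wbe = 100 / SUSTAIN_WBE                   # 100 years per x

ratio = years_per_x_wbe / years_per_x_human
print(ratio)  # each x buys ~125 times more life as WBE-years
```

Under these assumptions the one-time 10x conversion cost is trivial next to the ongoing difference, which is why the utilitarian calculation in the paragraph comes out the way it does.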
I'm sure that most readers, like myself, would find these policies very objectionable. However, I have trouble finding objections to them from the perspective of classical utilitarianism. Indeed, most people have probably noticed that these scenarios are very similar to Nozick's "utility monster" thought experiment. I have made a list of possible objections to these scenarios that I have been considering:
1. First, let's deal with the unsatisfying practical objections. In the case of the drug example, it seems likely that a more efficient form of life extension will be developed in the future. In that case it would be better to give everyone the drug to sustain them until that time. However, this objection, like most practical ones, seems unsatisfying. It seems like there are strong moral objections to not sharing the drug.
Another pragmatic objection is that, in the case of the drug scenario, the lucky winner of the lottery might miss their friends and relatives who have died. And in the WBE scenario it seems like the lottery winner might get lonely being the only person on Earth. But again, this is unsatisfying. If the lottery winner were allowed to share their winnings with their immediate social circle, or if they were a sociopathic loner who cared nothing for others, it still seems bad that they end up killing everyone else on Earth.
2. One could use the classic utilitarian argument in favor of equality: diminishing marginal utility. However, I don't think this works. Humans don't seem to experience diminishing returns from lifespan in the same way they do from wealth. It's absurd to argue that a person who lives to the ripe old age of 60 generates less utility than two people who die at age 30 (all other things being equal). The reason the DMU argument works when arguing for equality of wealth is that people are limited in their ability to get utility from their wealth, because there is only so much time in the day to spend enjoying it. Extended lifespan removes this restriction, making a longer-lived person essentially a utility monster.
3. My intuitions about the lottery could be mistaken. It seems to me that if I were offered the possibility of gambling my dose of life extension drug with just one other person, I still wouldn't do it. If I understand the probabilities correctly, gambling for a chance at living either 0 or 99.99 additional years is equivalent in expectation to a certain 49.995 additional years of life, which beats the certain 49.99 years I'd have if I didn't gamble. But I still wouldn't do it, partly because I'd be afraid I'd lose and partly because I wouldn't want to kill the person I was gambling with.
So maybe my horror at these scenarios is driven by that same hesitancy. Maybe I just don't understand the probabilities right. But even if that is the case, even if it is rational for me to gamble my dose with just one other person, it doesn't seem like the gambling would scale. I will not win the "lifetime lottery."
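The two-person gamble from point 3 is simple enough to check in a couple of lines, using the dose values from the drug scenario:

```python
keep = 49.99          # years from simply keeping your own dose
both = 49.99 + 50.0   # both doses: the second benefits from the
                      # cumulative effect

# Fair coin flip: win both doses or get nothing.
ev_gamble = 0.5 * both + 0.5 * 0.0
print(ev_gamble)  # 49.995 - slightly better than the sure 49.99
```

So the expected-value case for gambling is real but razor-thin: a 50% chance of losing 49.99 years of life in exchange for 0.005 extra expected years.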
4. Finally, we have those moral objections I mentioned earlier. Utilitarianism is a pretty awesome moral theory under most circumstances. However, when it is applied to scenarios involving population growth, or scenarios where one individual is vastly better at converting resources into utility than their fellows, it tends to produce very scary results. If we accept the complexity of value thesis (and I think we should), this suggests that there are other moral values that are not salient in the "special case" of scenarios with no population growth or utility monsters, but become relevant in scenarios where those features are present.
For instance, it may be that prioritarianism is better than pure utilitarianism, and in this case sharing the life extension method might be best because of the benefits it accords the least off. Or it may be (in the case of the WBE example) that having a large number of unique, worthwhile lives in the world is valuable because it produces experiences like love, friendship, and diversity.
My tentative guess at the moment is that there probably are some other moral values that make the scenarios I described morally suboptimal, even though they seem to make sense from a utilitarian perspective. However, I'm interested in what other people think. Maybe I'm missing something really obvious.
EDIT: To make it clear, when I refer to "amount of years added" I am assuming, for simplicity's sake, that all the years added are years that the person whose life is being extended wants to live, and that they contain a large number of positive experiences. I'm not saying that lifespan is exactly equivalent to utility. The problem I am trying to resolve is that the scenarios I've described seem to maximize the number of positive experiences available to the people in them, even though they involve killing the majority of the people involved. I'm not sure "positive experiences" is exactly equivalent to "utility" either, but it's likely a much closer match than lifespan.
A clever argument for buying lottery tickets
I use the phrase 'clever argument' deliberately: I have reached a conclusion that contradicts the usual wisdom around here, and want to check that I didn't make an elementary mistake somewhere.
Consider a lottery ticket that costs $100 for a one-in-ten-thousand chance of winning a million dollars: expected value, $100. I can take this deal or leave it; of course a realistic ticket actually costs $100 + epsilon, where epsilon covers the arranger's profit, which makes it a bad deal.
But now consider this deal in terms of time. Suppose I've got a well-paid job in which it takes me an hour to earn that $100. Suppose further that I work 40 hours a week, 50 weeks a year, and that my living expenses are a modest $40k a year, making my yearly savings $160k. Then, with 4% interest on my $160k yearly, it would take me about 5.5 years to accumulate that million dollars, or 11000 hours. Also note that with these assumptions, once I have my million I don't need to work any more.
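A quick check of the accumulation time using the figures above. The closed-form annuity formula actually gives closer to 5.7 years than 5.5, but the order of magnitude - roughly 11,000 working hours - holds:

```python
import math

# Figures from the post: $100/hour, 40 hours a week, 50 weeks a year,
# $40k/year living expenses, 4% interest on savings.
hourly_wage = 100
hours_per_year = 40 * 50                                  # 2000 hours
savings_per_year = hourly_wage * hours_per_year - 40_000  # $160k/year

rate = 0.04
target = 1_000_000

# Solve savings * ((1 + r)^n - 1) / r = target for n.
years = math.log(1 + target * rate / savings_per_year) / math.log(1 + rate)

print(f"{years:.2f} years")                   # about 5.7 years
print(f"{years * hours_per_year:.0f} hours")  # about 11,400 working hours
```

The interest on the growing balance is what pulls the time under the naive 1,000,000 / 160,000 = 6.25 years.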
It seems to me that, given the assumptions above, I could view the lottery deal as paying one hour of my life for a one-in-ten-thousand chance to win 11000 hours, expected value, 1.1 hours. (Note that leisure hours when young are probably worth more, since you'll be in better health to enjoy them; but this is not necessary to the argument.)
Of course it is possible to adjust the numbers. For example, I could scrimp and save during my working years, and make my living expenses only 20k; in that case it would take me less than 5 years to accumulate the million, and the ticket goes back to being a bad deal. Alternatively, if I spend more than 40k a year, it takes longer to accumulate the million; in this case my standard of living drops when I retire to live off my 4% interest, but the lottery ticket becomes increasingly attractive in terms of hours of life.
I think, and I could be mistaken, that the reason this works is that the rate at which I'm indifferent between money and time changes with my stock of money. Since I work for 8 hours a day at $100 an hour, we can reasonably conclude that I'm *roughly* indifferent between an hour and $100 at my current wealth. But I'm obviously not indifferent to the point that I'd work 24 hours a day for $2400, nor 0 hours a day for $0. Further, once I have accumulated my million dollars (or more generally, enough money to live off the interest), my indifference level becomes much higher - you'd have to offer me way more money per hour to get me to work.

Notice that in this case I'm postulating a very sharp dropoff, in that I'm happy to work for $100 an hour until the moment my savings account hits seven digits, and then I am no longer willing to work at all; it seems possible that the argument no longer works if you allow a more gradual change in indifference, but on the other hand "save to X dollars and then retire" doesn't seem like a psychologically unrealistic plan either.
Am I making any obvious mistakes? Of course it may well be the case that the actual lottery tickets for sale in the real world do not match the wages-and-savings situations of real people in such a way that they have positive expected value; that's an empirical question. But it does seem in-principle possible for an epsilon chance at one-over-epsilon dollars paid out right away to be of positive expected value after converting to expected hours of life, even though it's neutral in expected dollars. Am I mistaken?
Edit: Wei Dai found the problem. Briefly, the 100 dollars added to my savings would cut more than 1.1 hours off the time I had to work at the end of the 5.5 years.
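Here is my reconstruction of that objection, as a sketch: the $100 ticket price, if banked instead, compounds until retirement, and near the retirement date wealth grows by about $100 per working hour (savings plus interest on nearly $1M), so banking the money buys back more than the ticket's 1.1 expected hours:

```python
import math

# Same assumptions as the post: $160k saved/year, 2000 working
# hours/year, 4% interest, $1M retirement target.
rate = 0.04
savings_per_year = 160_000
hours_per_year = 2_000
years_to_million = math.log(1 + 1_000_000 * rate / savings_per_year) \
                   / math.log(1 + rate)       # about 5.7 years

# What the $100 ticket price is worth at retirement if banked instead.
future_value = 100 * (1 + rate) ** years_to_million   # about $125

# Near retirement, wealth grows by savings plus interest on ~$1M:
# ($160k + $40k) a year spread over 2000 working hours.
dollars_per_hour = (savings_per_year + rate * 1_000_000) / hours_per_year

hours_saved = future_value / dollars_per_hour
print(hours_saved)  # more than the 1.1 expected hours the ticket offers
```

So skipping the ticket brings retirement forward by roughly 1.25 working hours, beating the ticket's 1.1 expected hours, which dissolves the apparent free lunch.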
On Lottery Tickets
I've often seen the issue of lottery tickets crop up on LessWrong, and the consensus seems to be that the behaviour is irrational. It highlights for me a confusion that I've had about what it means for something to be "rational", and I'm seeking clarification. I think it might be useful to break the term down along the distinction I learnt about here: epistemic and instrumental rationality.
Epistemic rationality - This seems to be the most common failure of people who play the lottery. It might be an overt failure of probabilistic reasoning, like someone believing their chances of winning to be 50-50 because they can imagine two potential outcomes. Maybe they believe that they're "due" to win some money, committing the gambler's fallacy. Or it might be a more subtle failure resulting from correct knowledge of the probability but a fundamental inability to represent that number - what we call "scope insensitivity". In the cases where these errors are committed, I think no one would argue that these people are being "rational".
However, what if someone had a perfect knowledge of the probabilities involved? If this person bought a lottery ticket would we still consider this a failure of epistemic rationality? You might say that anyone with perfect information of these probabilities would know that lottery tickets are poor financial investments, but we're not talking about instrumental rationality just yet.
Instrumental rationality - Now we're talking about it. The criterion for rationality in this case is acting in a way that achieves your goals. If your goals in buying a lottery ticket are as one-dimensional as making money, then the lottery is a (very) poor investment, and I don't think anyone would disagree. Here is where I start getting confused, though, because what happens when a lottery ticket satisfies goals other than financial gain? It is conceivable that I could get more than $5's worth (here meaning my subjective and relative sense of what money is worth) of entertainment out of a $5 lottery ticket. What happens here? I hope you can see the more general problem that arises if you'd answer "It's still instrumentally irrational".
I'm not arguing that the lottery is a good idea or that it's socially desirable; I think it tends to drain capital from the people who can least afford it. If you've argued the idea of the lottery to death, pick a different example - it's the underlying concept I'm trying to tease apart. I suppose it boils down to this: if an agent makes no instrumental or epistemic errors of rationality and buys a lottery ticket, can that be irrational?
Boobies and the lottery
So, in the past I have "donated" boobie pictures to boobiethon, an online fundraising event for breast cancer research. This year I entered a drawing for a free custom WordPress theme. And I won it!
You might think that I'm lucky, but actually when I enter lotteries I'm very calculating. Once when I was 10, there was a Beanie Baby lottery at the local library. You could see the jars with the tickets in them for each Beanie Baby. There was one Beanie Baby that had very few tickets in the jar, so I bought exactly one ticket for it. And I won the Beanie Baby.
I saw that for this contest, there were 5 WordPress prizes to be awarded in total, while the other prizes had only one each. And I correctly surmised that others would try to win the more desirable prizes. I also submitted 5 pictures of my boobies: you got one ticket per boobie picture, with a maximum of 5 pictures. That's 5 entries. Donating $10 only got you one ticket. And it cost me nothing :).
It's human nature to go for the lottery item of the thing you actually want. I don't do that. I enter the lotteries for things I think no one else wants and that have multiple awards and that have a low-to-no cost. You're never going to win the monetary prize, because the odds are against you. You CAN win things if the odds are in your favor.