The classic criticism of the lottery is that the people who play are the ones who can least afford to lose; that the lottery is a sink of money, draining wealth from those who most need it. Some lottery advocates, and even some commenters on Overcoming Bias, have tried to defend lottery-ticket buying as a rational purchase of fantasy—paying a dollar for a day’s worth of pleasant anticipation, imagining yourself as a millionaire.
But consider exactly what this implies. It would mean that you’re occupying your valuable brain with a fantasy whose real probability is nearly zero—a tiny line of likelihood which you, yourself, can do nothing to realize. The lottery balls will decide your future. The fantasy is of wealth that arrives without effort—without conscientiousness, learning, charisma, or even patience.1
Which makes the lottery another kind of sink: a sink of emotional energy. It encourages people to invest their dreams, their hopes for a better future, into an infinitesimal probability. If not for the lottery, maybe they would fantasize about going to technical school, or opening their own business, or getting a promotion at work—things they might be able to actually do, hopes that would make them want to become stronger. Their dreaming brains might, in the 20th visualization of the pleasant fantasy, notice a way to really do it. Isn’t that what dreams and brains are for? But how can such reality-limited fare compete with the artificially sweetened prospect of instant wealth—not after herding a dot-com startup through to IPO, but on Tuesday?
Seriously, why can’t we just say that buying lottery tickets is stupid? Human beings are stupid, from time to time—it shouldn’t be so surprising a hypothesis.
Unsurprisingly, the human brain doesn’t do 64-bit floating-point arithmetic, and it can’t devalue the emotional force of a pleasant anticipation by a factor of 0.00000001 without dropping the line of reasoning entirely. Unsurprisingly, many people don’t realize that a numerical calculation of expected utility ought to override or replace their imprecise financial instincts, and instead treat the calculation as merely one argument to be balanced against their pleasant anticipations—an emotionally weak argument, since it’s made up of mere squiggles on paper, instead of visions of fabulous wealth.
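To make that arithmetic concrete, here is a minimal sketch of the expected-value calculation the brain declines to run. The numbers are illustrative assumptions, not figures from this essay: a one-dollar ticket and jackpot odds on the order of one in a hundred million.

```python
# Expected value of a lottery ticket, using illustrative (assumed) figures.
# None of these numbers come from the essay; they are rough placeholders.

ticket_price = 1.00           # dollars
jackpot = 100_000_000.00      # dollars, ignoring taxes and lump-sum discounts
p_jackpot = 1 / 146_000_000   # roughly the odds of a large multi-state jackpot

expected_winnings = p_jackpot * jackpot
expected_value = expected_winnings - ticket_price

print(f"Expected winnings per ticket: ${expected_winnings:.4f}")   # ~$0.6849
print(f"Expected value per ticket:    ${expected_value:.4f}")      # ~-$0.3151
```

Even with smaller prizes left out, the point is the scale: the pleasant anticipation of the jackpot would have to be discounted by a factor of roughly 10^-8 to match the calculation, which is precisely the operation the brain refuses to perform.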
This seems sufficient to explain the popularity of lotteries. Why do so many arguers feel impelled to defend this classic form of self-destruction?2
The process of overcoming bias requires (1) first noticing the bias, (2) analyzing the bias in detail, (3) deciding that the bias is bad, (4) figuring out a workaround, and then (5) implementing it. It’s unfortunate how many people get through steps 1 and 2 and then bog down in step 3, which by rights should be the easiest of the five. Biases are lemons, not lemonade, and we shouldn’t try to make lemonade out of them—just burn those lemons down.
1See Po Bronson, “How Not to Talk to Your Kids,” New York, 2007, http://nymag.com/news/features/27840.
2See “Debiasing as Non-Self-Destruction.” http://lesswrong.com/lw/hf/debiasing_as_nonselfdestruction.