The lottery came up in a recent comment, with the claim that the expected return is negative - and the implicit conclusion that it's irrational to play the lottery.  So I will explain why that conclusion doesn't follow.

It's convenient to reason using units of equivalent value.  Dollars, for instance.  A utility function u(U) maps some bag of goods U (which might be dollars) into a value or ranking.  In general, for k > 1, u(kn) / u(n) < k.  This is because marginal utility (typically) diminishes.  The marginal utility to you of your first dollar is much greater than the marginal utility to you of your 1,000,000th dollar, because the first dollar increases the possible actions available to you much more than your 1,000,000th dollar does.

Utility functions are sigmoidal.  A serviceable utility function over one dimension might be u(U) = k * ([1 / (1 + e^(-U))] - .5).  It's steep around U=0, and shallow for U >> 0 and U << 0.
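To make that shape concrete, here is a minimal sketch in Python. The ticket price, jackpot, odds, and the dollar scale of the sigmoid are all made-up illustrative numbers, not a model of any real lottery; the point is only that the same ticket can have a negative expected dollar value and yet a positive expected utility change for someone far enough into the flat negative tail.

```python
import math

def u(x, k=1.0, scale=10_000.0):
    """Sigmoid utility in the post's form: steep near 0, flat for |x| >> 0.
    'scale' sets how many dollars count as 'near zero' (an assumption)."""
    return k * (1.0 / (1.0 + math.exp(-x / scale)) - 0.5)

# Hypothetical lottery: $5 ticket, $1M jackpot, 1-in-10-million odds.
ticket, jackpot, p_win = 5.0, 1_000_000.0, 1e-7

print("expected dollar gain:", p_win * jackpot - ticket)   # -4.9, always negative

for wealth in (-100_000, 0, 100_000):    # deep in debt, broke, comfortable
    eu_play = p_win * u(wealth - ticket + jackpot) + (1 - p_win) * u(wealth - ticket)
    print(wealth, eu_play - u(wealth))   # > 0 only for the deeply indebted player
```

With these particular numbers, only the player who is $100,000 in debt comes out ahead in expected utility; the broke player and the comfortable player both come out behind, even though all three face the same expected dollar loss.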

Sounds like I'm making a dry, academic mathematical point, doesn't it?  But it's not academic.  It's crucial.  Because neglecting this point leads us to make elementary errors such as asserting that it isn't rational to play the lottery or become addicted to crack cocaine.

For someone with $ << 0, the marginal utility of another $5 is minimal.  They're probably never going to get out of debt; someone has a lien on their income and it's going to be taken from them anyway; and if they're $5 richer, it might mean they lose $4 in government benefits.  It can be perfectly reasonable, in terms of expected utility, for them to play the lottery.

Not in terms of expected dollars.  Dollars are the input to the utility function.

Rationally, you might expect that u(U) = 0 for all U < 0.  Because you can always kill yourself.  Once your life is so bad that you'd like to kill yourself, it could make perfect sense to play the lottery, if you thought that winning it would help.  Or to take crack cocaine, if it gives you a few short intervals over the next year that are worth living.
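To spell out the arithmetic under that assumption: if every outcome of not playing counts as u = 0, then a $5 ticket has expected utility p × u(win) + (1 − p) × 0 = p × u(win), while keeping the $5 has expected utility exactly 0.  As long as winning would actually lift you above the threshold, the ticket weakly wins no matter how tiny p is.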

Why is this important?

Because we look at poor folks playing the lottery, and taking crack cocaine, and we laugh at them and say, Those fools don't deserve our help if they're going to make such stupid decisions.

When in reality, some of them may be making much more rational decisions than we think.

If that doesn't give you a chill, you don't understand.

 

(I changed the penultimate line in response to numerous comments indicating that the commenters reserve the word "rational" for the unobtainable goal of perfect utility maximization.  I note that such a definition defines itself into being irrational, since it is almost certainly not the best possible definition.)

100 comments

1) Lottery tickets are bought using income that is after tax, after debt, and after loss of government benefits.

2) Many people buy more than one lottery ticket; they spend hundreds of dollars per year or more.

3) There was a period during which poor folks had reason to legitimately distrust banks and played the illegal numbers game as a sort of stochastic savings mechanism, up to 600-to-1 payouts on 1000-to-1 odds, which meant they did get large units of cash occasionally. Post-FDIC this is no longer a realistic motive and the odds on the government lotteries are worse.
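For reference, the arithmetic behind that comparison: a 600-to-1 payout on 1-in-1000 odds returns 600 × (1/1000) = $0.60 in expectation per dollar staked, and the claim is that government lotteries return a smaller fraction of the money bet than that.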

4) Yes, your life can suck, yes, the lottery can seem like the only way out. But this is not a reasoned decision based on having literally no better life-improving use for hundreds of after-tax dollars. It is based on the lure and temptation of easy money to a mind that can't multiply.

5) Those who buy tickets will not win the lottery. If you think the chance is worth talking about, you've fallen prey to the fallacy yourself. In ordinary conversation odds of one in a hundred million of being wrong would correspond to a Godlike level of calibrated confidence. Therefore I say simply, "You WILL NOT win th... (read more)

5PhilGoetz
Point #1 is wrong. Point #2 is consistent with my idea. Point #4 is not a point, but a conclusion presented as a point. Point #5 requires reformulating rationalism around something other than expected utility in order for it to be right. Point #3 is interesting.

If one person (me, for instance) observes a phenomenon, and then proposes a theory that partly explains that phenomenon, and gives reasons why the assumptions required are valid, and shows that the proposed mechanism has the proposed results given the assumptions; and gives a testable hypothesis and shows that his theory passes at least that test,

Then it is unhelpful to "critique" the theory by insisting that some other mechanism that also has the same effect must account for all of the effect.

Can we all please be very careful about making arguments of the form (A=>B, C=>B, C) => not(A)?

(You can use such an argument to say that if A=>B and C=>B, then C diminishes the evidence for A. That is most useful when B has a binary truth-value. When B is assigned not a truth-value, but a number indicating how often B goes on in the real world, and you have no quantitative knowledge of how much of the observed B C accounts for, then A=>B, C=>B, C just diminishes the expected proportion of the observed B that is accounted for by A. You can't leap to the conclusion not(A).)
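The "evidence-stealing" pattern being argued over here is the standard explaining-away effect, and a minimal numeric sketch (Python, with made-up priors and likelihoods rather than anything calibrated to lotteries) shows both halves of the disagreement: learning the competing cause C pulls P(A) back down, but toward its prior, not to zero.

```python
from itertools import product

# Explaining away: A and C are independent candidate causes of an observation B.
p_a, p_c = 0.3, 0.3                      # made-up priors for the two causes

def p_b(a, c):
    return 0.9 if (a or c) else 0.05     # B is likely if either cause is present

def posterior_a(observe_c=None):
    """P(A | B), optionally also conditioning on C."""
    num = den = 0.0
    for a, c in product([True, False], repeat=2):
        if observe_c is not None and c != observe_c:
            continue
        w = (p_a if a else 1 - p_a) * (p_c if c else 1 - p_c) * p_b(a, c)
        den += w
        if a:
            num += w
    return num / den

print(posterior_a())                     # P(A | B)    ~= 0.56: B supports A
print(posterior_a(observe_c=True))       # P(A | B, C)  = 0.30: back to A's prior,
                                         # lowered but not driven to zero
```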
5Eliezer Yudkowsky
Please amplify on "#1 is wrong".

This is a very common conversation in science. Some of it is conducted improperly, which is annoying, but I would hardly categorize the whole thing as unhelpful. In particular, the "improper" critiques usually consist of hypothesizing more and more elaborate hidden mechanisms with no evidence to support them as alternatives.

But we know hyperbolic discounting exists. We know that people are insensitive to the smallness of small probabilities. When the other mechanism is nailed down by other evidence (hyperbolic discounting (for crack), or neglect of the tinyness of tiny odds (for lottery tickets)) and the new mechanism is not known, then A->B, C->B, C steals the evidence that B provides for A. You need to provide new D with A->D, C!->B. Where the implication from C to B is imperfect, B goes on providing some trickle of evidence to A, but if the implications are equally strong then the trickle does not distinguish between A and C as opposed to other hypotheses, and the prior odds win out.

In particular, the notion that ticket buyers really are making an expected utility calculation says that decreasing the odds of a lottery win by a factor of 10 (while perhaps multiplying the number of tickets sold by 10 and keeping the price constant, so that the number of lottery winners reported in the media is constant) will decrease the price they are willing to pay for a given lottery ticket by a factor of 10. Are you willing to make that prediction? I'd expect ticket sales to remain pretty much the same.
1PhilGoetz
If lottery tickets were bought after paying off debts and after loss of government benefits, no one who was in debt, or who was receiving government benefits, could buy lottery tickets. Unless I misunderstand.

I tried to explain in my previous comment why I think this is the wrong way of looking at it. You're speaking as if B is a proposition with a truth-value that has a single cause. However, I think my explanation was not quite right either.

The weakest, most obviously true reply is that this is not a Boolean net; B does not have a single cause; and A => B and C => B can both be having an effect. It's even possible, in the real-valued non-Boolean world, to have (remember this is not Boolean; this is more like a metabolic network) A > 0, C > 0, A => B, C => B, B < 0.

A reply that is a little stronger ( = has more consequences), and a little less clearly correct, is that your argument for C => B is not as good as my argument for A => B, so who's stealing whose evidence?

The strongest, least-clear reply is that we have priors in favor of both A => B and C => B. Because they're both just-so stories, and we have no quantitative expectations of how much of an increase in B either would provide; and, unlike when B is a truth-value, there's no upper limit on how large B can get; A or C can't steal much evidence from each other without some quantitative prediction. All the info you have is that A and C would both make B > 0, and B > 0. If C accounts for x points of B, and B = x + y, then this knowledge can increase the probability of A. C, C => B diminishes the probability of A in the absence of knowledge about the value of B and the value of B explained by C, but by so little compared to the priors, that presenting it as an argument against an argument from principles is misleading.
0PhilGoetz
That's an interesting point.

* If, as I said in my post, it is possible for all situations in which utility < 0 to be considered equivalent because one can commit suicide, then you would predict that ticket sales would remain nearly the same.
* I don't claim that they are all making a good utility calculation. But who does? I claim that more of their behavior is attributable to utility calculations than is commonly believed.
3Eliezer Yudkowsky
On this theory the "rational poor" should not spend money on anything except lottery tickets, then commit suicide.
3bogdanb
About point 5: I've encountered this idea quite often. And I agree, but only if “win the lottery” means winning the big prize. I've never seen the consideration* that, in addition to the one (or, statistically, fewer than one) “jackpot”, there are in most lotteries relatively large numbers of consolation prizes. (*: this doesn't mean that it's absent; it may be included in the general calculations, but I've never seen the point being made explicit.)

In terms of expected dollars this part doesn't change much (it's still sub-unitary, since lotteries don't generally go bankrupt), but in terms of expected utility as discussed in the post, and in particular with respect to your fifth point, it seems very significant. On the monetary side, even payoffs of a few hundred dollars may have highly “distorted” utilities for some persons. And on the epistemological side, probabilities of one in a few thousand (even more for lower payoffs) are much more relevant than one in a hundred million. That doesn't mean that lottery players actually do the math, or base their decisions on more than intuition, but at such relatively lower levels of uncertainty it's not as obvious that the concept is completely invalid. Also, I expect there would be many takers for any winner-takes-all lottery, too, but I'd be surprised if the number wasn't significantly lower, all else being equal.
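To make the consolation-prize point concrete, here is a sketch with an entirely hypothetical prize table (not any real lottery's odds, and assuming a one- or two-dollar ticket). The smaller prizes arrive at odds a player can actually reason about, and contribute a comparable share of the expected value, while the total expected value still stays below the ticket price.

```python
# Hypothetical prize table, for illustration only:
prizes = [
    (100_000_000, 1 / 300_000_000),   # jackpot
    (1_000,       1 / 10_000),        # consolation prizes at far shorter odds
    (100,         1 / 2_000),
    (5,           1 / 50),
]

for amount, p in prizes:
    print(f"${amount:>11,}  odds 1 in {1 / p:>13,.0f}  EV contribution ${amount * p:.3f}")

print("EV per ticket:", sum(a * p for a, p in prizes))
# Here the non-jackpot prizes contribute $0.25 of the $0.58 expected value, and do so
# at odds between 50-to-1 and 10,000-to-1 rather than hundreds of millions to one.
```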

FWIW, Charles Karelis makes this argument extensively in his book The Persistence of Poverty.

While it's plausible that utility functions are sigmoidal, it's not obviously true, and it's certainly not true of many of the utility functions generally used in the literature.

Moreover, even if experienced-utility (e.g. emotional state) functions are sigmoidal, that doesn't imply that decision-utility functions are, except in the special case that individuals are risk-neutral with respect to experienced utility. More generally than that, a consistent decision-utility function can be any positive monotonic transform of an experienced utility function.

EDIT: I should have added that the implication of that last point is that you can rationalize a lot of behavior just by assuming a particular level of risk preference. You can't rationalize literally anything (consistency is still a constraint), but you can rationalize a lot. All of this makes it especially important to argue explicitly for the particular form of happiness/utility function you're relying on.

(EDITED again to hopefully overcome ambiguities in the way different people are using the terms happiness and utility.)
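A standard textbook illustration of that last point (not from the comment itself): u(x) = x and v(x) = √x are positive monotonic transforms of one another over positive amounts, so they rank every sure outcome identically, yet u accepts a 50/50 gamble between $0 and $100 over a sure $49 (expected value 50 > 49) while v rejects it (0.5 × √100 = 5 < √49 = 7). The ordinal ranking is the same; the risk attitude, and hence the gambling behavior it can rationalize, is not.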

IAWY right up to the penultimate sentence. Humans continuously modify their utility functions to maintain a steady level of happiness. A change in your utility function's input--like winning the lottery, or suffering a permanent injury--has only a temporary effect. The day you collect your winnings, you're super-happy; a year later, you're no happier than you were when you bought the ticket. If you're considering picking up a crack habit, you had better realize that in a year your baseline happiness will be no higher than it is now, despite all the thi... (read more)

-1PhilGoetz
That does introduce another level of complication. Utility functions assume a static model. They are not happiness functions. We talk about maximizing utility all the time on LW, when really we want to maximize happiness. Maximizing your happiness is a higher level of rationality than maximizing your utility. I think it's still okay to sometimes define "rational" as maximizing expected utility. (I don't think foreign aid has anything to do with the delta-nature of happiness, btw.)
7Nick_Tarleton
If you "really want to maximize" X, how is X not utility?
1bogdanb
I think the point Phil tries to make is the difference between “instantaneous utility”, that is a function on things at some point in time (actually, phase space), and the “general utility”, which is a function that also has time (or position in phase space) as an argument. While not immediately obvious, I think his naming choice could be worse. According to my non-scientific poll of one (me), when seeing the word “happiness” people think of time as a parameter instinctively, but consider specific instants for “utility” unless there are other cues in the context. A strict definition such as yours would require coining a few new words for the discussion. That's not a bad thing per se, I just can't think of any that have the advantage of being already used as such in general vocabulary.
1conchis
This is an area that is generally plagued with ambiguities and inconsistent usage, which makes it even more important to be clear what we mean. I think this will usually require the use of adjectives/modifiers, rather than attempting to define already ambiguous words in our own idiosyncratically-preferred ways. Instantaneous vs. life-time (or smaller life-slice) utility seems to make a clear distinction; decision-utility (i.e. the utility embodied in whatever function describes our decisions) vs. experienced utility (e.g. happiness or other psychological states) seem to make clear-ish distinctions. (Though if we care about non-experienced things, then maybe we need to further distinguish either of these from true-utility.) But using "utility" and "happiness" to distinguish between different degrees of time aggregation seems unnecessarily confusing to me.
0PhilGoetz
Yes, thanks; that's what I meant.
1jimrandomh
If we really wanted to maximize happiness, then we'd jump at the chance to wirehead ourselves. We don't, because happiness is only an indicator of what we desire, not the thing we desire itself. Making yourself happier using drugs is like making yourself wealthier by telling your bank to lie to you on account statements.
2thomblake
It seems as though you're equivocating over 'happiness'. You suggest that happiness is just an indicator, not the thing we desire itself. Your analogy suggests otherwise. Having your bank lie to you on your statements does not actually make you wealthier. Similarly, using drugs to feel pleasure doesn't actually make you happier. I prefer the latter usage.
2conchis
Actually, happiness is one of the things I desire; it's just not the only thing I desire. And drug induced happiness can be perfectly real, even if it's not necessarily the optimal way for me to achieve a positive emotional state all things considered. Making myself happier using drugs doesn't seem at all analogous to telling my bank to lie.
1Sideways
Countries that rely heavily on foreign aid risk becoming self-stabilizing systems in which increasing foreign aid to Hypothetistan reduces the incentives for Hypothetistanis to be productive, instead of providing capital they need to act on those incentives. This is by no means a complete explanation--I'm just explaining the analogy between self-stabilizing systems more explicitly.
0PhilGoetz
The specs for happiness require it to be self-stabilizing. Poverty can be self-stabilizing, but doesn't have to be.

I have a non-rhetorical question for you: do you actually think a significant fraction of people playing the lottery and taking crack cocaine actually maximize utility that way?

4PhilGoetz
Maybe. I think that if we see poor people systematically playing the lottery more often than well-off people do, differences in utility functions are at least as good an explanation as differences in intelligence.

* If utility functions are sigmoidal, this in itself would predict poor people to play the lottery much more often than rich people.
* The "poor people are stupid" explanation says that poor people are less likely to grasp how small the probability of winning is. I'm skeptical that IQ 100 or even IQ 115 people grasp such small numbers any better.
* Crack use is high in neighborhoods where people are not just poor, but have a high probability of dying or ending up in prison. Look at the Sandtown Health Profile 2008. A person in Sandtown has a 1 in 6 chance of dying before reaching age 45. For males, it's higher. A man who "lives in Sandtown" is more likely to actually live in prison than in Sandtown. I didn't cherry-pick Sandtown; I chose it because my mom used to work at a day care there. A man living there, deciding whether to take up crack, has fewer expected years of good life lost than someone living in Fairfax.

In general, when we see one group of people consistently engaging in higher levels of behavior that seems irrational to us, there's a good chance that something in their environment makes that behavior more rational for them than for us.
5Nominull
"Crack use is high in neighborhoods where people are not just poor, but have a high probability of dying or ending up in prison." Are you entirely certain you have the arrow of causality pointing in the right direction? This question is rhetorical.
1PhilGoetz
Sure, causality runs both ways. My point is that the idea that crack use is a rational decision predicts that crack use will be higher when the odds of dying or of spending much of your life in prison are higher. And that is what we see. It's a falsifiable test, and the idea passes the test.
2soreff
Are there studies of behavior changes for terminally ill people? That wouldn't probe changes in financial behavior - winning the lottery isn't useful to someone with pancreatic cancer. Do we see recreational drug use rise?
3Eliezer Yudkowsky
"Poor people are stupid" is a strawman, in this case. Human beings in general have trouble grasping low probabilities. Poor people just have further motivations that lead them to grasp harder at this straw.
2PhilGoetz
If we don't believe that the shape of their utility curves makes the lottery have a higher expected utility for poor people than for well-off people, then we are saying that poor people don't have any further motivations than rich people to grasp at this straw.
4conchis
It's possible (indeed, plausible) both that (a) poor people have these utility functions, and therefore more reason to play the lottery; and (b) it's still irrational for them to play the lottery.
3PhilGoetz
Yes. I'm not thinking of rationality as a line that people either cross or don't. If you say that rationality is maximizing your expected utility, then none of us are rational. If they have more reason to play the lottery than we at first thought, then they are more rational than we at first thought.
7Zvi

If everything comes out exactly right, this can make a case for playing the lottery being better than doing nothing risky, but it can't possibly make the case that the lottery isn't massively worse than other forms of gambling. Even if the numbers games are gone, going to a casino offers the same opportunity at far better odds and allows you to choose the point on the curve where gambling stops being efficient. I do think, however, that the point that negative-expectation risks can be rational is well taken.

A good reason for not playing the lottery is that you can get better odds by playing roulette, or using other forms of gambling. I am unimpressed by arguing against gambling in general because its average dollar payoff is negative. That argument is ridiculous.

The discussion about lotteries that I presume led to this thread was correct, though. It didn't talk about expected winnings, it talked about utility. There are cases where playing the lottery has a high utility - and if the utility is too low, then you shouldn't play.

4jimrandomh
Then the problem is with the argument, not the conclusion. A better argument against gambling is to observe what happens to gamblers, who generally end up broke.
4timtyler
That's habitual gamblers. Gambling is OK sometimes - for example, if it helps you to obtain your ferry fare home, thus saving you a long walk.
1PhilGoetz
Roulette doesn't give a big enough payoff to move you up into the steep area of your utility function, so it doesn't get the "a dollar won is worth more than a dollar spent" effect. It would also be a plausible conjecture that, if someone's utility function is sigmoid, and they're on the low end of it, their model of their utility function is an exponential. That would enhance the effect.
2Eliezer Yudkowsky
Just bet 5 times in a row. Still better odds than the lottery.
3PhilGoetz
True. Actually, I don't know if it's true. But it sounds plausible.
2AllanCrossman
One problem with that is that, if you've won at roulette a few times in a row, you're now going to be risking quite a lot if you bet it all again. You'll actually end up badly regretting your actions in a lot of cases.
2timtyler
Re: Roulette doesn't give a big enough payoff [...] Bet twice consecutively, then. Roulette's payoffs are flexible enough to give you practically any odds you desire. ...and what about the diminishing utility of money? Often you don't want to trade odds for cash.
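To put rough numbers on "bet n times in a row": a sketch assuming an even-money bet on an American wheel (p = 18/38 per spin) with the whole stake left riding after each win. Roulette can indeed be tuned to almost any odds-versus-payout combination, but each extra doubling multiplies the expected return by 2 × 18/38 ≈ 0.947, so the house edge compounds.

```python
p = 18 / 38                        # even-money bet on an American wheel (assumed)

for n in (1, 5, 10, 17):
    p_win = p ** n                 # chance of winning all n spins in a row
    payout = 2 ** n                # the stake doubles on each win
    print(f"n={n:>2}  odds 1 in {1 / p_win:>10,.0f}  payout {payout:>7,}x  "
          f"expected return {p_win * payout:.3f} per dollar")

# n=5 gives roughly a 1-in-42 shot at 32x (expected return ~0.76 per dollar);
# n=17 gives roughly 1-in-329,000 at 131,072x, but the expected return has
# fallen to ~0.40 per dollar.
```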

I'm skeptical that lottery player utility is well modeled as in the convex section of a sigmoid. I'd want to see more analysis to that effect.

1PhilGoetz
Do you mean you're skeptical that there is diminishing marginal negative utility?
2soreff
I'm not convinced that it is a reasonably common regime to be in for utils(dollars). I think that it might be a reasonably common response to physical trauma: 1001 blows to the head are not as much worse than 1000 blows as the first blow was (particularly if the 100th was fatal...).
1RobinHanson
I mean I'm skeptical of increasing marginal utility of money.
1PhilGoetz
I take that as a yes. I don't have any data; I just have the reasoning that I've already presented, plus more of the same.

My attempt to liven up this post by talking about crack and lotteries has killed many minds here. If you're driven to write a long reply about crack and lotteries, perhaps you can spare one sentence in it to respond to this more general point:

We are inclined to use expected return when we should use expected utility.
This quick-and-dirty reasoning works well when we are reasoning, as we often are, about small changes in utility for ourselves or for other people in our same social class; because a line is a good local approximation to a curve. It works les... (read more)

7Eliezer Yudkowsky
A well-known point that goes back to Bernoulli and the very dawn of the expected utility formalism - except that conventionally this is illustrated by explaining why you should not buy lottery tickets that seem to have a positive expected return. Your main post is rather an attempt to defend behavior as "rational" which on the surface appears to be "irrational".

This may make sense when you're looking at a hedge-fund trader who seemingly lost huge amounts of money through "stupid" Black Swan trades, and yet who is, in fact, living comfortably in a mansion based on prior payouts. The fact that he's living in a mansion gives us good reason to suspect that his actions are not so "stupid" as they seemed.

The case for suspecting the hidden rationality of crack users is not so clear-cut. Is it really the case that before ever taking that first hit, the original potential drug user, looking over their entire futures with a clear eye free of such biases as the Peak-End Rule, would still choose the crack-user future?

People in general are crazy. We are, for example, hyperbolic discounters. Sometimes the different behavior of "unusual" people stems not from any added stupidity, but from added motives given their situation. Crack users are not mutants. Their baseline level of happiness is lower, they are more desperate for change, their life expectancy is short; none of this is stupidity per se. But like all humans they are still hyperbolic discounters who will value short-term pleasure over the long-term consequences to their future self. To suppose that being in poverty they must also stop being hyperbolic discounters, so that their final decision is inhumanly flawless and we can praise their hidden rationality, is a failure mode that we might call Pretending To Be An Economist.

Don't blame the readers, you killed your own post: humans in general are flawed beings, and buying lottery tickets is an illustration thereof. Trying to make it come out as an amazing counterintu
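Since hyperbolic discounting carries much of the weight in this exchange, here is a minimal sketch of the preference reversal it produces, using the standard V = R / (1 + k*t) form with made-up reward sizes and a made-up discount parameter.

```python
k = 1.0                                  # assumed discount rate (per day)

def hyperbolic_value(reward, delay_days):
    """Hyperbolic discounting: value of a reward delivered after a delay."""
    return reward / (1 + k * delay_days)

small_soon, large_late = 50, 100         # $50 now-ish vs. $100 a week later

for lead_time in (0, 30):                # choosing today vs. planning a month ahead
    v_small = hyperbolic_value(small_soon, lead_time)
    v_large = hyperbolic_value(large_late, lead_time + 7)
    choice = "small-soon" if v_small > v_large else "large-late"
    print(f"lead time {lead_time:>2} days: small={v_small:6.2f} "
          f"large={v_large:6.2f} -> choose {choice}")

# When both rewards are a month away, the larger-later one is preferred; when the
# smaller one is imminent, the preference flips - the reversal an exponential
# discounter never exhibits.
```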
5CarlShulman
Screwing over your future selves because of hyperbolic discounting, or other people because of scope insensitivity, isn't obviously a failure of instrumental rationality except insofar as one is defecting in a Prisoner's Dilemma (which often isn't so) and rationality counts against that. Those 'biases' look essential to the shapes of our utility functions, to the extent that we have them.
5steven0461
Screwing over other people because of scope insensitivity is a failure of instrumental rationality if (and not only if) you also believe that the importance of someone's not being screwed over does not depend strongly on what happens to people unconnected to that person.
2CarlShulman
Steve, once people are made aware of larger scopes, they are less willing to pay the same amount of money to have effects with smaller scopes. See the references at this OB post.
0[anonymous]
How much less willing? Suppose A would give up only a million times more utility to save B and 10^100 other people than to save B. Would A, if informed of the existence of 10^100 people, really choose not to save B alone at the price of a cent? It seems to me that would have to be the case if scope insensitivity were to be rational. (This isn't my true objection, which I'm not sure how to verbalize at the moment.)
3Z_M_Davis
This issue deserves a main post. Cf. also Michael Wilson on "Normative reasoning: a Siren Song?"
2CarlShulman
Thanks for the link, although it's addressing related but different issues. A hyperbolic discounter can assent to 'locking in' a fixed mapping of times and discount factors in place of the indexical one. Then the future selves will agree about the relative value of stuff happening at different times, placing highest value on the period right after the lock-in.
2PhilGoetz
Just a nitpick: As Carl Shulman observed, this is not irrational. It's just a different discounting function than yours.

Really? So you found a mistake in anything that I wrote? I must have missed it. All I see is you presenting just-so arguments along the lines of either "C causes people to play the lottery, therefore A cannot cause people to play the lottery", or "People are stupid; therefore they cannot be engaging in utility calculations when they play the lottery."
-1PhilGoetz
I'm skeptical that anyone has made this explanation, since lottery tickets never have a positive expected return. You can only mean an "explanation" for people who don't know how to multiply.
2Eliezer Yudkowsky
Would you STOP IT? For the love of Cthulhu! The classic explanation of expected utility vs. expected return deals with hypothetical lottery tickets that have a positive expected return but not positive expected utility.
1PhilGoetz
Okay. Sorry. What I meant was, "Since lotteries always have a negative expected return, I think that maybe the explanations you are talking about are directed at people who think that the lottery has an expected positive return because they don't do the math." Which you just answered. I was not familiar with this classic explanation.
4gjm

Perhaps this just indicates that I lead too sheltered a life, but I think most people don't have $ << 0 in the relevant sense, and that for most people the utility function is concave for $ < 0 just as it is for positive $. So I'm skeptical of the claim that "poor folks" playing the lottery or using crack are generally maximizing their expected utility.

And I can't speak for anyone else, but I don't think I've ever said anything like "those fools don't deserve our help if they're going to make such stupid decisions" about people playing the lottery or taking crack, and if I ever did I'm pretty sure it wouldn't be the desperately poor and/or miserable ones that I had in mind.

2PhilGoetz
So would I be, if I heard someone make that claim. I'll edit the post to clarify that I don't mean that. EDIT: Hmm. While I don't explicitly make that claim, I think it is possible that they are generally doing much better at it than we think they are. Nobody maximizes their expected utility.

I think this post is going to contribute to semantic confusion; when most of us talk about utilons, I think we're talking about the output of a utility function.

4gjm
I concur. I did a quick google for "utilons", and most of the hits I found were (1) from LW or OB, and (2) using "utilons" to mean exactly what Phil is saying it doesn't mean. I don't recall seeing "utilon" in (e.g.) philosophy or economics books with the meaning Phil prefers. Phil, where have you found "utilon" used to mean things like dollars?
1[anonymous]
Utilon isn't a standard word. I'll re-write the post not to use it. The definition of utilon is a tangential issue that I don't care about.
2Alicorn
I've heard "hedons" as units of pleasure ("dolors" for units of pain), although I suppose if we aren't being hedonists then it might be a misleading term.
2PhilGoetz
You may be right. I re-wrote the post not to use the word "utilon". The definition of utilon is a tangential issue.
2steven0461
I agree; "utilons" are units of utility, though "utils" is more standard.
4Paul Crowley
We should make a systematic effort to use standard terminology wherever possible on this site - we worry enough about being a cult without replacing standard terminology with our own.
1SarahNibs
I agree with the main point of the post, but I cannot recall having seen the word "utilons" used to refer to anything except either marginal utility or expected marginal utility, both of which are of course linear in expected marginal utility.

I don't think your examples are that plausible in the real world, at least not in terms of the reasoning you give. In your scenarios, it would be much better to hide the money away somewhere and let it accumulate, pretending to the world (and to Uncle Sam) that you spent it on crack or whatever, than to actually spend it on crack.

Having said that, if we determine the rationality of some behavior relative to the actual utility function of the individual, then we can see that for some (possible) utility functions, it would be rational to play the lottery and... (read more)

I like this post. That's a point I think needed to be made.

Before reading this, the way I saw it was that for quite a lot of people, there's something akin to a potential barrier as it exists in chemical reactions, for what they can expect of their life. Unless you can invest enough X (energy, time, money, etc.), then what you're trying to do won't work on average. You could also see it as an escape velocity, or the break-even point in a chain reaction getting critical.

To illustrate in the case of a lottery, many people can't expect to ever be able to get ... (read more)

To see a plot of the utility function, I posted one here:

http://audi-lesswrong.blogspot.com/

Doesn't this make some very big assumptions about the fixity of people's circumstances? If my life is so bad that smoking crack begins to seem rational, then surely, taking actual steps to improve my life would be more rational. Similarly, I imagine that the $5 spent on a lottery ticket could be better spent on something that was a positive first step toward improving even the worst of circumstances. Seems the only way this wouldn't be true would be if you simply assert, by fiat, that the person's circumstances are immutable, but I'm not sure whether this accords with reality. (One's politics are clearly implicated here.)

0loqi
I don't see how this automatically follows. If U < 0 for all mental states you inhabit except being high on crack, then you should do crack. There may be a discounting effect here, meaning you might want to avoid smoking crack until you have enough resources to smoke even more crack later. Your point seems to imply that "improving your life" would change your utility function, which doesn't really fly as a rational argument.
2AlexU
While I can imagine a situation where one's utility function would be as you described, it's a pretty contrived one, e.g., a destitute crack addict suffering from a painful terminal illness, where the second best choice would be suicide. More importantly, for the typical crack user -- the kind Phil Goetz was referencing -- there's almost always going to be something they could be spending the money on that would give them a higher expected utility over the long run ("bettering one's situation"). It's no small claim to say there isn't.
1loqi
While my example is a bit contrived, Phil said "some of them", not "most of them". I don't understand the typical crack user very well, but I can pretty easily conjecture a ruined mind requiring some quite high threshold of stimulation to enjoy itself. So let's weaken the example, and make them fixable. From there I'd say asserting that they should almost always be able to rationally derive a reliable method of repairing their broken state with higher expected return than smoking crack for the rest of their life is no small claim. But really, I have no idea what it's like to be them.

On a whim, I once played the lottery on the theory that the Many Worlds Interpretation is true, and some branch of me would win. I like to think he's out there somewhere.

(Of course, if MWI really is true, then some other me in some other branch would have played the lottery even if I hadn't, so strictly speaking I didn't even need to...)

1Eliezer Yudkowsky
Did you use a quantum random number generator? There's no law of physics stating that you make all possible decisions in different MWI branches, though sufficiently different people who otherwise resemble you might.
2SoullessAutomaton
All random number generators are quantum, just with very skewed probabilities. Maybe a lot of electrons will spontaneously be somewhere unlikely and cause my computer to miscompute the next term of a Mersenne Twister. This is a somewhat useless point, though...
1Vladimir_Nesov
In this sense, you don't need "random number generators" at all, just wait for your computer to spontaneously transform into a fire-breathing dragon.
1gjm
That's actually quite a good way of deciding when to buy lottery tickets and when not to.
1Paul Crowley
In that case I shall use a quantum random number generator to give myself a 10^-6 chance of playing the lottery :-)
0AllanCrossman
Alas no. I was thinking that, if bought sufficiently far in advance, quantum noise and chaos theory together would ensure that any ticket would win in some branch... (But yes, I see now that making the choice of ticket itself depend on quantum noise would have been better... hmm...)
-3Annoyance
In some paths, you used a quantum number generator to decide... in others, you didn't. In some paths, you conclude that you don't have to do anything because of Many Worlds, and so you simply stop doing. In others, you do not reach that conclusion. In still others, you actively reject it... and in some, you reach the conclusion but continue to do anyway. Even giving up because nothing means anything is meaningless.
0Z_M_Davis
I've done this too.

You haven't explained why relatively happy people play the lottery. The answer is that they can't understand how small the probability of winning is. (Nor can I, by the way; I only understand it mathematically. To make me understand, you could do something like phrase it in terms of coin flips.)
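For scale: odds of one in a hundred million are roughly the odds of calling 27 fair coin flips in a row correctly, since 2^27 ≈ 1.3 × 10^8.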

0PhilGoetz
Fortunately, I'm not obligated to explain everything. :)

"For someone with $ << 0, the marginal utility of $5 to them is minimal. "

I'm a newbie, which will soon be obvious, but I don't think the utility function is being applied correctly. At each value of U (the worth that a person has at his disposal in goods), we have the utility that can be purchased with U. (So u is negative for U<0 because you get negative things for owing money.)

I understand that if someone is greatly in debt, their utility may not change much if you increase or decrease their debt by some amount. This is why the utilit... (read more)

2conchis
I think there's some confusion here as to what the utility function is defined over. And to be fair, the post itself is somewhat confused in this respect. The argument that it might be more or less rational to gamble is an entirely different matter to whether it is more or less rational to smoke crack.

The shape of the utility function over money can make it more or less rational to accept particular money gambles: risk aversion is after all a property of the shape of the utility function. The shape of the utility function over money cannot affect whether specific, non-risky choices about how to spend that money (e.g. whether to smoke crack) are more or less rational. If crack is your best option, that's already reflected in your utility function for money; if it's not, then that too is already built in.

NB: This comment is not as precise as it should be in distinguishing decision-utility, experienced-utility etc. I think the fundamental point is right though.

The real reason not to say "those fools don't deserve our help" is that it doesn't make sense for materialist consequentialists to weight utility based on who deserves what.

3Eliezer Yudkowsky
IAWYC but "consequentialism" of itself, or "materialism" of itself, doesn't stop us from having such a utility function.
1Paul Crowley
Do you know if this is a well-known position in consequentialist philosophy? It seems like it must be, but I only got as far as the Wikipedia page on deserts, and it seems to cover a discussion among deontologists...
4conchis
There's a fair amount of debate about what exactly the formalism of consequentialism excludes or doesn't, and whether it's possible to view deontological views (or indeed any other moral theory) as a subset of consequentialism. The idea that any moral view can be seen as a version of consequentialism is often referred to as "Dreier's conjecture" (see e.g. the discussion here.) Usually, consequentialist aggregation functions impose an anonymity requirement, which seems to discourage desert as a consideration (it requires that the identity of individuals can't matter to what they get). But even that doesn't really exclude it.

Some people who buy lottery tickets argue that a lottery ticket is a small price to pay for the chance of being a millionaire.

While the expected return of the lottery ticket is negative, they place an extra value on the chance of being a millionaire, in addition to the expected return.

For comparison, suppose there is another lottery with the same negative expected return, but the maximum you can win is $5 (corresponding with a much higher probability of winning so that the expected value is the same). Then players will be less interested -- because you've ... (read more)

2Eliezer Yudkowsky
See Lotteries: A Waste of Hope.
1byrnema
I just voted myself down -- I'm realizing that the topic isn't "why do people play the lottery". Phil Goetz is presenting one potential reason for playing the lottery. I don't need to worry about how common that reason is; the topic at hand is to think about that reason and its relationship to rationality.
2PhilGoetz
I voted you back up, because it's a really interesting point. I take it you're saying that they get enjoyment out of holding the ticket between the buying and the drawing.
1byrnema
Thank you. Actually, my point was this: there is value to a gamble that isn't measured by the expected value. The expected value argues that playing the lottery isn't going to make them rich. But keeping the dollar isn't going to make them rich either. At least spending their dollar playing the lottery gives them the chance of being rich.

When I made this argument I was actually thinking of impossible gambles that are made on the scale of evolution, say. For every million that make a gamble and fail (for example, to escape an island), eventually one wins and validates the gambles of all (survives the journey and populates the continent). I was reluctant to provide this example because I definitely don't want to imply that it's an evolutionary advantage or a justified sacrifice for the good of the group. (yuck) Perhaps an analogy from economics will balance -- you can demand more for an opportunity that can't be purchased any other way.
1PhilGoetz
There is at least one parallel in evolution. Many bacteria have heat shock proteins that inhibit DNA proofreading. That means that they respond to stress by increasing their mutation rate. It will probably kill them, but if the entire colony does it, it's more likely to survive. It's not quite the same. If you count the payoff to the bacteria to include the lives of all its descendants, then it may still be "rational". But maybe it is the same. Presumably, we instinctively act in a way that counts the utility of all our descendants in our utility functions.

I think your EDIT is much clearer, and more accurate than your original formulation.

In response to the (IMHO unnecessarily snarky, but perhaps I'm reading in too much) explanation for the edit:

It is possible simultaneously to (a) think that "some [lottery players] may be making much more rational decisions than we think"; (b) think that it's still irrational for them to play the lottery; and (c) not define "rational" as "the unattainable goal of perfect utility maximization."

This just means that you think playing the lottery is really silly.

0[anonymous]

Dollars in, utilons out. Otherwise what are dollars?

It may be perfectly rational for crabs in a bucket to pull each other down in an attempt to escape individually... from the perspective of a mere individual.

From a perspective of survival of the tribe, it's suicidal, and irrational to boot.

Crabs, of course, do not have tribes.

1PhilGoetz
What does this have to do with the post?
1janos
At least not when they're already in the bucket.
-1Annoyance
It's not clear that humans in a bucket (metaphorically speaking) care much about the survival of the tribe, either. There are no altruists in foxholes - not for very long, anyhow.