Casey Serin, a 24-year-old web programmer with no prior experience in real estate, owes banks 2.2 million dollars after lying on mortgage applications in order to simultaneously buy eight different houses in different states. He took cash out of the mortgage (applied for larger amounts than the price of the house) and spent the money on living expenses and real-estate seminars. He was expecting the market to go up, it seems.

    That’s not even the sad part. The sad part is that he still hasn’t given up. Casey Serin does not accept defeat. He refuses to declare bankruptcy, or get a job; he still thinks he can make it big in real estate. He went on spending money on seminars. He tried to take out a mortgage on a ninth house. He hasn’t failed, you see, he’s just had a learning experience.

    That’s what happens when you refuse to lose hope.

    While this behavior may seem to be merely stupid, it also puts me in mind of two Nobel-Prize-winning economists . . .

    . . . namely Merton and Scholes of Long-Term Capital Management.

    While LTCM raked in giant profits over its first three years, in 1998 the inefficiencies that LTCM were exploiting had started to vanish—other people knew about the trick, so it stopped working.

    LTCM refused to lose hope. Addicted to 40% annual returns, they took on more and more leverage to exploit tinier and tinier margins. When everything started to go wrong for LTCM, they had equity of $4.72 billion, leverage of $124.5 billion, and derivative positions of $1.25 trillion.

    Every profession has a different way to be smart—different skills to learn and rules to follow. You might therefore think that the study of “rationality,” as a general discipline, wouldn’t have much to contribute to real-life success. And yet it seems to me that how to not be stupid has a great deal in common across professions. If you set out to teach someone how to not turn little mistakes into big mistakes, it’s nearly the same art whether in hedge funds or romance, and one of the keys is this: Be ready to admit you lost.

    Comments (78)

    Excellent post. And very relevant, after Valentine's Day.

    This reminds me of so many stories whose explicit moral is "never give up." The hero keeps trying after everyone told him to quit, and in the end he succeeds, and the audience comes out of the theater reaffirming the value of hope. But, in real life, what a terrible thing to teach people.

    In conventional story structure, even though the hero never gives up, by the second turning point around 3/4 into the story, after having failed, he CHANGES STRATEGY, and succeeds. It's not the stories' fault if the audience doesn't get the message.

    orthonormal:
    Sometimes. More often the hero just tries AGAIN, BUT HARDER.

    Good point. Robin's comment, and Eliezer's post, remind me of this excellent article at The Situationist:

    http://thesituationist.wordpress.com/2007/02/20/dispositionist-situational-characters/

    "Never give up" is bad advice?

    Probability of success if you continue: small. Probability of success if you give up: zero.

    Small is better than zero, am I right?

    On the other hand, this analysis only matters if the cost of failure is no worse than the cost of giving up. The "rational" thing to do would be to give up if and only if (probability of success × utility of success) + (probability of failure × utility of failure) < (utility of giving up).

    There are a lot of things that one can achieve through sheer persistence, but there are othe...
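
    A minimal sketch of the expected-utility comparison above, in Python (the probabilities and utilities are hypothetical placeholders, not numbers from the thread):

```python
# Sketch of the give-up-or-continue rule described in the comment above.
# All numbers are hypothetical placeholders.

def should_give_up(p_success, u_success, u_failure, u_give_up):
    """True if the expected utility of continuing is worse than quitting."""
    expected_continue = p_success * u_success + (1 - p_success) * u_failure
    return expected_continue < u_give_up

# Example: a 5% shot at a big win, but failure costs far more than walking away.
print(should_give_up(p_success=0.05, u_success=100, u_failure=-50, u_give_up=0))
# True: 0.05*100 + 0.95*(-50) = -42.5, which is worse than 0.
```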

    akshatrathi:
    The point of this post was to show that persisting at something while being irrational can only cause harm. Of course, "Never give up" is not bad advice, but Eliezer's advice is to be rational and accept defeat when you need to.

    You're ignoring the probability of succeeding at something else. If you're still doing this, it's zero. If you give up, it's not.

    Of course, that can also be considered a cost of failure, in which case you didn't ignore it.

    Edit: This is equivalent to counting opportunity cost as a cost of failure that's not a cost of giving up, so maybe you weren't ignoring it.

    omalleyt:
    In addition, as Eliezer's earlier post about the math proof shows, if the original reason that led you to believe you could do something was shown to be false, you should almost certainly give up. It's very unlikely you were right for the wrong reasons. If, knowing what you know now, you would never have tried, then you should probably stop.
    Jiro:
    This ignores the case where your "original reason" was an attempt to formalize some informal reason. If your error is in the formalization process and not in the reason itself, being right for the wrong reason is a plausible scenario.
    pnrjulius:
    And what a trick it is!

    This behavior seems similar to that engaged in by gamblers who keep betting, despite heavy losses at the beginning of the night, figuring that if they stay in long enough they might be able to get their money back (and possibly more besides). In some respects, this behavior seems to be primarily motivated by the desire to have what you've already done "count for something". That is, the person is compelled to keep trying at whatever it is they're doing so as not to have to face the fact that they've wasted time and resources -- because if they ...

    The best trading advice I ever read was from Martin Mayer:

    The central difference between the market professional and the sheep to be sheared is the professional's ability to take his losses.
    That's been on my whiteboard since I started trading derivatives professionally 15 years ago.

    It might make sense to ignore evidence that you are likely to fail if it is a competitive situation and the evidence comes from a rival who is likely to gain if you give up.

    As far as Casey Serin was concerned, that didn't apply. The evidence came from a bank that stood to gain if he succeeded.

    "If at first you fail, then try, then try again. After that, stop. There's no use being a damn fool about it."

    "Probability of success if you continue: small. Probability of success if you give up: zero."

    Doug, that's exactly what people say to me when I challenge them on why they buy lottery tickets. "The chance of winning is tiny, but if I don't buy a ticket, the chance is zero."

    I can't think of one single case in my experience when the argument "It has a small probability of success, but we should pursue it, because the probability if we don't try is zero" turned out to be a good idea. Typically it is an excuse not to confront the flaws of a plan that is just plain unripe. You know what happens when you try a strategy with a tiny probability of success? It fails, that's what happens.

    The Simpsons gave us the best advice: "Can't win, don't try."

    I can't think of one single case in my experience when the argument "It has a small probability of success, but we should pursue it, because the probability if we don't try is zero" turned out to be a good idea.

    Er ... isn't that the argument for cryonics?

    From four posts down:

    ...sign up for cryonics, something that has a much better than "small" chance of working.

    That is, the chances of cryonics working is something like six or seven orders of magnitude better than winning the lottery.

    That is, the chances of cryonics working is something like six or seven orders of magnitude better than winning the lottery.

    While that shows the lottery is stupid, it doesn't show that cryonics has made it into smart territory. Things are further complicated by the fact that your odds of winning the lottery are known, certain, and printed on the ticket- your odds of winning the cryonics lottery are fundamentally uncertain.

    shokwave:
    I disagree with 'fundamentally'. It is no more uncertain than any future event; to call all future events fundamentally uncertain could be true on certain acceptable definitions of fundamental, but it's hardly a useful word in those cases. Medical research and testing has been done with cryonics; we have a good idea of exactly what kinds of damage occur during vitrification, and a middling idea of what would be required to fix it. IIRC cryonics institutions remaining in operation, the law not becoming hostile to cryonics, and possible civilization-damaging events (large-scale warfare, natural disasters, etc) are all bigger concerns than the medicine involved. All of these concerns can be quantified.
    Vaniver:
    I am talking about the odds, and even if I were talking about the event, I feel pretty strongly that we can be more certain about things like the sun rising tomorrow than me winning the lottery next week. My odds of winning the lottery with each ticket I buy are 1 in X plus/minus some factor for fraud/finding a winning ticket. That's a pretty low range of odds. My odds for being revived after cryonics have a much wider range, since the events leading up to it are far more complicated than removing 6 balls from an urn. Hence, fundamental uncertainty because the fundamental aspects of the problem are different in a way that leads to less certainty.
    shokwave:
    Yes, they have a much wider range, but all mathematical treatments of that range that I've seen come out showing the lower limit to be at least a few orders of magnitude greater than the lottery. Even though we are uncertain about how likely cryonics is to work, we are certain it's more likely than winning the lottery.
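
    A toy version of that comparison (the lottery figure is roughly that of a pick-6 jackpot; the cryonics range is entirely made up for illustration):

```python
# Toy comparison of a precisely known small probability against an
# uncertain range. All figures are illustrative, not real estimates.
import math

p_lottery = 1 / 14_000_000                 # roughly a pick-6 jackpot

# A deliberately wide, hypothetical range for "cryonics works for me":
p_cryo_low, p_cryo_high = 1e-3, 2e-1

# Even the pessimistic end of the range beats the lottery by a wide margin.
print(math.log10(p_cryo_low / p_lottery))  # ~4.1 orders of magnitude
```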
    JohnH:
    Unless you discover a way of gaming the lottery system.
    pnrjulius:
    Though that's actually illegal, so you'd have to include the chance of getting caught.
    gwern:
    And scientific research!
    Vaniver:
    If you define success as "increased knowledge" instead of "new useful applications," then the probability of success for doing scientific research is high (i.e. >75%).
    Will_Sawin:
    For individual experiments, it is often low, depending on the field.
    wedrifid:
    You increase your knowledge every time you do an experiment. Just as you do every time you ask a question in Guess Who? At the very worst you discover that you asked a stupid question or that your opponent gives unreliable answers.
    Will_Sawin:
    The relevant probability is p(benefits>costs), not p(benefits>0).
    wedrifid:
    Reading through the context confirms that the relevant probability is p(increased knowledge). I have no specified position on whether the knowledge gained is sufficient to justify the expenditure of effort.
    Will_Sawin:
    Indeed. I forgot. Oops.
    pnrjulius:
    Often it clearly isn't; so don't do that sort of research. Don't spend $200 million trying to determine if there are a prime number of green rocks in Texas.
    Eliezer Yudkowsky:
    No. If everything else we believe about the universe stays true, and humanity survives the next century, cryonics should work by default. Are there a number of things that could go wrong? Yes. Is the disjunction of all those possibilities a large probability? Quite. But by default, it should simply work. Despite various what-ifs, ceteris paribus, adding carbon dioxide to the atmosphere would be expected to produce global warming, and you would need specific evidence to contradict that. In the same way, ceteris paribus, vitrification at liquid nitrogen temperatures ought to preserve your brain and preserving your brain ought to preserve your you, and despite various what-ifs, you would need specific evidence to contradict it, because it is implied by the generalizations we already believe about the universe.

    Everything you say after the "No" is true but doesn't support your contradiction of:

    I can't think of one single case in my experience when the argument "It has a small probability of success, but we should pursue it, because the probability if we don't try is zero" turned out to be a good idea.

    Er ... isn't that the argument for cryonics?

    There is no need to defend cryonics here. Just relax the generalisation. I'm surprised you 'can't think of a single case in your experience' anyway. It took me 10 seconds to think of three in mine. Hardly surprising - such cases turn up whenever the payoffs multiply out right.

    shokwave:
    I think the kind of small probabilities Eliezer was talking about (not that he was specific) here is small in the sense that there is a small probability that evolution is wrong, there is a small probability that God exists, etc. The other interpretation is something like there is a small probability you will hit your open-ended straight draw (31%). If there are at least two players other than you calling, though, it is always a good idea to call (excepting tournament and all-in considerations). So it depends on what interpretation you have of the word 'small'. By the first definition of small (vanishing), I can't think of a single argument that was a good idea. By the second, I can think of thousands. So, the generalisation is leaky because of that word 'small'. Instead of relaxing it, just tighten up the 'small' part.
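
    A rough expected-value sketch of that straight-draw call (simplified to a single bet with no further betting; the pot and bet sizes are made up):

```python
# Simplified pot-odds check for the open-ended straight draw mentioned above.
# Ignores future betting rounds and implied odds; all sizes are hypothetical.

def call_ev(p_hit, pot, bet, other_callers):
    """Expected value of calling `bet` when `other_callers` also call."""
    win_amount = pot + other_callers * bet        # gained when the draw hits
    return p_hit * win_amount - (1 - p_hit) * bet

# 31% to hit, two other callers, and a pot that already holds three bets:
print(call_ev(p_hit=0.31, pot=3, bet=1, other_callers=2))
# ~0.86 > 0, so the "small" 31% chance is still worth taking here.
```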
    wedrifid:
    Redefinition not supported by the context.
    shokwave:
    I already noted that Eliezer was not specific enough to support that redefinition. I was offering an alternate course of action for Eliezer to take.
    wedrifid:
    That would certainly be a more reasonable position. (Except, obviously, where the payoffs were commensurately large. That obviously doesn't happen often. Situations like "3 weeks to live, can't afford cryonics" are the only kind of exception that spring to mind.)
    Eliezer Yudkowsky:
    Name one? We might be thinking of different generalizations here.

    We might be thinking of different generalizations here.

    Almost certainly. I am specifically referring to the generalisation quoted by David. It is, in fact, exactly the reasoning I used when I donated to the SIAI. Specifically, I estimate the probability of me or even humanity surviving for the long term if we don't pull off FAI to be vanishingly small (like that of winning the lottery by mistake without buying a ticket) so donated to support FAI research even though I think it to be, well, "impossible".

    More straightforward examples crop up all the time when playing games. Just last week I bid open misere when I had a 10% chance of winning - the alternatives of either passing or making a 9 call were guaranteed losses of the 500 game.

    soreff:
    Hmm. The "if humanity survives the next century" covers the uFAI possibility (where I suspect the bulk of the probability is). I'm taking it as a given that successful cryonics is possible in principle (no vitalism etc.). Still, even conditional on no uFAI, there is a substantial probability that cryonics, as a practical matter of actually reviving patients, will fail: technology may simply not be applied in that direction; the amount of specific research needed to actually revive patients may exceed the funding available; technology as a whole may stop progressing. We've had a lot of success in the last few decades in computing, less in energy, little in transportation, and what looks much like saturation in pharmaceuticals - and the lithography advances which have been driving computing look like they have maybe another factor of two to go (unless we get atomically precise nanotechnology - which mostly hasn't been funded).

    Perhaps there is a version of "coming to terms with one's mortality" which isn't deathist, and isn't theological, and isn't some vague displacement of one's hopes onto later generations, but is simply saying that the hope of increasing one's lifespan by additional effort isn't plausibly supported by the evidence, given the tradeoff of what one could instead do with that effort.
    soreff:
    'scuse the self-follow-up... One other thing that makes me skeptical about "cryonics should work by default": a large chunk of what makes powerful parts of our society value (at least some) human life is their current inability to manufacture plug-compatible replacements for humans. Neither governments nor corporations can currently build taxpayers or employees. If these structures gained the ability to build human equivalents for the functions that they value, I'd expect policies like requiring emergency rooms to admit people regardless of ability to pay to be dropped.

    Successful revival of cryonics patients requires the ability to either repair or upload a frozen, rather damaged, brain. Either of these capabilities strongly suggests the ability to construct a healthy but blank brain or uploaded equivalent from scratch - but this is most of what is needed to create a plug-compatible replacement for a person (albeit requiring training - one time anyway, and then copying can be used...). To put it another way: corporations and governments have capabilities beyond what individuals have, and they aren't known for using them humanely. They already are uFAIs, in a sense. Fortunately, for now, they are built of humans as component parts, so they currently can't dispense with us. If technology progresses to the point of being able to manufacture human equivalents, these structures will be free to evolve into full-blown uFAIs, presumably with lethal consequences.

    If "by default" includes keeping something like our current social structure, with structures like corporations and governments present, I'd expect that for cryonics patients to be revived, our society would have to hit a very narrow window of technological capability. It would have to be capable of repairing or uploading frozen brains, but not capable of building plug-in human equivalents. This looks inherently improbable, rather than what I'd consider a default scenario.

    The cost of homes for sale is now on the decline. Buyers go out and negotiate.

    JohnH:
    "The cost of homes for sale is now on the decline. Buyers go out and negotiate." This is funny, because in some areas these many years later the prices are still in decline.

    Eliezer: I can actually think of one case in which the argument "It has a small probability of success, but we should pursue it, because the probability if we don't try is zero".

    Say someone is dying of a usually-fatal disease, and there's an experimental treatment available that has only a small probability of working. If the goal is to not have the person die, it makes more sense to try the experimental treatment than not try it, because if you don't try it, the person is going to die anyway.

    pnrjulius:
    Well, maybe. Depending on how much it costs to do that experimental treatment, compared to other things we could do with those resources. (Actually a large part of the problem with rising medical costs in the developed world right now is precisely due to heavier use of extraordinary experimental treatments.)

    Er, that should have been:

    I can actually think of one case in which the argument "It has a small probability of success, but we should pursue it, because the probability if we don't try is zero" is potentially a reasonable one.

    Yeah, but Anne, I've never in real life encountered that situation.

    PS: What our putative terminal patient ought to do is sign up for cryonics, something that has a much better than "small" chance of working. And if the experimental treatment would get in the way of that, forget the experimental treatment. If people didn't cling to tiny hopes, they might see their large ones.

    JohnH:
    Problem with this is that if the experimental treatments never get tried then even if cryonics works they will still be terminally ill when defrosted. They should probably also make sure that there is some prize around for curing their specific illness with the hope that it will be easily solvable in the future, assuming the illness is relatively rare.
    JoshuaZ:
    No one is going to remove someone from suspension if one can't cure whatever killed the person. That would be a waste of resources. This is a really good idea.
    thomblake:
    If reviving people works at all, it will probably be either uploading or rebuilding a human body from scratch. Reviving an ill person doesn't seem likely enough to be concerned about.

    Eliezer: Agreed, though I'd probably classify cryonics as a kind of experimental treatment. And I think that in the case of any illness bound to destroy the brain (e.g., Alzheimer's), cryonics is, well, almost a no-brainer (no pun intended).

    Casey's problem isn't that he's still "trying." If I was in his situation, I'd keep trying as well. His future depends on it. His real problem is he isn't trying the right things. He's let properties go into foreclosure that he should have negotiated deeds in lieu for a long time ago. He should have declared bankruptcy a long time ago. He should have fixed up his properties a little bit at least to show better. He should have gotten a job as a source of income.

    If you're interested, I actually interviewed Casey last year when this first happened. You can read the story here.

    gwern:
    Working link: http://web.archive.org/web/20080217121918/http://slcrealestate.blogspot.com/2006/10/casey-serins-story-american-dream-part.html

    Um, how come nobody is focusing on the fact that he LIED to get the mortgages? Surely that's the more grave mistake. Had he applied legally, he might not be in debt that he can't repay. He should be in jail for fraud, not lambasted by bloggers for his failure to admit defeat.

    Nigel, that's a good point. The skill of rationality is not to lose all hope, but to lose certain specific hopes under specific conditions.

    Rafe, that's also an important point in how-not-to-be-stupid. Reality being intertwined, it is very hard to create a genuinely realistic deception. If you once tell a lie, the truth is ever after your enemy, and all truths entangled with that truth.

    We seem to have a disproportionate number of sayings and heuristics making us less impulsive and making our time horizons longer. That might have developed as a way of sustaining the long-term discounting we humans have in comparison to other animals; http://www.wjh.harvard.edu/~mnkylab/publications/animalcommunication/constraints.pdf has a nice diagram (figure 3) showing the difference between humans (slowest), rats and pigeons (fastest discounting). Slow discounting might be linked to our foraging lifestyle, http://www.wjh.harvard.edu/~mnkylab/publicati...

    There is a story for very young children called "The Carrot Seed" where a little boy plants a carrot seed and waters it and pulls out the weeds around it, and keeps on doing so even though everyone keeps telling him nothing will grow. At the end, of course, a giant carrot comes up. I've always had mixed feelings about reading that story. On the one hand, you don't want to send the message that things are true just because you believe, and that evidence to the contrary doesn't matter. On the other hand, you do want to inoculate the kid agains...

    It would be good to get data about how often real-world suggestions to give up hope were good solid advice, and how often they came from malice or jealousy.

    Anders, there's no obvious connection between slow discounting and irrational persistence. Slow discounting is relevant when deciding whether to undertake a sunk investment (based on its ex-ante expected return), but once the investment has been incurred the sunk cost should be ignored as having no bearing on further decisions.
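
    A tiny numerical illustration of that point (the numbers are hypothetical): the amount already sunk appears on both sides of the continue-or-stop comparison, so it cancels out of the decision.

```python
# The sunk cost cancels out: whether to continue depends only on future
# costs and benefits. All numbers are hypothetical.

sunk = 50            # already spent, unrecoverable
future_cost = 30     # additional cost to finish
payoff = 40          # value if finished

net_if_continue = payoff - future_cost - sunk    # 40 - 30 - 50 = -40
net_if_stop = -sunk                              # -50

# Continue iff net_if_continue > net_if_stop, i.e. payoff - future_cost > 0;
# `sunk` appears on both sides and drops out.
print(net_if_continue > net_if_stop)             # True here: 40 > 30
```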

    RBH said: "It might make sense to ignore evidence that you are likely to fail if it is a competitive situation and the evidence comes from a rival who is likely to gain if you give up.

    As far as Casey Serin was concerned, that didn't apply. The evidence came from a bank that stood to gain if he succeeded."

    There may be a version of Casey out there that succeeded, because he gambled one year earlier than Casey. If Casey had pulled this off, he would have been considered a real estate genius.

    Chance plays a large role.

    Luke_A_Somers:
    I'd bet that most such versions would have plowed it back into the same thing until they busted.

    RE: Casey Serin:

    Some similarly business-minded folks from the old country also run into tough times, due to their own innovative ideas for creating wealth, just like Casey:

    Uzbekistani immigrants await discussion of entrepreneurial methods

    Hm...

    Is there some mysterious but great difference between ending up -$1,000,000 and -$100,000,000 in the USA?

    If there isn't such a thing, the wrong choices may have been made, but not by Casey Serin. In fact, if we are talking about a jury (12 honest taxpayers), it may be a rather smart idea to spend a few million USD on the favorite charities of the honest taxpayers of his state at this point. And to spend a few thousand on lawyers and psychologists too.

    PS. The politicians do spend a lot of money their countries don't actually own to delay the current Recession and turn it into the next Great Depression. And I do believe they have rather good chances of succeeding at getting this Depression and getting away with it too.

    Casey seems perfectly rational to me.

    If you're in the hole $2.2 million, what is the harm to you of doubling down? You'll have to declare bankruptcy twice as loudly?

    This point was actually driven home to me over 20 years ago when I interviewed for a trading position. If your company is way in the hole, it may as well take what assets it has and make a leveraged bet. Either it gets out of the hole, or it's twice as broke, which, given limited liability, really isn't any more of a problem for the corporate officers.

    pnrjulius:
    Makes sense from the corporation's perspective. But also kinda sounds like moral hazard to me.

    Of course. That was the point. If you can make more bets than you can cover, and suffer no liability when you can't, you've got yourself a license to steal. And clearly the trader knew it.
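
    A minimal sketch of the incentive being described (all numbers are hypothetical): with limited liability the insiders' payoff is capped below at zero, so a higher-variance bet raises their expected payoff even when it adds nothing in expectation overall.

```python
# Limited liability turns the insiders' payoff into max(equity, 0),
# which rewards variance. All numbers are hypothetical (in $ millions).

def insider_payoff(equity_outcome):
    return max(equity_outcome, 0)    # creditors absorb anything below zero

equity = -2.2    # already $2.2M in the hole

# A fair double-or-nothing bet of 1: zero expected value, still zero upside here.
small_bet = [equity + 1.0, equity - 1.0]
print(sum(insider_payoff(o) for o in small_bet) / 2)    # 0.0

# A much riskier fair bet of 10: same zero expected value overall,
# but the insiders now see a positive expected payoff.
big_bet = [equity + 10.0, equity - 10.0]
print(sum(insider_payoff(o) for o in big_bet) / 2)      # 3.9
```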


    This is why I have decided not to be an entrepreneur. All the studies say that your odds are just not good enough to be worth it.

    themusicgod1:
    ...and even if you are, people who are able to re-arrange the odds to their favour may end up crowding out the honest ones ;)
    tlhonmey:
    The odds are long because all the obviously good ideas with no risk of failure are immediately snapped up by everyone. The key is to learn to spot those so you can move on them first, and also to keep a sane estimate of how much you're gambling vs the potential reward so that your net expected payout remains positive.

    Closely related: escalation of commitment. While it's possible not to escalate commitment when you're in a losing situation, escalating is often our default tendency.

    [anonymous]:
    Quite obviously that's factually incorrect!
    Document:
    I guess technically it's "too late" to give up on a dream if you've already accomplished it; but I'm not sure that's how most people would read the statement.

    So I think it depends. If the probability of success stays the same after each try, then you should quit; the lottery-ticket example that Eliezer gave is an example of that. But if there is an improvement, even a slight one, then maybe keep trying? It may be tricky, because every time you can say "but there is still a chance, right?" If you model the events as something like a Markov chain, it could be easier to be rational.
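
    A small sketch of that suggestion (not a full Markov chain, just a chain of independent tries with invented probabilities): compare a constant per-try success rate against one that improves with experience.

```python
# Toy model of repeated attempts, as suggested above. Probabilities are invented.

def p_eventual_success(per_try_probs):
    """Chance of succeeding at least once across a sequence of attempts."""
    p_fail_all = 1.0
    for p in per_try_probs:
        p_fail_all *= 1 - p
    return 1 - p_fail_all

constant = [0.01] * 10                            # lottery-like: no learning
improving = [0.01 * (i + 1) for i in range(10)]   # each attempt slightly better

print(p_eventual_success(constant))    # ~0.10
print(p_eventual_success(improving))   # ~0.43 -- learning changes the picture
```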