
Just Lose Hope Already

44 Post author: Eliezer_Yudkowsky 25 February 2007 12:39AM

Casey Serin, a 24-year-old web programmer with no prior experience in real estate, owes banks 2.2 million dollars after lying on mortgage applications in order to simultaneously buy 8 different houses in different states.  He took cash out of the mortgage (applied for larger amounts than the price of the house) and spent the money on living expenses and real-estate seminars.  He was expecting the market to go up, it seems.

That's not even the sad part.  The sad part is that he still hasn't given up.  Casey Serin does not accept defeat.  He refuses to declare bankruptcy, or get a job; he still thinks he can make it big in real estate.  He went on spending money on seminars.  He tried to take out a mortgage on a 9th house.  He hasn't failed, you see, he's just had a learning experience.

That's what happens when you refuse to lose hope.

While this behavior may seem to be merely stupid, it also puts me in mind of two Nobel-Prize-winning economists...

...namely Merton and Scholes of Long-Term Capital Management.

While LTCM raked in giant profits over its first three years, in 1998 the inefficiencies that LTCM was exploiting had started to vanish—other people knew about the trick, so it stopped working.

LTCM refused to lose hope.  Addicted to 40% annual returns, they borrowed more and more leverage to exploit tinier and tinier margins.  When everything started to go wrong for LTCM, they had equity of $4.72 billion, leverage of $124.5 billion, and derivative positions of $1.25 trillion.

Every profession has a different way to be smart—different skills to learn and rules to follow.  You might therefore think that the study of "rationality", as a general discipline, wouldn't have much to contribute to real-life success.  And yet it seems to me that how to not be stupid has a great deal in common across professions.  If you set out to teach someone how to not turn little mistakes into big mistakes, it's nearly the same art whether in hedge funds or romance, and one of the keys is this:  Be ready to admit you lost.

 

Part of the Letting Go subsequence of How To Actually Change Your Mind

Next post: "The Proper Use of Doubt"

Previous post: "The Crackpot Offer"

Comments (70)

Comment author: Kip_Werking 25 February 2007 01:49:31AM 8 points [-]

Excellent post. And very relevant, after Valentine's Day.

Comment author: Robin_Hanson2 25 February 2007 02:34:07AM 18 points [-]

This reminds me of so many stories whose explicit moral is "never give up." The hero keeps trying after everyone told him to quit, and in the end he succeeds, and the audience comes out of the theater reaffirming the value of hope. But, in real life, what a terrible thing to teach people.

Comment author: Atmosper 18 August 2009 07:17:35AM 29 points [-]

In conventional story structure, even though the hero never gives up, by the second turning point around 3/4 into the story, after having failed, he CHANGES STRATEGY, and succeeds. It's not the stories' fault if the audience doesn't get the message.

Comment author: Kip_Werking 25 February 2007 02:44:41AM 1 point [-]

Good point. Robin's comment, and Eliezer's post, remind me of this excellent article at The Situationist:

http://thesituationist.wordpress.com/2007/02/20/dispositionist-situational-characters/

Comment author: Doug_S.2 25 February 2007 06:19:59AM 7 points [-]

"Never give up" is bad advice?

Probability of success if you continue: small. Probability of success if you give up: zero.

Small is better than zero, am I right?

On the other hand, this analysis only matters if the cost of failure is no worse than the cost of giving up. The "rational" thing to do would be to give up if and only if (probability of success * utility of success) + (probability of failure * utility of failure) < (utility of giving up).

There are a lot of things that one can achieve through sheer persistence, but there are others that, well, you can't do, period. The trick is to be able to tell the difference. I suspect that I'm not going to be a star athlete no matter how much I practice, but I just might qualify for the Pro Tour some day.
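Doug's decision rule is easy to make concrete. A minimal sketch, with made-up utilities (the numbers are illustrative, not from the post):

```python
def should_give_up(p_success, u_success, u_failure, u_giving_up):
    """Give up iff the expected utility of continuing falls below the
    utility of giving up (Doug's rule, with p_failure = 1 - p_success)."""
    expected_continue = p_success * u_success + (1 - p_success) * u_failure
    return expected_continue < u_giving_up

# Hypothetical numbers: a 5% shot at a big payoff is worth taking
# when failure costs only slightly more than quitting...
print(should_give_up(0.05, 100, -5, 0))    # False: keep trying
# ...but not when failure is ruinous.
print(should_give_up(0.05, 100, -500, 0))  # True: lose hope already
```

The whole dispute in the thread below is effectively about the inputs: Eliezer's point is that people systematically misestimate `p_success` and ignore `u_failure`, not that the inequality itself is wrong.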

Comment author: akshatrathi 30 November 2009 12:12:21AM 4 points [-]

The point of this post was to show that persisting at something while being irrational can only cause harm. Of course, "Never give up" is not bad advice, but Eliezer's advice is to be rational and accept defeat when you need to.

Comment author: DanielLC 29 December 2009 03:26:25AM 17 points [-]

You're ignoring the probability of succeeding at something else. If you're still doing this, it's zero. If you give up, it's not.

Of course, that can also be considered a cost of failure, in which case you didn't ignore it.

Comment author: pnrjulius 30 June 2012 03:48:48AM -1 points [-]

The trick is to be able to tell the difference.

And what a trick it is!

Comment author: Anne_Corwin 25 February 2007 06:29:25AM 2 points [-]

This behavior seems similar to that engaged in by gamblers who keep betting, despite heavy losses at the beginning of the night, figuring that if they stay in long enough they might be able to get their money back (and possibly more besides). In some respects, this behavior seems to be primarily motivated by the desire to have what you've already done "count for something". That is, the person is compelled to keep trying at whatever it is they're doing so as not to have to face the fact that they've wasted time and resources -- because if they "win" or succeed eventually, they can justify all prior attempts and failures as steps in the process toward eventual success.

I don't know what makes a person more or less likely to be vulnerable to engaging in that sort of behavior. But I think anyone wishing to avoid that sort of behavior would indeed do well to train themselves to learn to let go of a particular path or strategy. I'm guessing there's an element of "magical thinking" involved here, related to a fallacious sense that you accomplish things through repeating the same strategy over and over again (because eventually, "it's bound to work", when in fact, it isn't.)

Comment author: RBH 25 February 2007 06:42:06AM 6 points [-]

The best trading advice I ever read was from Martin Mayer:

The central difference between the market professional and the sheep to be sheared is the professional's ability to take his losses.

That's been on my whiteboard since I started trading derivatives professionally 15 years ago.

Comment author: Joseph_Hertzlinger 25 February 2007 07:55:04AM 1 point [-]

It might make sense to ignore evidence that you are likely to fail if it is a competitive situation and the evidence comes from a rival who is likely to gain if you give up.

As far as Casey Serin was concerned, that didn't apply. The evidence came from a bank that stood to gain if he succeeded.

Comment author: Tom2 25 February 2007 01:54:29PM 11 points [-]

"If at first you fail, then try, then try again. After that, stop. There's no use being a damn fool about it."

Comment author: Eliezer_Yudkowsky 25 February 2007 06:05:50PM 12 points [-]

"Probability of success if you continue: small. Probability of success if you give up: zero."

Doug, that's exactly what people say to me when I challenge them on why they buy lottery tickets. "The chance of winning is tiny, but if I don't buy a ticket, the chance is zero."

I can't think of one single case in my experience when the argument "It has a small probability of success, but we should pursue it, because the probability if we don't try is zero" turned out to be a good idea. Typically it is an excuse not to confront the flaws of a plan that is just plain unripe. You know what happens when you try a strategy with a tiny probability of success? It fails, that's what happens.

The Simpsons gave us the best advice: "Can't win, don't try."

Comment author: David_Gerard 11 January 2011 02:35:32PM 18 points [-]

I can't think of one single case in my experience when the argument "It has a small probability of success, but we should pursue it, because the probability if we don't try is zero" turned out to be a good idea.

Er ... isn't that the argument for cryonics?

Comment author: shokwave 11 January 2011 03:08:35PM 7 points [-]

From four posts down:

...sign up for cryonics, something that has a much better than "small" chance of working.

That is, the chances of cryonics working are something like six or seven orders of magnitude better than winning the lottery.

Comment author: Vaniver 11 January 2011 04:17:45PM 10 points [-]

That is, the chances of cryonics working are something like six or seven orders of magnitude better than winning the lottery.

While that shows the lottery is stupid, it doesn't show that cryonics has made it into smart territory. Things are further complicated by the fact that your odds of winning the lottery are known, certain, and printed on the ticket; your odds of winning the cryonics lottery are fundamentally uncertain.

Comment author: shokwave 12 January 2011 03:19:59AM 2 points [-]

your odds of winning the cryonics lottery are fundamentally uncertain.

I disagree with 'fundamentally'. It is no more uncertain than any future event; to call all future events fundamentally uncertain could be true on certain acceptable definitions of fundamental, but it's hardly a useful word in those cases.

Medical research and testing has been done with cryonics; we have a good idea of exactly what kinds of damage occur during vitrification, and a middling idea of what would be required to fix it. IIRC cryonics institutions remaining in operation, the law not becoming hostile to cryonics, and possible civilization-damaging events (large-scale warfare, natural disasters, etc) are all bigger concerns than the medicine involved. All of these concerns can be quantified.

Comment author: Vaniver 12 January 2011 03:28:07AM 2 points [-]

It is no more uncertain than any future event;

I am talking about the odds, and even if I were talking about the event, I feel pretty strongly that we can be more certain about things like the sun rising tomorrow than me winning the lottery next week. My odds of winning the lottery with each ticket I buy are 1 in X plus/minus some factor for fraud/finding a winning ticket. That's a pretty low range of odds. My odds for being revived after cryonics have a much wider range, since the events leading up to it are far more complicated than removing 6 balls from an urn. Hence, fundamental uncertainty, because the fundamental aspects of the problem are different in a way that leads to less certainty.

Comment author: shokwave 12 January 2011 11:07:45AM 2 points [-]

Yes, they have a much wider range, but all mathematical treatments of that range that I've seen come out showing the lower limit to be at least a few orders of magnitude greater than the lottery. Even though we are uncertain about how likely cryonics is to work, we are certain it's more likely than winning the lottery.

Comment author: JohnH 21 April 2011 10:23:33PM 1 point [-]

Unless you discover a way of gaming the lottery system.

Comment author: pnrjulius 30 June 2012 03:50:12AM 0 points [-]

Though that's actually illegal, so you'd have to include the chance of getting caught.

Comment author: gwern 11 January 2011 05:06:21PM 3 points [-]

And scientific research!

Comment author: Vaniver 11 January 2011 05:34:22PM 3 points [-]

If you define success as "increased knowledge" instead of "new useful applications," then the probability of success for doing scientific research is high (i.e. >75%).

Comment author: Will_Sawin 12 January 2011 04:44:20AM 1 point [-]

For individual experiments, it is often low, depending on the field.

Comment author: wedrifid 12 January 2011 12:20:54PM *  2 points [-]

You increase your knowledge every time you do an experiment. Just as you do every time you ask a question in Guess Who? At the very worst you discover that you asked a stupid question or that your opponent gives unreliable answers.

Comment author: Will_Sawin 13 January 2011 12:58:39AM 1 point [-]

The relevant probability is p(benefits > costs), not p(benefits > 0).

Comment author: wedrifid 13 January 2011 06:46:52AM 2 points [-]

Reading through the context confirms that the relevant probability is p(increased knowledge). I have no specified position on whether the knowledge gained is sufficient to justify the expenditure of effort.

Comment author: Will_Sawin 13 January 2011 01:16:38PM 2 points [-]

Indeed. I forgot. Oops.

Comment author: pnrjulius 30 June 2012 03:51:02AM -1 points [-]

Often it clearly isn't; so don't do that sort of research.

Don't spend $200 million trying to determine if there are a prime number of green rocks in Texas.

Comment author: Eliezer_Yudkowsky 12 January 2011 11:43:32AM 6 points [-]

No. If everything else we believe about the universe stays true, and humanity survives the next century, cryonics should work by default. Are there a number of things that could go wrong? Yes. Is the disjunction of all those possibilities a large probability? Quite. But by default, it should simply work. Despite various what-ifs, ceteris paribus, adding carbon dioxide to the atmosphere would be expected to produce global warming, and you would need specific evidence to contradict that. In the same way, ceteris paribus, vitrification at liquid nitrogen temperatures ought to preserve your brain, and preserving your brain ought to preserve your you, and despite various what-ifs, you would need specific evidence to contradict it, because it is implied by the generalizations we already believe about the universe.

Comment author: wedrifid 12 January 2011 12:42:19PM 4 points [-]

Everything you say after the 'No." is true but doesn't support your contradiction of:

I can't think of one single case in my experience when the argument "It has a small probability of success, but we should pursue it, because the probability if we don't try is zero" turned out to be a good idea.

Er ... isn't that the argument for cryonics?

There is no need to defend cryonics here. Just relax the generalisation. I'm surprised you 'can't think of a single case in your experience' anyway. It took me 10 seconds to think of three in mine. Hardly surprising - such cases turn up whenever the payoffs multiply out right.

Comment author: shokwave 12 January 2011 12:55:06PM 2 points [-]

I think the kind of small probabilities Eliezer was talking about (not that he was specific) here is small in the sense that there is a small probability that evolution is wrong, there is a small probability that God exists, etc.

The other interpretation is something like there is a small probability you will hit your open-ended straight draw (31%). If there are at least two players other than you calling, though, it is always a good idea to call (excepting tournament and all-in considerations). So it depends on what interpretation you have of the word 'small'.

By the first definition of small (vanishing), I can't think of a single argument that was a good idea. By the second, I can think of thousands. So, the generalisation is leaky because of that word 'small'. Instead of relaxing it, just tighten up the 'small' part.
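shokwave's poker case is just the pot-odds arithmetic made explicit. A rough sketch, with made-up bet sizes (the ~31% figure for an open-ended straight draw is from the comment; the pot amounts are hypothetical):

```python
def call_is_profitable(p_win, pot, cost_to_call):
    """A call is +EV when what you win, weighted by your chance of winning,
    exceeds what you lose the rest of the time:
    p_win * pot > (1 - p_win) * cost_to_call."""
    return p_win * pot > (1 - p_win) * cost_to_call

p = 0.31   # ~31% to complete an open-ended straight draw
bet = 10
# Against a lone bettor into a small pot, the call is losing...
print(call_is_profitable(p, 20, bet))  # False
# ...but with two other players also calling, the pot already holds
# several bets, and the same "small" probability becomes profitable.
print(call_is_profitable(p, 40, bet))  # True
```

This is the sense in which a "small" probability can be fine to pursue: the payoff multiplies out. The vanishing probabilities of the first definition never clear this bar.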

Comment author: wedrifid 12 January 2011 01:46:52PM 1 point [-]

Redefinition not supported by the context.

Comment author: shokwave 12 January 2011 01:53:32PM 0 points [-]

I already noted that Eliezer was not specific enough to support that redefinition. I was offering an alternate course of action for Eliezer to take.

Comment author: wedrifid 13 January 2011 06:49:26AM 0 points [-]

That would certainly be a more reasonable position. (Except, obviously, where the payoffs were commensurately large. That obviously doesn't happen often. Situations like "3 weeks to live, can't afford cryonics" are the only kind of exception that spring to mind.)

Comment author: Eliezer_Yudkowsky 12 January 2011 07:56:15PM 0 points [-]

Name one? We might be thinking of different generalizations here.

Comment author: wedrifid 13 January 2011 07:09:09AM 4 points [-]

We might be thinking of different generalizations here.

Almost certainly. I am specifically referring to the generalisation quoted by David. It is, in fact, exactly the reasoning I used when I donated to the SIAI. Specifically, I estimate the probability of me or even humanity surviving for the long term if we don't pull off FAI to be vanishingly small (like that of winning the lottery by mistake without buying a ticket) so donated to support FAI research even though I think it to be, well, "impossible".

More straightforward examples crop up all the time when playing games. Just last week I bid open misere when I had a 10% chance of winning - the alternatives of either passing or making a 9 call were guaranteed losses of the 500 game.

Comment author: soreff 07 November 2011 07:19:00PM *  2 points [-]

If everything else we believe about the universe stays true, and humanity survives the next century, cryonics should work by default. Are there a number of things that could go wrong? Yes. Is the disjunction of all those possibilities a large probability? Quite. But by default, it should simply work.

Hmm. The "if humanity survives the next century" covers the uFAI possibility (where I suspect the bulk of the probability is). I'm taking it as a given that successful cryonics is possible in principle (no vitalism etc.). Still, even conditional on no uFAI, there is a substantial probability that cryonics, as a practical matter of actually reviving patients, will fail:

Technology may simply not be applied in that direction. The amount of specific research needed to actually revive patients may exceed the funding available.

Technology as a whole may stop progressing. We've had a lot of success in the last few decades in computing, less in energy, little in transportation, what looks much like saturation in pharmaceuticals - and the lithography advances which have been driving computing look like they have maybe another factor of two to go (unless we get atomically precise nanotechnology - which mostly hasn't been funded)

Perhaps there is a version of "coming to terms with one's mortality" which isn't deathist, and isn't theological, and isn't some vague displacements of one's hopes on to later generations, but is simply saying that hope of increasing one's lifespan by additional efforts isn't plausibly supported by the evidence, and the tradeoff of what one could instead do with that effort.

Comment author: soreff 10 November 2011 03:36:16PM *  2 points [-]

'scuse the self-follow-up...

One other thing that makes me skeptical about "cryonics should work by default":

A large chunk of what makes powerful parts of our society value (at least some) human life is their current inability to manufacture plug-compatible replacements for humans. Neither governments nor corporations can currently build taxpayers or employees. If these structures gained the ability to build human equivalents for the functions that they value, I'd expect policies like requiring emergency rooms to admit people regardless of ability to pay to be dropped.

Successful revival of cryonics patients requires the ability to either repair or upload a frozen, rather damaged, brain. Either of these capabilities strongly suggests the ability to construct a healthy but blank brain or uploaded equivalent from scratch - but this is most of what is needed to create a plug-compatible replacement for a person (albeit requiring training - one time anyway, and then copying can be used...).

To put it another way: corporations and governments have capabilities beyond what individuals have, and they aren't known for using them humanely. They already are uFAIs, in a sense. Fortunately, for now, they are built of humans as component parts, so they currently can't dispense with us. If technology progresses to the point of being able to manufacture human equivalents, these structures will be free to evolve into full-blown uFAIs, presumably with lethal consequences.

If "by default" includes keeping something like our current social structure, with structures like corporations and governments present, I'd expect that for cryonics patients to be revived, our society would have to hit a very narrow window of technological capability. It would have to be capable of repairing or uploading frozen brains, but not capable of building plug-in human equivalents. This looks inherently improbable, rather than what I'd consider a default scenario.

Comment author: Tim_R 25 February 2007 06:37:58PM 0 points [-]

The cost of homes for sale is now on the decline. Buyers go out and negotiate.

Comment author: JohnH 21 April 2011 10:27:26PM *  1 point [-]

"The cost of homes for sale is now on the decline. Buyers go out and negotiate."

This is funny, because in some areas these many years later the prices are still in decline.

Comment author: Anne_Corwin 25 February 2007 10:16:12PM 1 point [-]

Eliezer: I can actually think of one case in which the argument "It has a small probability of success, but we should pursue it, because the probability if we don't try is zero".

Say someone is dying of a usually-fatal disease, and there's an experimental treatment available that has only a small probability of working. If the goal is to not have the person die, it makes more sense to try the experimental treatment than not try it, because if you don't try it, the person is going to die anyway.

Comment author: pnrjulius 30 June 2012 03:52:53AM -1 points [-]

Well, maybe. Depending on how much it costs to do that experimental treatment, compared to other things we could do with those resources.

(Actually a large part of the problem with rising medical costs in the developed world right now is precisely due to heavier use of extraordinary experimental treatments.)

Comment author: Anne_Corwin 25 February 2007 10:42:36PM 0 points [-]

Er, that should have been:

I can actually think of one case in which the argument "It has a small probability of success, but we should pursue it, because the probability if we don't try is zero" is potentially a reasonable one.

Comment author: Eliezer_Yudkowsky 25 February 2007 11:38:49PM 2 points [-]

Yeah, but Anne, I've never in real life encountered that situation.

Comment author: Eliezer_Yudkowsky 25 February 2007 11:40:58PM 2 points [-]

PS: What our putative terminal patient ought to do is sign up for cryonics, something that has a much better than "small" chance of working. And if the experimental treatment would get in the way of that, forget the experimental treatment. If people didn't cling to tiny hopes, they might see their large ones.

Comment author: JohnH 21 April 2011 10:32:49PM 3 points [-]

Problem with this is that if the experimental treatments never get tried then even if cryonics works they will still be terminally ill when defrosted. They should probably also make sure that there is some prize around for curing their specific illness with the hope that it will be easily solvable in the future, assuming the illness is relatively rare.

Comment author: JoshuaZ 22 April 2011 01:23:49AM 1 point [-]

Problem with this is that if the experimental treatments never get tried then even if cryonics works they will still be terminally ill when defrosted.

No one is going to remove someone from suspension if one can't cure whatever killed the person. That would be a waste of resources.

They should probably also make sure that there is some prize around for curing their specific illness with the hope that it will be easily solvable in the future, assuming the illness is relatively rare.

This is a really good idea.

Comment author: thomblake 08 May 2012 03:35:04PM 0 points [-]

Problem with this is that if the experimental treatments never get tried then even if cryonics works they will still be terminally ill when defrosted.

If reviving people works at all, it will probably be either uploading or rebuilding a human body from scratch. Reviving an ill person doesn't seem likely enough to be concerned about.

Comment author: Anne_Corwin 25 February 2007 11:45:51PM 0 points [-]

Eliezer: Agreed, though I'd probably classify cryonics as a kind of experimental treatment. And I think that in the case of any illness bound to destroy the brain (e.g., Alzheimer's), cryonics is, well, almost a no-brainer (no pun intended).

Comment author: Nigel_Swaby 26 February 2007 12:19:50AM 6 points [-]

Casey's problem isn't that he's still "trying." If I was in his situation, I'd keep trying as well. His future depends on it. His real problem is he isn't trying the right things. He's let properties go into foreclosure that he should have negotiated deeds in lieu for a long time ago. He should have declared bankruptcy a long time ago. He should have fixed up his properties a little bit at least to show better. He should have gotten a job as a source of income.

If you're interested, I actually interviewed Casey last year when this first happened. You can read the story here.

Comment author: gwern 20 January 2011 06:08:23PM 2 points [-]
Comment author: Rafe_Furst 26 February 2007 07:20:13AM 2 points [-]

Um, how come nobody is focusing on the fact that he LIED to get the mortgages? Surely that's the more grave mistake. Had he applied legally, he might not be in debt that he can't repay. He should be in jail for fraud, not lambasted by bloggers for his failure to admit defeat.

Comment author: Eliezer_Yudkowsky 26 February 2007 08:15:50AM 3 points [-]

Nigel, that's a good point. The skill of rationality is not to lose all hope, but to lose certain specific hopes under specific conditions.

Rafe, that's also an important point in how-not-to-be-stupid. Reality being intertwined, it is very hard to create a genuinely realistic deception. If you once tell a lie, the truth is ever after your enemy, and all truths entangled with that truth.

Comment author: Anders 26 February 2007 06:23:20PM 2 points [-]

We seem to have a disproportionate number of sayings and heuristics making us less impulsive and making our time horizons longer. That might have developed as a way of sustaining the long-term discounting we humans have in comparison to other animals; http://www.wjh.harvard.edu/~mnkylab/publications/animalcommunication/constraints.pdf has a nice diagram (figure 3) showing the difference between humans (slowest), rats, and pigeons (fastest discounting). Slow discounting might be linked to our foraging lifestyle, but since human societies have developed quickly recently, the benefits of discounting have risen faster than evolution could have adapted (or impulsive individuals have a fitness advantage by having children earlier).

So maybe we have a culturally transmitted bias towards slow discounting and persistence, that normally counteracts our too fast discounting. But in some individuals it becomes maladaptive, perhaps because they already are naturally stubborn.

Comment author: David_J._Balan 26 February 2007 09:34:22PM 4 points [-]

There is a story for very young children called "The Carrot Seed" where a little boy plants a carrot seed and waters it and pulls out the weeds around it, and keeps on doing so even though everyone keeps telling him nothing will grow. At the end, of course, a giant carrot comes up. I've always had mixed feelings about reading that story. On the one hand, you don't want to send the message that things are true just because you believe, and that evidence to the contrary doesn't matter. On the other hand, you do want to innoculate the kid against excessive self-doubt and against taking too seriously people who, out of malice or out of instinctual aversion to different ideas, or because the idea of someone else succeeding is an implicit rebuke to them for not having tried, love to tell people what they can't do.

Comment author: Robin_Hanson2 26 February 2007 09:40:54PM 4 points [-]

It would be good to get data about how often real-world suggestions to give up hope were good solid advice, and how often they came from malice or jealousy.

Comment author: zzz 27 February 2007 01:31:56AM 0 points [-]

Anders, there's no obvious connection between slow discounting and irrational persistence. Slow discounting is relevant when deciding whether to undertake a sunk investment (based on its ex-ante expected return), but once the investment has been incurred the sunk cost should be ignored as having no bearing on further decisions.

Comment author: Ben_Hill 27 February 2007 01:49:36PM 0 points [-]

Joseph Hertzlinger said: "It might make sense to ignore evidence that you are likely to fail if it is a competitive situation and the evidence comes from a rival who is likely to gain if you give up.

As far as Casey Serin was concerned, that didn't apply. The evidence came from a bank that stood to gain if he succeeded."

There may be a version of Casey out there that succeeded, because he gambled one year earlier than Casey. If Casey had pulled this off, he would have been considered a real estate genius.

Chance plays a large role.

Comment author: Luke_A_Somers 13 November 2011 03:28:42PM 1 point [-]

I'd bet that most of that version would have plowed it back into the same thing until he busted.

Comment author: Soem_Dood 13 May 2007 12:26:51PM -1 points [-]

RE: Casey Serin:

Some similarly business-minded folks from the old country also run into tough times, due to their own innovative ideas for creating wealth, just like Casey:

Uzbekistani immigrants await discussion of entrepreneurial methods

Comment author: Eliezer_Yudkowsky 28 November 2009 09:51:41PM 0 points [-]
Comment author: mat33 05 October 2011 03:42:37AM *  0 points [-]

Hm...

Is there some mysterious, but great, difference between being -$1,000,000 and -$100,000,000 in the USA?

If there isn't such a thing, the wrong choices may have been made, but not by Casey Serin. In fact, if we are speaking of a jury (12 honest taxpayers), it may be a rather smart idea to spend a few million USD at this point on the favorite charities of those honest taxpayers of his state. To spend a few thousand on lawyers and psychologists too.

PS. The politicians do spend a lot of money their countries don't actually own to delay the current Recession and make the next Great Depression out of it. And I do believe they have rather good chances to succeed at getting this Depression and getting away with it too.

Comment author: buybuydandavis 13 January 2012 04:50:11AM 13 points [-]

Casey seems perfectly rational to me.

If you're in the hole $2.2 mil, what is the harm to you of doubling down? You'll have to declare bankruptcy twice as loudly?

This point was actually driven home to me over 20 years ago when I interviewed for a trading position. If your company is way in the hole, it may as well take what assets it has and make a leveraged bet. Either it gets out of the hole, or it's twice as broke, which, given limited liability, really isn't any more of a problem for the corporate officers.

Comment author: pnrjulius 30 June 2012 03:55:24AM 0 points [-]

Makes sense from the corporation's perspective. But also kinda sounds like moral hazard to me.

Comment author: buybuydandavis 30 June 2012 07:52:48AM *  6 points [-]

Of course. That was the point. If you can make more bets than you can cover, and suffer no liability when you can't, you've got yourself a license to steal. And clearly the trader knew it.

Comment author: TraderJoe 08 May 2012 02:17:12PM *  0 points [-]

[comment deleted]

Comment author: pnrjulius 30 June 2012 03:47:38AM -1 points [-]

This is why I have decided not to be an entrepreneur. All the studies say that your odds are just not good enough to be worth it.

Comment author: themusicgod1 04 January 2013 06:40:56PM 0 points [-]

All the studies say that your odds are just not good enough to be worth it.

...and even if you are, people who are able to re-arrange the odds to their favour may end up crowding out the honest ones ;)

Comment author: themusicgod1 04 January 2013 06:33:07PM *  0 points [-]

Closely related: escalation of commitment. While it's possible not to escalate commitment when you're in a losing situation, escalating is often our default tendency.