David_Gerard comments on Just Lose Hope Already - Less Wrong

Post author: Eliezer_Yudkowsky 25 February 2007 12:39AM


Comment author: Eliezer_Yudkowsky 25 February 2007 06:05:50PM 15 points [-]

"Probability of success if you continue: small. Probability of success if you give up: zero."

Doug, that's exactly what people say to me when I challenge them on why they buy lottery tickets. "The chance of winning is tiny, but if I don't buy a ticket, the chance is zero."

I can't think of one single case in my experience when the argument "It has a small probability of success, but we should pursue it, because the probability if we don't try is zero" turned out to be a good idea. Typically it is an excuse not to confront the flaws of a plan that is just plain unripe. You know what happens when you try a strategy with a tiny probability of success? It fails, that's what happens.

The Simpsons gave us the best advice: "Can't win, don't try."

Comment author: David_Gerard 11 January 2011 02:35:32PM 22 points [-]

I can't think of one single case in my experience when the argument "It has a small probability of success, but we should pursue it, because the probability if we don't try is zero" turned out to be a good idea.

Er ... isn't that the argument for cryonics?

Comment author: shokwave 11 January 2011 03:08:35PM 9 points [-]

From four posts down:

...sign up for cryonics, something that has a much better than "small" chance of working.

That is, the chances of cryonics working are something like six or seven orders of magnitude better than the chances of winning the lottery.
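That gap can be sanity-checked with a quick back-of-the-envelope calculation. The specific probabilities below are illustrative assumptions, not figures from the thread:

```python
import math

# Illustrative figures only: a large-jackpot lottery ticket wins with
# probability on the order of 1e-8, while even a pessimistic estimate
# for cryonics working might be on the order of 1e-2.
p_lottery = 1e-8
p_cryonics_pessimistic = 1e-2

# How many orders of magnitude separate the two estimates?
gap = math.log10(p_cryonics_pessimistic / p_lottery)
# gap is about 6 under these assumed numbers
```

Under these assumed numbers the pessimistic end of the cryonics range is still about six orders of magnitude better than the lottery ticket.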

Comment author: Vaniver 11 January 2011 04:17:45PM 12 points [-]

That is, the chances of cryonics working are something like six or seven orders of magnitude better than the chances of winning the lottery.

While that shows the lottery is stupid, it doesn't show that cryonics has made it into smart territory. Things are further complicated by the fact that your odds of winning the lottery are known, certain, and printed on the ticket; your odds of winning the cryonics lottery are fundamentally uncertain.

Comment author: shokwave 12 January 2011 03:19:59AM 3 points [-]

your odds of winning the cryonics lottery are fundamentally uncertain.

I disagree with 'fundamentally'. It is no more uncertain than any future event; to call all future events fundamentally uncertain could be true on certain acceptable definitions of fundamental, but it's hardly a useful word in those cases.

Medical research and testing have been done on cryonics; we have a good idea of exactly what kinds of damage occur during vitrification, and a middling idea of what would be required to fix it. IIRC, cryonics institutions remaining in operation, the law not becoming hostile to cryonics, and possible civilization-damaging events (large-scale warfare, natural disasters, etc.) are all bigger concerns than the medicine involved. All of these concerns can be quantified.

Comment author: Vaniver 12 January 2011 03:28:07AM 3 points [-]

It is no more uncertain than any future event;

I am talking about the odds, and even if I were talking about the event, I feel pretty strongly that we can be more certain about things like the sun rising tomorrow than about me winning the lottery next week. My odds of winning the lottery with each ticket I buy are 1 in X plus/minus some factor for fraud/finding a winning ticket. That's a pretty low range of odds. My odds of being revived after cryonics have a much wider range, since the events leading up to it are far more complicated than removing 6 balls from an urn. Hence, fundamental uncertainty: the fundamental aspects of the problem are different in a way that leads to less certainty.

Comment author: shokwave 12 January 2011 11:07:45AM 3 points [-]

Yes, they have a much wider range, but all mathematical treatments of that range that I've seen come out showing the lower limit to be at least a few orders of magnitude greater than the lottery. Even though we are uncertain about how likely cryonics is to work, we are certain it's more likely than winning the lottery.

Comment author: JohnH 21 April 2011 10:23:33PM 1 point [-]

Unless you discover a way of gaming the lottery system.

Comment author: pnrjulius 30 June 2012 03:50:12AM 0 points [-]

Though that's actually illegal, so you'd have to include the chance of getting caught.

Comment author: gwern 11 January 2011 05:06:21PM 3 points [-]

And scientific research!

Comment author: Vaniver 11 January 2011 05:34:22PM 3 points [-]

If you define success as "increased knowledge" instead of "new useful applications," then the probability of success for doing scientific research is high (i.e. >75%).

Comment author: Will_Sawin 12 January 2011 04:44:20AM 2 points [-]

For individual experiments, it is often low, depending on the field.

Comment author: wedrifid 12 January 2011 12:20:54PM *  2 points [-]

You increase your knowledge every time you do an experiment. Just as you do every time you ask a question in Guess Who? At the very worst you discover that you asked a stupid question or that your opponent gives unreliable answers.

Comment author: Will_Sawin 13 January 2011 12:58:39AM 1 point [-]

The relevant probability is p(benefits > costs), not p(benefits > 0).

Comment author: wedrifid 13 January 2011 06:46:52AM 3 points [-]

Reading through the context confirms that the relevant probability is p(increased knowledge). I have not specified a position on whether the knowledge gained is sufficient to justify the expenditure of effort.

Comment author: Will_Sawin 13 January 2011 01:16:38PM 2 points [-]

Indeed. I forgot. Oops.

Comment author: pnrjulius 30 June 2012 03:51:02AM -1 points [-]

Often it clearly isn't; so don't do that sort of research.

Don't spend $200 million trying to determine if there are a prime number of green rocks in Texas.

Comment author: Eliezer_Yudkowsky 12 January 2011 11:43:32AM 6 points [-]

No. If everything else we believe about the universe stays true, and humanity survives the next century, cryonics should work by default. Are there a number of things that could go wrong? Yes. Is the disjunction of all those possibilities a large probability? Quite. But by default, it should simply work. Despite various what-ifs, ceteris paribus, adding carbon dioxide to the atmosphere would be expected to produce global warming, and you would need specific evidence to contradict that. In the same way, ceteris paribus, vitrification at liquid nitrogen temperatures ought to preserve your brain, and preserving your brain ought to preserve your you; despite various what-ifs, you would need specific evidence to contradict that, because it is implied by the generalizations we already believe about the universe.

Comment author: wedrifid 12 January 2011 12:42:19PM 7 points [-]

Everything you say after the "No" is true, but it doesn't support your contradiction of:

I can't think of one single case in my experience when the argument "It has a small probability of success, but we should pursue it, because the probability if we don't try is zero" turned out to be a good idea.

Er ... isn't that the argument for cryonics?

There is no need to defend cryonics here. Just relax the generalisation. I'm surprised you 'can't think of a single case in your experience' anyway. It took me 10 seconds to think of three in mine. Hardly surprising; such cases turn up whenever the payoffs multiply out right.

Comment author: shokwave 12 January 2011 12:55:06PM 3 points [-]

I think the kind of small probability Eliezer was talking about here (not that he was specific) is small in the sense that there is a small probability that evolution is wrong, a small probability that God exists, etc.

The other interpretation is something like there is a small probability you will hit your open-ended straight draw (31%). If there are at least two players other than you calling, though, it is always a good idea to call (excepting tournament and all-in considerations). So it depends on what interpretation you have of the word 'small'.
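The pot-odds reasoning above can be made concrete with a small expected-value sketch. The bet and pot sizes are hypothetical numbers chosen for illustration, and the calculation ignores later betting rounds:

```python
# An open-ended straight draw on the flop has 8 outs among 47 unseen cards.
outs = 8
# Probability of completing the straight by the river: one minus the
# probability of missing on both the turn and the river.
p_hit = 1 - ((47 - outs) / 47) * ((46 - outs) / 46)  # ~0.31

# Hypothetical sizes: one unit to call, three units already in the pot
# (e.g. two other callers plus earlier betting).
bet_to_call = 1.0
pot = 3.0

# EV of calling: win the pot when the draw hits, lose the call when it misses.
ev_call = p_hit * pot - (1 - p_hit) * bet_to_call

# Break-even pot odds are (1 - p_hit) / p_hit, roughly 2.2 : 1, so getting
# 3 : 1 makes the call profitable despite the "small" 31% chance.
```

This is why the second sense of "small" behaves differently: a 31% shot is a good bet whenever the payoff exceeds roughly 2.2 times the stake.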

By the first definition of small (vanishing), I can't think of a single argument that was a good idea. By the second, I can think of thousands. So, the generalisation is leaky because of that word 'small'. Instead of relaxing it, just tighten up the 'small' part.

Comment author: wedrifid 12 January 2011 01:46:52PM 2 points [-]

Redefinition not supported by the context.

Comment author: shokwave 12 January 2011 01:53:32PM 0 points [-]

I already noted that Eliezer was not specific enough to support that redefinition. I was offering an alternate course of action for Eliezer to take.

Comment author: wedrifid 13 January 2011 06:49:26AM 1 point [-]

That would certainly be a more reasonable position. (Except, obviously, where the payoffs were commensurately large. That obviously doesn't happen often. Situations like "3 weeks to live, can't afford cryonics" are the only kind of exception that spring to mind.)

Comment author: Eliezer_Yudkowsky 12 January 2011 07:56:15PM 1 point [-]

Name one? We might be thinking of different generalizations here.

Comment author: wedrifid 13 January 2011 07:09:09AM 6 points [-]

We might be thinking of different generalizations here.

Almost certainly. I am specifically referring to the generalisation quoted by David. It is, in fact, exactly the reasoning I used when I donated to the SIAI. Specifically, I estimate the probability of me or even humanity surviving for the long term if we don't pull off FAI to be vanishingly small (like that of winning the lottery by mistake, without buying a ticket), so I donated to support FAI research even though I think it to be, well, "impossible".

More straightforward examples crop up all the time when playing games. Just last week I bid open misere when I had a 10% chance of winning - the alternatives of either passing or making a 9 call were guaranteed losses of the 500 game.

Comment author: soreff 07 November 2011 07:19:00PM *  3 points [-]

If everything else we believe about the universe stays true, and humanity survives the next century, cryonics should work by default. Are there a number of things that could go wrong? Yes. Is the disjunction of all those possibilities a large probability? Quite. But by default, it should simply work.

Hmm, the "if humanity survives the next century" covers the uFAI possibility (where I suspect the bulk of the probability is). I'm taking it as a given that successful cryonics is possible in principle (no vitalism etc.). Still, even conditional on no uFAI, there is a substantial probability that cryonics, as a practical matter of actually reviving patients, will fail:

Technology may simply not be applied in that direction. The amount of specific research needed to actually revive patients may exceed the funding available.

Technology as a whole may stop progressing. We've had a lot of success in the last few decades in computing, less in energy, little in transportation, and what looks much like saturation in pharmaceuticals; and the lithography advances that have been driving computing look like they have maybe another factor of two to go (unless we get atomically precise nanotechnology, which mostly hasn't been funded).

Perhaps there is a version of "coming to terms with one's mortality" which isn't deathist, isn't theological, and isn't some vague displacement of one's hopes onto later generations, but simply says that the hope of increasing one's lifespan by additional effort isn't plausibly supported by the evidence, given the tradeoff of what one could instead do with that effort.

Comment author: soreff 10 November 2011 03:36:16PM *  3 points [-]

'scuse the self-follow-up...

One other thing that makes me skeptical about "cryonics should work by default":

A large chunk of what makes powerful parts of our society value (at least some) human life is their current inability to manufacture plug-compatible replacements for humans. Neither governments nor corporations can currently build taxpayers or employees. If these structures gained the ability to build human equivalents for the functions that they value, I'd expect policies like requiring emergency rooms to admit people regardless of ability to pay to be dropped.

Successful revival of cryonics patients requires the ability to either repair or upload a frozen, rather damaged, brain. Either of these capabilities strongly suggests the ability to construct a healthy but blank brain or uploaded equivalent from scratch; but this is most of what is needed to create a plug-compatible replacement for a person (albeit one requiring training, one time anyway, and then copying can be used...).

To put it another way: corporations and governments have capabilities beyond what individuals have, and they aren't known for using them humanely. They already are uFAIs, in a sense. Fortunately, for now, they are built of humans as component parts, so they currently can't dispense with us. If technology progresses to the point of being able to manufacture human equivalents, these structures will be free to evolve into full-blown uFAIs, presumably with lethal consequences.

If "by default" includes keeping something like our current social structure, with structures like corporations and governments present, I'd expect that for cryonics patients to be revived, our society would have to hit a very narrow window of technological capability. It would have to be capable of repairing or uploading frozen brains, but not capable of building plug-in human equivalents. This looks inherently improbable, rather than what I'd consider a default scenario.