
Years ago, I was speaking to someone when he casually remarked that he didn't believe in evolution.  And I said, "This is not the nineteenth century.  When Darwin first proposed evolution, it might have been reasonable to doubt it.  But this is the twenty-first century.  We can read the genes.  Humans and chimpanzees have 98% shared DNA.  We know humans and chimps are related.  It's over."

He said, "Maybe the DNA is just similar by coincidence."

I said, "The odds of that are something like two to the power of seven hundred and fifty million to one."

He said, "But there's still a chance, right?"

Now, there's a number of reasons my past self cannot claim a strict moral victory in this conversation.  One reason is that I have no memory of whence I pulled that 2^(750,000,000) figure, though it's probably the right meta-order of magnitude.  The other reason is that my past self didn't apply the concept of a calibrated confidence.  Of all the times over the history of humanity that a human being has calculated odds of 2^(750,000,000):1 against something, they have undoubtedly been wrong more often than once in 2^(750,000,000) times.  E.g. the shared genes estimate was revised to 95%, not 98%—and that may even apply only to the 30,000 known genes and not the entire genome, in which case it's the wrong meta-order of magnitude.
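(For concreteness, here is one way a figure of that meta-order of magnitude might be reconstructed. The independence assumption, the 1-in-4 per-base match chance, and the base counts below are illustrative guesses, not the original calculation.)

```python
import math

# Illustrative sketch, not the original calculation: assume each base pair
# matches by pure coincidence with probability 1/4, independently.
def log2_odds_against_coincidence(n_matching_bases, p_match=0.25):
    """log2 of the odds against n bases all matching by chance."""
    return -n_matching_bases * math.log2(p_match)

# About 375 million independently matching bases gives 2^750,000,000 : 1 odds,
# the meta-order of magnitude quoted above.
print(log2_odds_against_coincidence(375_000_000))        # 750000000.0

# Counting 95% of a ~3-billion-base genome instead gives a far larger exponent,
# which is the "wrong meta-order of magnitude" worry.
print(log2_odds_against_coincidence(int(0.95 * 3e9)))    # 5.7e9
```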

But I think the other guy's reply is still pretty funny.

I don't recall what I said in further response—probably something like "No"—but I remember this occasion because it brought me several insights into the laws of thought as seen by the unenlightened ones.

It first occurred to me that human intuitions were making a qualitative distinction between "No chance" and "A very tiny chance, but worth keeping track of."  You can see this in the OB lottery debate, where someone said, "There's a big difference between zero chance of winning and epsilon chance of winning," and I replied, "No, there's an order-of-epsilon difference; if you doubt this, let epsilon equal one over googolplex."

The problem is that probability theory sometimes lets us calculate a chance which is, indeed, too tiny to be worth the mental space to keep track of it—but by that time, you've already calculated it.  People mix up the map with the territory, so that on a gut level, tracking a symbolically described probability feels like "a chance worth keeping track of", even if the referent of the symbolic description is a number so tiny that if it was a dust speck, you couldn't see it.  We can use words to describe numbers that small, but not feelings—a feeling that small doesn't exist, doesn't fire enough neurons or release enough neurotransmitters to be felt.  This is why people buy lottery tickets—no one can feel the smallness of a probability that small.

But what I found even more fascinating was the qualitative distinction between "certain" and "uncertain" arguments, where if an argument is not certain, you're allowed to ignore it.  Like, if the likelihood is zero, then you have to give up the belief, but if the likelihood is one over googol, you're allowed to keep it.

Now it's a free country and no one should put you in jail for illegal reasoning, but if you're going to ignore an argument that says the likelihood is one over googol, why not also ignore an argument that says the likelihood is zero?  I mean, as long as you're ignoring the evidence anyway, why is it so much worse to ignore certain evidence than uncertain evidence?

I have often found, in life, that I have learned from other people's nicely blatant bad examples, duly generalized to more subtle cases.  In this case, the flip lesson is that, if you can't ignore a likelihood of one over googol because you want to, you can't ignore a likelihood of 0.9 because you want to.  It's all the same slippery cliff.

Consider his example if you ever find yourself thinking, "But you can't prove me wrong."  If you're going to ignore a probabilistic counterargument, why not ignore a proof, too?


This reminds me of a conversation from Dumb and Dumber.

Lloyd: What are the chances of a guy like you and a girl like me... ending up together?
Mary: Well, that's pretty difficult to say.
Lloyd: Hit me with it! I've come a long way to see you, Mary. The least you can do is level with me. What are my chances?
Mary: Not good.
Lloyd: You mean, not good like one out of a hundred?
Mary: I'd say more like one out of a million.
[pause]
Lloyd: So you're telling me there's a chance.

Good post.


However: apply 1:1E6 to the 260 million people in the US in 1994, and there are probably 130 couples like them.

Far from the "still not happening even if you flip a (weighted) coin every second since the big bang"- chance in the post, but since Lloyd probably did not do the math and just ignored the actual value... yep, classical example.

In practice, when people say "one in a million" in that kind of context, the actual probability is much higher than that. I haven't watched Dumb and Dumber, but I'd be surprised if Lloyd did not, actually, have a decent chance of ending up with Mary.

On one hand, we claim [dumb stuff using made up impossible numbers](https://www.lesswrong.com/posts/GrtbTAPfkJa4D6jjH/confidence-levels-inside-and-outside-an-argument) and on the other hand, we dismiss those numbers and fall back on there's-a-chancism.
These two phenomena don't always perfectly compensate for one another (as the examples in both posts show), but common sense is more reliable than it may seem at first. (That's not to say it's the correct approach, though.)

You can blame generalizing from fictional evidence too. One-in-a-million chances come true nine times out of ten. I read it in a book somewhere.

Several Pratchett novels play on this.

The OP's acquaintance may have been right not to be convinced by the "1 in an incomprehensibly big number" argument. The domain in question (genes, insentient nature) operates by cause and effect, so there is no such thing as the million other paths evolution could have followed to make men and chimps different.

However the simple evidence of the similarity in the genomes should have been very convincing.

([…] insentient nature) operates by cause and effect, so there is no such thing as the million other paths evolution could have followed to make men and chimps different.

You can't assume the universe is deterministic some of the time and not other parts of the time. According to chaos theory, a tiny change could've caused those million other paths. (But the probability of that, conditional on no Descartes' Demon or similar, is zero, since no events that _didn't_ occur have occurred.)

Many, many different possible sets of gene sequences would explain the world in which we live, therefore we should count them.

I don't understand why you invoke probability theory in a situation where it has no rhetorical value. Your conversation was a rhetorical situation, not a math problem, so you have to evaluate it and calibrate your speech acts accordingly-- or else you get nowhere, which is exactly what happened.

Your argument to your friend was exactly like someone justifying something about their own religion by citing their bible. It works great for people in your own community who already accept your premises. To anyone outside your community, you might as well be singing a tuneless hymn.

Besides that, the refuge available to anyone even within your community is to challenge the way that you have modeled the probability problem. If we change the model, the probabilities are dramatically changed. This is the lesson we get from Lord Kelvin's miscalculation of the age of the Sun, for instance. Arnold Sommerfeld once remarked that the hydrogen atom appeared to be more complex than a grand piano. In a way it is, but not so much once quantum mechanics was better understood. The story of the Periodic Table of Elements is also a story of trying different models.

Mathematics is powerful and pure. Your only little problem is demonstrating-- in terms your audience will value-- that your mathematics actually represents the part of the world you claim it represents. That's why you can't impose closure on everyone else using a rational argument; and why you may need a few other rhetorical tools.

Your confidence in your arguments seems to come from a coherence theory of truth: when facts align in beautiful and consistent ways, that coherence creates a powerful incentive to accept the whole pattern. Annoyingly, there turn out to be many ways to find or create coherence by blurring a detail here, or making an assumption there, or disqualifying evidence. For instance, you consistently disqualify evidence from spiritual intuition, don't you? Me, too. How can we be sure we should be doing that?

Why not learn to live with that? Why not give up the quest for universal closure, and settle for local closure? That's Pyrrhonian skepticism.

Why use probability even in conversations with people who don't understand probability?

Because probability is TRUE. And if people keep hearing about it, maybe they'll actually try to start learning about it.

You're right of course that this needs to be balanced with rhetorical efficiency---we may need to practice some Dark Arts to persuade people for the wrong reasons just to get them to the point where the right reasons can work at all.

The rest of your comment dissolves into irrationality pretty quickly. We do in fact know to very high certainty that "spiritual intuition" is not good evidence, and if you really doubt that we can deluge you with gigabytes of evidence to that effect.

Pyrrhonism is sometimes equated with skepticism, in which case it's stupid and self-defeating; and sometimes it's equated with fallibilism, in which case it's true and in some cases even interesting (many people who cite the Bible's infallibility do not seem to understand that relying on their assessment would be asserting their infallibility), but usually is implicit in the entire scientific method. I don't know which is historically closer to what Pyrrho thought, but nor do I particularly care.

The probability of the sequence : 7822752094267846045605461355143507490091149797709871032440019209442625103982294206404126088435480346

being generated by chance is one in a googol.

Is there anyone who wants to conclude that it was not generated by chance?

The simple point: Eliezer's answer to the questioner that no, there's not still a chance, was wrong. In order to draw such a conclusion, he must first show that some other hypothesis will give a greater probability, and this other hypothesis must also have a sufficiently high prior probability.

Naturally, it is easy to satisfy these conditions in the debate between the evolution hypothesis and the random-DNA-coincidence hypothesis. But Eliezer did not do this. He invalidly attempted to conclude from the mere probability of the coincidence hypothesis, without any comparison with another hypothesis, that the coincidence hypothesis was false.

The probability of generating THAT SEQUENCE is enormously, nigh-incomprehensibly tiny.

The probability of generating A SEQUENCE LIKE THAT (which appears as patternless, which contains no useful information, which has a very high information entropy) is virtually 1.

If I generated another sequence and it turned out exactly identical to yours, that would indeed be compelling (indeed, almost incontrovertible) evidence that something other than random chance was at work.
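A minimal sketch of that distinction (the 100-digit length matches the sequence quoted above; everything else is illustrative):

```python
import random

n_digits = 100

# Probability of reproducing one *particular* 100-digit sequence on an
# independent uniform draw: 10^-100, one in a googol. It underflows ordinary
# floats, so keep only the exponent.
log10_p_exact = -n_digits
print(log10_p_exact)          # -100, i.e. P(that exact sequence) = 10^-100

# Probability of getting *some* patternless-looking 100-digit sequence: ~1.
draw = "".join(random.choice("0123456789") for _ in range(n_digits))
print(draw)                   # whatever this is, it was also a one-in-a-googol outcome
```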

Ironically, in the future we might learn that the first replicator required "chance" in the same order of magnitude.

In an infinite universe, a 1 in 10^^10 event is guaranteed to happen infinitely many times.

It's not guaranteed… but that's pedantry-about-infinity: the chance of it _not_ happening once is zero, the chance of it _not_ happening twice is zero, and so on, and so on.
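As a worked version of that pedantry, assuming independent trials each with the same fixed probability p > 0:

```latex
P(\text{never occurs in } n \text{ independent trials}) = (1-p)^n \to 0 \quad \text{as } n \to \infty,
\qquad
P(\text{occurs at least } k \text{ times}) \to 1 \quad \text{for every finite } k.
```

So the event happens with probability 1, which is still not the same thing as a logical guarantee.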

Unknown: What do we mean by "chance"? That it has a very small a priori probability... The evidence is given: the two sequences are similar. We can also assume that the evolution theory has a bigger a priori probability than the chance of getting that sequence. These insights were all included in the post, I think. So applying Bayes' theorem, we get that the evolution version has a much bigger a posteriori probability, so we don't have to show that separately.

There are a lot of events which have a priori probabilities of that order of magnitude... But we should also have strong evidence to shift that to a plausible level. Yet a lot of people think: "there was only a very small chance of this happening, but it happened => things with very small chances do happen sometimes."
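In odds form, the Bayesian bookkeeping the parent comment describes looks like this (the symbols are editorial shorthand: E is the evolution hypothesis, C the coincidence hypothesis):

```latex
\underbrace{\frac{P(E \mid \text{similar DNA})}{P(C \mid \text{similar DNA})}}_{\text{posterior odds}}
=
\underbrace{\frac{P(\text{similar DNA} \mid E)}{P(\text{similar DNA} \mid C)}}_{\text{likelihood ratio}}
\times
\underbrace{\frac{P(E)}{P(C)}}_{\text{prior odds}}
```

With a likelihood ratio on the order of $2^{750{,}000{,}000}$ and any remotely reasonable prior odds, the posterior odds overwhelmingly favor E; the work is done by the comparison of hypotheses, not by the small number in isolation.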

James,

I agree that there's a difference between rhetoric and pure maths. However, you can change models, you can revise probabilities, update beliefs and argue the toss all day, but it doesn't make humans and chimps any less related, any more than you can argue the grass red. Kelvin's model of the Sun is a good example. However, for it to be applicable, please tell us the chance that some future discovery will demonstrate that the similarities between human and chimp genomes are just a coincidence. See Eliezer's reference to the 19th vs the 21st century in the post.

The statement 'there's still a chance, right?' is mathematically valid in pretty much every case. The statement 'humans are genetically related to chimps' is rhetoric, and not any sort of Technical Argument in and of itself. However, I know which of these two has more relevance and meaning for me.

Hi,

For people who have a cryonics contract, or intend to get one in the future, fate may literally be hanging on a thin probability. The probabilities of revival, of maintaining sufficient memory continuity, and of a subsequent life worth living are all small. The reason people go in for cryonics (even when the technology was not very advanced) is that, small though the probability is, it is not zero. So, I would be very wary of using an epsilon = zero argument.

And about evolution, isn't it just a matter of time before we will be able to work backward genetically from any of today's species to the original ancestors? We know the genome, we can work out the theoretical mutations, we can test and see which of these possible mutations had a high probability. I personally don't worry about creationists for too long, because eventually we will have genetically engineered, re-created, irrefutable evidence of evolution.

regards, Prakash

I know I'll probably trigger a flamewar...

But I actually don't think cryonics is worth the cost. You could be using that money to cure diseases in the Third World, or investing in technology, or even friendly-AI research if that's your major concern, and you will almost certainly achieve more good according to what I assume is your own utility function (as long as it doesn't value a one-in-a-billion chance of being revived as exactly you over, say, the lives of 10,000 African children). Also, transhumans will presumably judge the same way, and decide that it's not worth it to research reviving you when they could be working on a Dyson Sphere or something.

Frankly, from what we know about cognitive science, most of the really useful information about your personality is going to disappear upon freezing anyway. You are a PROCESS, not a STATE; as such, freezing you will destroy you, unless we've somehow kept track of all the motions in your brain that would need to be restarted. (Assuming that Penrose is wrong and the important motions are not appreciably quantum. If quantum effects matter for consciousness, we're really screwed, because of the Uncertainty Principle and the no-cloning theorem.) Preserving a human consciousness is like trying to freeze a hurricane.

TLDR with some rhetoric: I've seen too many frozen strawberries to believe in cryonics.

My impression is that there could be a short-term loss from cryonics-- something like having a mild concussion-- but that the vast majority of your memories would survive. Am I missing something?

I know I'll probably trigger a flamewar...

Nitpick: LW doesn't actually have a large proportion of cryonicists, so you're not that likely to get angry opposition. As of the 2011 survey, 47 LWers (or 4.3% of respondents) claimed to have signed up. There were another 583 (53.5%) 'considering it', but comparing that to the current proportion makes me skeptical they'll sign up.

Also, transhumans will presumably judge the same way, and decide that it's not worth it to research reviving you when they could be working on a Dyson Sphere or something.

Diverse transhumans will have diverse interests. Your inclination to think everyone in the future will all focus on one big project to the exclusion of all else is predicted by near/far theory.

Frankly, from what we know about cognitive science, most of the really useful information about your personality is going to disappear upon freezing anyway.

We know it does not disappear when electrical signals cease in the brain due to hypoxia and hypothermia. Furthermore if you look at vitrified brain tissue through an electron microscope you can see the neurons still connected to each other, which is definitely information. Useful? I'm betting it is.

You are a PROCESS, not a STATE; as such, freezing you will destroy you, unless we've somehow kept track of all the motions in your brain that would need to be restarted. ... quantum ... Preserving a human consciousness is like trying to freeze a hurricane.

Your speculation here has been empirically falsified already by the hypothermia cases I just mentioned. Human consciousness routinely stops and resumes no worse for the wear, during sleep and anesthesia.

There's nothing precluding it being both a process and a state, in fact every process on my computer has a state that can be saved and resumed. If you are computer-literate, I don't see why you would think this is much of an argument.

There is also lots of empirical evidence that the brain is an orderly system, not a random one like a hurricane. (This is important if we have to do repairs.)

TLDR with some rhetoric: I've seen too many frozen strawberries to believe in cryonics.

Were they vitrified strawberries? Important difference there.

"The odds of that are something like two to the power of seven hundred and fifty million to one."

As Eliezer admitted, it is a very bad idea to ascribe probabilities like this to real world propositions. I think that the strongest reason is that it is just too easy for the presuppositions to be false or for your thinking to have been mistaken. For example, if I gave a five line logical proof of something, that would supposedly mean that there is no chance that its conclusion is false given the premisses, but actually the chance that I would make a logical error (even a transcription error somewhere) is at least one in a billion (~ 1 in 2^30). There is at least this much chance that either Eliezer's reasoning or the basic scientific assumptions were seriously flawed in some way. Given the chance of error in even the simplest logical arguments (let alone the larger chance that the presuppositions about genes etc are false), we really shouldn't ascribe probabilities smaller than 1 in a billion to factual claims at all. Better to say that the probability of this happening by chance given the scientific presuppositions is vanishingly small. Or that the probability of it happening by chance pretty much equals the probability of the presuppositions being false.

Toby.

Toby,

What if there are more than a billion known options?

Carl, that is a good point. I'm not quite sure what to say about such cases. One thing that springs to mind, though, is that in realistic examples you couldn't have investigated each of those options to see if it was a real option, and even if you could, you couldn't be sure of all of that at once. You must know it through some more general principle whereby there is, say, an option per natural number up to a trillion. However, how certain can you be of that principle? That it isn't really only up to a million?

Hmmmm... Maybe I have an example that I can assert with confidence greater than one minus a billionth:

'The universe does not contain precisely 123,456,678,901,234,567,890 particles.'

I can't think of a sensible, important claim like Eliezer's original one though, and I stand by my advice to be very careful about claiming less than a billionth probability of error, even for a claim about the colour of a piece of paper held in front of you.

Toby.

Ben Jones: "The statement 'there's still a chance, right?' is mathematically valid in pretty much every case."

Exactly. The best answer would have been something like: "There's still a chance for everything. There is no such thing as a zero probability in the real world. Maybe I can cure everybody's cancer by wishing for it very, very hard. Sure, this thought violates everything we know about physics, but there is still a chance, no?"

Toby, I think that's a very good point. There is a difficulty in analyzing cases which involve a very large number of alternatives, such as my example of the number selected with odds of one in a googol. But I think this difficulty is much like the difficulty of discussing the odds that 2 and 2 make 5; surely this cannot be assigned a probability of zero, and yet if it is assigned any positive probability, you can easily argue that it has a probability of unity.

I think the way to deal with this is to say that a statement can have an indefinitely small calculated probability, but on a human level there is a limit much as you stated, and this should be applied in retrospect even to our calculated probabilities; i.e. even though there is a calculated probability of one in a googol that the particular sequence I posted above could be generated by chance, there is a human probability of at least one in a billion that all of my calculations are wrong anyway.

This is one reason why many of Eliezer's claims are overconfident: he seems to identify a calculated probability, or what he supposes the calculated probability would be if there was one, with a human probability.

But I think this difficulty is much like the difficulty of discussing the odds that 2 and 2 make 5; surely this cannot be assigned a probability of zero, and yet if it is assigned any positive probability, you can easily argue that it has a probability of unity.

Given certain assumptions, we can easily assign that a probability of zero.

The problems arise when we lose sight of the fact that we made assumptions.

But what I found even more fascinating was the qualitative distinction between "certain" and "uncertain" arguments, where if an argument is not certain, you're allowed to ignore it. Like, if the likelihood is zero, then you have to give up the belief, but if the likelihood is one over googol, you're allowed to keep it.

I think that's exactly what's going on. These people you speak of are mentally dealing with social permission, not with probability algebra. The non-zero probability gives them social permission to describe it as "it might happen", and the detail that the probability is 1 / googolplex stands a good chance of getting ignored, lost, or simply not appreciated. (Similarly for the tiny uncertainty.)

And I don't just mean that it works in conversation. The person who makes this mistake has probably internalized it too.

It struck me that way when I read your opening anecdote. Your interlocutor talked like a lawyer who was planning on bringing up that point in closing arguments - "Mr Yudkowsky himself admitted there's a chance apes and humans are not related" - and not bringing up the minuscule magnitude of the chance, of course.

It seems like it should be impossible to calculate a fudge factor into your calculations to account for the possibility that your calculations are totally wrong, because once you calculate it in it becomes part of your calculations, which could be totally wrong. Maybe I'm missing something here that would become apparent if I actually sat down and thought about the math, so if anybody has already thought about the math and can save me the time, I would appreciate it.

Nominull: It seems like it should be impossible to calculate a fudge factor into your calculations to account for the possibility that your calculations are totally wrong, because once you calculate it in it becomes part of your calculations, which could be totally wrong.

But wrong in which direction? If you don't know, it cancels out of the expectation of the probability. You just have to achieve a state where your meta-uncertainty seems balanced between both sides. Don't worry about justifying it to anyone, and particularly not justifying it to an ideal philosopher of perfect emptiness. Just give it your honest best shot, as a guesser.

Even from a creationist perspective, it doesn't make sense to attribute the similarities to coincidence. A better explanation would be deliberate code reuse.

Of course, from what we know about genetics, God is a very kludgy engineer.

Yes.  Of course I see a lot of the same kinds of weirdness in the low-level implementations of computer programs built with high-level code generators.

Whether our genome was created by a pre-existing intelligence using some kind of advanced creature creation software or arose entirely out of selection pressures over time is difficult to gather evidence on, let alone prove.  But from a computer programmer's point of view it's a pretty awe-inspiring system.  Major adaptability AND major stability AND self-assembling.  

It would be like finding five million lines of computer code stashed away that's capable of rewriting itself for piloting anything from a motorcycle to the space shuttle.  The fact that it's a giant ball of muddy spaghetti makes it hard to manipulate for your own purposes, but doesn't make the end result any less impressive.

I think what's actually going on here is "arguments are soldiers": If the similarity between chimps and humans occurred totally by accident, that would be bad for evolution; evolution is the enemy; therefore I should argue that maybe the similarity between chimps and humans occurred totally by accident.

Never do they stop to think that not only is this obviously untrue, it would also undermine THEIR theory as well. The implicit assumption is that anything bad for my opponent is good for me and vice-versa.

Actually, what exactly are the arguments/evidence that distinguish these two hypotheses?

  • Humans and apes evolved from a common ancestor.
  • God tweaked the ape (or common ancestor) blueprint to create the human blueprint.

I'm pretty new at evolutionary biology so I don't really know... anyone want to point me in the right direction?

And that's kind of the problem with assigning importance to the argument.  If our universe is not, in fact, the top-level reality, and has some kind of master controlling its every detail, we necessarily only get to see his influence to the extent that he wishes us to...

Natural selection molding creatures to match the universe?  We can see that happening pretty well.  

The universe itself being molded to produce a particular type of creature?  How exactly would we even be able to notice that?  

The only thing I can personally think of is that, in such a scenario, a universe where the inhabitants somehow developed the ability to more correctly divine the will of their creator from subtle clues and/or racial memory would be less likely to get mushed up and tossed in the wastepaper basket...

Or religion could be just a random side-effect of evolution that merely doesn't hurt us badly enough to offset the power of our brains...

Perhaps if we someday discover other, unrelated sapient life and it also has religion...  Still wouldn't be proof, but likely to be the most conclusive evidence we could get without either a time machine to go back and see where the old religions really started or some way to look at our universe from outside.

In your friend's defense, I could turn that around:


Years ago, I was speaking to someone when he casually remarked that he believed in evolution. And I said, "This is not the nineteenth century. When Darwin first proposed evolution, it might have been reasonable to believe in it. But this is the twenty-first century. We can look at cells. Cells are hideously complex. It's over."

He said, "Maybe all the features arose by coincidence."

I said, "The odds of that are something like two to the power of seven hundred and fifty million to one."

He said, "But there's still a chance, right?"

Unknown, I agree entirely with your comments about the distinction between the idealised calculable probabilities and the actual error prone human calculations of them.

Nominull, I think you are right that the problem feels somewhat paradoxical. Many things do when considering actual human rationality (a species of 'bounded rationality' rather than ideal rationality). However, there is no logical problem with what you are saying. For most real world claims, we cannot have justifiable degrees of belief greater than one minus a billionth. Moreover, I don't have a justifiable degree of belief greater than one minus a billionth in my last statement being true (I'm pretty sure, but I could have made a mistake...). This lack of complete certainty about our lack of complete certainty is just one of the disadvantages of having resource bounds (time, memory, accuracy) on our reasoning. On a practical note, while we cannot completely correct ourselves, merely proposing a safe upper bound to confidence in typical situations, memorizing it as a simple number, and then using it in practice is fairly safe, and likely to improve our confidence estimates.

Toby.
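A minimal sketch of that "safe upper bound" proposal (the one-in-a-billion figure is Toby's; the blending rule and the 0.5 fallback are simplifying assumptions):

```python
# Toby's "safe upper bound on confidence", sketched. The 1e-9 floor is his
# suggested figure; the blending rule below is a simplifying assumption.
P_CALCULATION_WRONG = 1e-9     # chance the whole derivation is broken

def human_probability(calculated_p, fallback=0.5):
    """Blend a calculated probability with the chance the calculation is wrong,
    falling back on total ignorance (0.5, assumed) in the error case."""
    return (1 - P_CALCULATION_WRONG) * calculated_p + P_CALCULATION_WRONG * fallback

print(human_probability(1e-100))   # ~5e-10: never far below the error floor
print(human_probability(0.3))      # ~0.3: ordinary probabilities barely move
```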

Andy McKenzie -- that was my first thought too. Folks can view the scene here.

Eliezer, do you expect to be right more or less the next googol times you calculate or estimate a probability of one in a googol? If not, then for such probabilities we do know in which direction the estimate is likely to be mistaken, and so we can correct it, by the means suggested by Toby.

Lemmus,

"Maybe I can cure everybody's cancer by wishing for it very, very hard."

Give it a go. Hell, spend the rest of your life giving it a go. You'll be engaging with reality in much the same way as Eliezer's acquaintance. We ascribe probabilities to things to inform our actions.

Actually, isn't this called "prayer"?

Rather than estimating a probability, it would have been more interesting to ask "What emotional need are you trying to meet with this?"

If Mr Still-a-chance yearns for "souls go to heaven and meet God" why does he care about evolution? Isn't the soul the magic, special sauce that converts an ordinary animal body into a human? How does denying evolution help him?

Meanwhile, 20,000,000 years in the future, a multi-generation interstellar space ship has set up a colony on a distant planet with existing biology. The colony collapses but man does not go extinct, and 100,000 years later they have re-established a civilisation of sorts.

They find that man is not an animal. His biology is entirely distinct. Which goes well with their myths of a double fall, from the sky to the ground and from the golden age to barbarism. But what do they really gain when they find that they do not have genealogical ties to the animals around them? Why is our far-future Mr Still-a-chance the 2nd so pleased?

I find myself unable to imagine how Mr Still-a-chance would have answered, which piques my curiosity.

One trick that might help here is not considering beliefs themselves but actions upon those beliefs. Just because you have 0.0001% certainty the Moon is made of green cheese doesn't mean you can make 0.0001% of a spaceship and hop over for a meal - you have to either build the ship or not build it, and the expected return from building is going to be somewhat small. Likewise, just because there's a chance that humans and chimpanzees have 98% shared DNA entirely by accident does not mean it's rational to actually act on that chance, even if you're going through the mental effort of actually considering the possibility.

Granted, this approach is likely to just confuse people, perhaps making them think they are "allowed" to hold unlikely beliefs as long as they don't act on said beliefs... but maybe it's worth a try in the right situation?
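A toy sketch of that expected-value point (every number is invented; 1e-6 is the comment's 0.0001%):

```python
# Acting on a tiny belief is an all-or-nothing decision, so compare expected values.
p_cheese = 1e-6                  # 0.0001% credence that the Moon is green cheese
value_of_meal = 100.0            # payoff if the belief pans out (arbitrary units)
cost_of_spaceship = 1e9          # cost of acting on the belief (arbitrary units)

ev_build = p_cheese * value_of_meal - cost_of_spaceship
ev_dont_build = 0.0
print(ev_build, ev_dont_build)   # building loses about a billion in expectation
```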

This is deep wisdom. It also has a lot of resonance with the issue of risk, and what sorts of risks it is rational to take.

(And don't tell me "expected utility", because either the utility is what you'd straightforwardly expect---10000 people = 1 person * 10000---and you run into all sorts of weird conclusions, or else you do what von Neumann and Morgenstern did and redefine "utility" to mean "whatever it is you use to choose". Great; now what do I use to choose?)

@Alan Crowe:

FWIW, having tried that tack a few times, I've always been disappointed. The answer is always along the lines of "I'm not meeting any psychological need, I'm searching sincerely for the truth."

People aren't usually honest enough or self-aware enough to answer this sort of question.

I think James Bach was on the right track here, but did not take this far enough. Eliezer's interlocutor was not able to really articulate his argument. Properly argued, probability is completely irrelevant.

So, let us contemplate the position of a serious, hard-science creationist, and I hate to say it, but such people exist. This individual can fully agree that how a given body grows and develops depends on its DNA structure, so it is not surprising that species which appear morphologically and behaviorally similar, such as the various canine or feline species, or for that matter chimps and humans, will have very similar DNA structures, even if the older creationists got all in fits about having a monkey for an uncle, and so forth.

The issue then is how this came to be. The evolutionist says that it is due to evolution from common ancestors and so forth. The scientific creationist says no: this simply reflects that the intelligent designer set them up this way, because DNA controls the growth of individual entities, so similar-appearing and similar-behaving species will have more similar DNA, and God (or The Intelligent Designer) made it this way fully consciously, in accord with the laws of science, which presumably the same Entity is also fully aware of, whether or not this Entity in fact set up those laws his or herself.

Eliezer: While you didn't specifically say that the guy you were arguing with was a creationist... The creationists I find myself arguing with wouldn't say that the chimp and human DNA is similar by coincidence. My creationists would say that the DNA is similar because:

1) DNA is what God used to program characteristics into living things.
2) God decided to make chimpanzees and humans similar.
3) To make them similar, He gave them similar DNA.

Eliezer: hey. how would your response change if arguing with these guys? also, you're awesome. just thought I would let you know.

Let's say you and your friend Suzie bumped into a guy on the street. This guy is holding a red marble in his right hand, and a velvet bag in his left. You and Suzie ask the man what is in the velvet bag. You realize very quickly that he doesn't speak your language. You take the velvet bag to see for yourself what is inside. It contains 19 blue marbles. In fact, it has a sticker on the outside of the bag that says "Contents: 1 red marble, 19 blue marbles". Suzie wonders out loud whether the man looked in the bag and specifically pulled out the red marble or simply pulled out a random marble without looking and it just so happened to be the red marble. Suzie is very, very attractive, by the way.

"Well," you respond to her wonderings, "the chances someone would pull out the red marble if they weren't looking are 1 in 20."

Suzie looks at you, "Hmmm. Well, yes, that is indeed the probability of randomly pulling out the single red marble from a bag of 20 marbles. But I'm not sure that's the probability we are looking for in this situation. Our situation is that the man is actually holding the red marble. Something tells me the probability the man did in fact pull the red marble out randomly, given the fact that the man is holding the red marble, is different than the before-the-action probability that he would pull the red marble randomly from the bag."

Is Suzie's suspicion correct? I would really like to hear Eliezer's answer to this.

Suzie's suspicion is correct in general, though the two could work out the same in certain cases.

We know the probability P(draw red | choose random) = 1/20. What we need to know is P(choose random), where P(~choose random) is the prior probability of cheating. We also need to know P(draw red | ~choose random), the probability of drawing red if you cheat (presumably 1, but not necessarily---maybe it's an unreliable cheating method). From all those, we can solve the system and compute P(chose random | drew red).

What you're asking is whether P(chose random | drew red) = P(draw red | choose random); and in general this is not the case.

Indeed, we get to be so Bayesian we actually use Bayes's Theorem explicitly: P(A|B) = P(B|A) * P(A)/P(B)

Unless the priors are equal, P(A) = P(B) [that is, P(draw red) = P(choose random)], those two conditional probabilities will be distinct.

Suzie's suspicion is correct. According to Bayes's theorem, the probability that he pulled it out randomly would be .05 x prior probability that he would pull out a marble randomly / prior probability that he would be found holding the red marble.

In this case it is rather difficult to calculate an exact number. But in Eliezer's case, an exact number is unnecessary; the ".05" in his case is so low that he assumes that the exact number will also be low, regardless of the particular values assigned to "prior probability of pulling out a marble randomly" and "prior probability that he ends up with a red marble."
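Putting illustrative numbers on the two comments above (the prior that the man drew randomly is an assumed placeholder; neither comment commits to a value):

```python
# Bayes' theorem for Suzie's question. The 0.9 prior is a placeholder assumption.
p_red_given_random = 1 / 20   # one red marble among twenty
p_red_given_cheat = 1.0       # assume deliberate selection always finds the red one
p_random = 0.9                # prior that he drew without looking (assumed)

p_red = p_red_given_random * p_random + p_red_given_cheat * (1 - p_random)
p_random_given_red = p_red_given_random * p_random / p_red
print(p_random_given_red)     # ~0.31: seeing the red marble shifts a 90% prior to ~31%
```

Whatever prior you assume, P(chose random | drew red) generally differs from the bare 1/20, which is Suzie's point.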

This reminds me of when I was trying to see if it would be a good idea to buy a lottery ticket. Surely, I thought, I wouldn't miss the weekly dollar for a chance at living a life free of having to worry about what I do for money.

But then I thought to visualize for myself the silliness of spending even one dollar a week on the chances of the lottery. Would you ever expect, even in a hundred years, the lottery numbers of one week to be the exact same as the last week's? Then you should expect no different of your own ticket. I realized then that I would much rather have a definite candy bar instead.

Do note that there are lottery systems that it's possible to game, if you have sufficient funds to buy tickets and the jackpot has gone high enough.

For any system where people pick their own numbers, most people tend to pick numbers that are emotionally significant.  Birth dates and so forth.  That seriously constrains the pool of numbers that will generate a winner.  Depending on the system, if the jackpot goes high enough it's possible to buy a large number of tickets that aren't part of the typical distribution of numbers people pick and have a reasonable chance of making back more than you spent.

Does take more than a dollar a week in capital outlay though.
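A toy sketch of that reasoning (every figure below is invented; real lotteries differ):

```python
# Expected value per ticket when the jackpot may be split among other winners.
p_win = 1 / 14_000_000        # invented single-ticket jackpot odds
jackpot = 20_000_000.0
ticket_cost = 1.0

def expected_value(expected_other_winners):
    share = jackpot / (1 + expected_other_winners)   # split the pot if shared
    return p_win * share - ticket_cost

print(expected_value(2.0))    # popular numbers, often shared: about -0.52 per ticket
print(expected_value(0.1))    # unpopular numbers, rarely shared: about +0.30 per ticket
```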

When people say "every little bit counts!", I try to argue, "yeah, but only a little." Folks concern themselves with possibility when they should be concerned with probability; with existence when they should be concerned with magnitude.

There are several problems here. First of all, I almost always disregard people who make claims like yours (2^750,000,000:1) about the real world, because they are almost always wrong or misleading. Specifically, while that sort of odds can exist, it almost never exists in a way that would win someone points in a conversation. Such claims are often lies, miscalculations, or misleading. While your friend was equally wrong to consider those odds compatible with "coincidence" as the word is used in mathematics, how exactly do you expect to calculate the other sense of the word (maybe by "coincidence" God decided to use mostly the same DNA in humans and chimps)? Is it really fair to say that Probability(chimps and humans share 95-98% DNA given that God exists) = 2^750,000,000:1 or anything remotely close?

Is it really fair to claim that your friend was saying, "Well since you claim 2^750,000,000:1 odds instead of zero, I'm going to go with those odds" as opposed to "Even if you said the odds were zero, I wouldn't believe you because there's a chance you're wrong"? There's plenty of examples of people being certain of things, yet being wrong -- even when they use math.

Is it rational to assign even claimed 10^9:1 odds anything remotely close to actual 10^9:1 odds? Seldom, I should say. I'd give such a claim a probability of something like 0.1%-75% of being flat out wrong, based on the difficulty of the problem, the contentiousness of the problem, my respect for the ability and integrity of the person making the claim, and whether the claim agrees or disagrees with things I know or think I know. Now ideally, if I have the time and capability, I would try doing some of those calculations myself and think a while as to whether those are even the correct calculations, but often claims won't be worth that level of effort.

There's a post by Yvain which addresses more or less this issue.

Time for nitpicking... "Consider his example if you ever find yourself thinking, “But you can’t prove me wrong.” If you’re going to ignore a probabilistic counterargument, why not ignore a proof, too?" - ...versus your own argument about certainty being infinite. In cardinal number theory, the highest infinity (be it aleph-zero or continuum or 2^continuum or whatever) trumps any lower numbers (you can throw out all the rational numbers, whose number is aleph-zero, and [0;1] will still have the cardinality of the continuum), including all natural numbers, and only an infinity of the same size or larger may compete. And I believe that the usual, single-infinity models do the same. If we _could_ have infinite certainty, it would be end-of-story, allowing for no possibility to "put the weight down - yes, down". The problem is, we can't.
