While searching creationist websites for the half-remembered argument I was looking for, I found what may be my new favorite quote: "Mathematicians generally agree that, statistically, any odds beyond 1 in 10 to the 50th have a zero probability of ever happening."
That reminds me of one of my favourites, from a pro-abstinence blog:
When you play with fire, there is a 50/50 chance something will go wrong, and nine times out of ten it does.
In Terry Pratchett's Discworld series, it is a law of narrative causality that 1 in a million chances work out 9 times out of 10. Some characters once made a difficult thing they were attempting artificially harder, to try to make the probability exactly 1 in a million and invoke this trope.
This question comes up a lot! A fan has come up with a very sensible and helpful chart, in many languages no less! http://www.lspace.org/books/reading-order-guides/
When you play with fire, there is a 50/50 chance something will go wrong, and nine times out of ten it does.
They are only admitting their poor calibration.
Note that someone just gave a confidence level of 10^4478296 to one and was wrong. This is the sort of thing that should never ever happen. This is possibly the most wrong anyone has ever been.
I was in some discussion at SIAI once and made an estimate that ended up being off by something like three hundred trillion orders of magnitude. (Something about giant look-up tables, but still.) Anyone outdo me?
Wow. The worst I've ever done is giving 9 orders of magnitude inside my 90% confidence interval for the velocity of the earth and being wrong. (It turns out the earth doesn't move faster than the speed of light!)
Surely declaring "x is impossible", before witnessing x, would be the most wrong you could be?
I take more issue with the people who incredulously shout "That's impossible!" after witnessing x.
I don't. You can witness a magician, e.g., violating conservation of matter, and still declare "that's impossible!"
Basically, you're stating that you don't believe that the signals your senses reported to you are accurate.
The colloquial meaning of "x is impossible" is probably closer to "x has probability <0.1%" than "x has probability 0"
What should we take for P(X|X) then?
The one that I confess is giving me the most trouble is P(A|A). But I would prefer to call that a syntactic elimination rule for probabilistic reasoning, or perhaps a set equality between events, rather than claiming that there's some specific proposition that has "Probability 1".
and then
Huh, I must be slowed down because it's late at night... P(A|A) is the simplest case of all. P(x|y) is defined as P(x,y)/P(y). P(A|A) is defined as P(A,A)/P(A) = P(A)/P(A) = 1. The ratio of these two probabilities may be 1, but I deny that there's any actual probability that's equal to 1. P(|) is a mere notational convenience, nothing more. Just because we conventionally write this ratio using a "P" symbol doesn't make it a probability.
I'm a bit irked by the continued persistence of "LHC might destroy the world" noise. Given no evidence, the prior probability that microscopic black holes can form at all, across all possible systems of physics, is extremely small. The same theory (String Theory[1]) that has led us to suggest that microscopic black holes might form at all is also quite adamant that all black holes evaporate, and equally adamant that microscopic ones evaporate faster than larger ones by a precise factor of the mass ratio cubed. If we think the theory is talking complete nonsense, then the posterior probability of an LHC disaster goes down, because we favor the ignorant prior of a universe where microscopic black holes don't exist at all.
Thus, the "LHC might destroy the world" noise boils down to the possibility that (A) there is some mathematically consistent post-GR, microscopic-black-hole-predicting theory that has massively slower evaporation, (B) this unnamed and possibly non-existent theory is less Kolmogorov-complex and hence more posterior-probable than the one that scientists are currently using[2], and (C) scientists have completely overlooked this unnamed and possibl...
I wonder how the anti-LHC arguments on this site might look if we substitute cryptography for the LHC. Mathematicians might say the idea of mathematics destroying the world is ridiculous, but after all we have to trust that all mathematicians announcing opinions on the subject are sane, and we know the number of insane mathematicians in general is greater than zero. And anyway, their arguments would (almost) certainly involve assuming the probability of mathematics destroying the world is 0, so should obviously be disregarded. Thus, the danger of running OpenSSH needs to be calculated as an existential risk taking in our future possible light cone. (Though handily, this would be a spectacular tour de force against DRM.) For an encore, we need someone to calculate the existential risk of getting up in the morning to go to work. Also, did switching on the LHC send back tachyons to cause 9/11? I think we need to be told.
One might be tempted to respond "But there's an equal chance that the false model is too high, versus that it is too low."
I'm not sure why one might be tempted to make this response. Is the idea that, when making any calculation at all, one is equally likely to get a number that is too big as one that is too small? But then, that's before you have looked at the number.
Yet another counter-response is that even if the response were true, the false model could be much too high, but it can only be slightly too low, since 1-10^-9 is quite close to 1.
First, great post. Second, general injunctions against giving very low probabilities to things seem to be taken by many casual readers as endorsements of the (bad) behavior "privilege the hypothesis" - e.g. moving the probability that God exists from very small to moderately small. That's not right, but I don't have excellent arguments for why it's not right. I'd love it if you wrote an article on choosing good priors.
Cosma Shalizi has done some technical work that seems (to my incompetent eye) to be relevant:
That is, he takes Bayesian updating, which requires modeling the world, and answers the question 'when would it be okay to use Bayesian updating, even though we know the model is definitely wrong - e.g. too simple?'. (Of course, making your model "not obviously wrong" by adding complexity isn't a solution.)
I am still confused about how small the probability I should use in the God question is. I understand the argument about privileging the hypothesis and about intelligent beings being very complex and fantastically unlikely.
But I also feel that if I tried to use an argument at least that subtle, when applied to something I am at least as confused about as how ontologically complex a first cause should be, to disprove things at least as widely believed as religion, a million times, I would be wrong at least once.
I've got to admit I disagree with a lot of Advancing Certainty. The proper reference class for a modern physicist who is well acquainted with the mistakes of Lord Kelvin and won't do them again is "past scientists who were well acquainted with the mistakes of their predecessors and plan not to do them again", which I imagine has less than a hundred percent success rate and which might have included Kelvin.
It would be a useful exercise to see whether the most rational physicists of 1950 have more successful predictions as of 2000 than the most rational physicists of 1850 did as of 1900. It wouldn't surprise me if this were true, and if so, then the physicists of 2000 could justly put themselves in a new reference class and guess they will be even more successful as of 2050 than the 1950ers were in 2000. But if the success rate after fifty years remains constant, I wouldn't want to say "Yeah, well, we've probably solved all those problems now, so we'll do better".
I've got to admit I disagree with a lot of Advancing Certainty
Do you actually disagree with any particular claim in Advancing Certainty, or does it just seem "off" to you in its emphasis? Because when I read your post, I felt myself "disagreeing" (and panicking at the rapid upvoting), but reflection revealed that I was really having something more like an ADBOC reaction. It felt to me that the intent of your post was to say "Boo confident probabilities!", while I tend to be on the side of "Yay confident probabilities!" -- not because I'm in favor of overconfidence, but rather because I think many worries about overconfidence here tend to be ill-founded (I suppose I'm something of a third-leveler on this issue.)
And indeed, when you see people complaining about overconfidence on LW, it's not usually because someone thinks that some political candidate has a 0.999999999 chance of winning an election; almost nobody here would think that a reasonable estimate. Instead, what you get is people saying that 0.0000000001 is too low a probability that God exists -- on the basis of nothing else than general worry about human overconfidence.
I think my...
I definitely did have the "ammunition for the enemy" feeling about your post, and the "belief attire" point is a good one, but I think the broad emotional disagreement does express itself in a few specific claims:
Even if you were to control for getting tired and hungry and so on, even if you were to load your intelligence into a computer and have it do the hard work, I still don't think you could judge a thousand such trials and be wrong only once. I admit this may not be as real a disagreement as I'm thinking, because it may be a confusion on what sort of reference class we should use to pick trials for you.
I think we might disagree on the Lord Kelvin claim. I think I would predict more of today's physical theories are wrong than you would.
I think my probability that God exists would be several orders of magnitude higher than yours, even though I think you probably know about the same number of good arguments on the issue as I do.
Maybe our disagreement can be resolved empirically - if we were to do enough problems where we gave confidence levels on questions like "The area of Canada is greater than the area of the Mediterranean Sea" and use l...
This raises the question: Should scientific journals adjust the p-value that they require from an experiment, to be no larger than the probability (found empirically) that a peer-reviewed article contains a factual, logical, methodological, experimental, or typographical error?
I don't think the lottery is an exception. There's a chance that you misheard and they said "million", not "billion".
There are really two claims here. The first one -- that if some guy on the Internet has a model predicting X with 99.99% certainty, then you should assign less probability to X, absent other evidence -- seems interesting, but relatively easy to accept. I'm pretty sure I've been reasoning this way in the past.
The second claim is exactly the same, but applied to oneself. "If I have come up with an argument that predicts X with 99.99% certainty, I should be less than 99.99% certain of X." This is not something that people do by default. I doubt that...
But my external level of confidence should be lower, even if the model is my only evidence, by an amount proportional to my trust in the model.
Only to the extent you didn't trust in the statement other than because this model says it's probably true. It could be that you already believe in the statement strongly, and so your external level of confidence should be higher than the model suggests, or the same, etc. Closer to the prior, in other words, and on strange questions intuitive priors can be quite extreme.
Another voting example; "Common sense and statistics", Andrew Gelman:
...A paper* was published in a political science journal giving the probability of a tied vote in a presidential election as something like 10^-90**. Talk about innumeracy! The calculation, of course (I say “of course” because if you are a statistician you will likely know what is coming) was based on the binomial distribution with known P. For example, Obama got something like 52% of the vote, so if you take n=130 million and P=0.52 and figure out the probability of an exact tie
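To see where a number like that comes from, here is a minimal sketch (with my own assumed n and P, not the paper's) of the known-P binomial calculation Gelman is criticizing; the answer is absurdly small and wildly sensitive to the assumed vote share:

```python
# Hypothetical illustration of the binomial-with-known-P tie calculation.
# n and the candidate P values are assumptions, not the paper's actual inputs.
import math
from scipy.stats import binom

n = 130_000_000                                   # assumed total votes (even, so a tie is possible)
for p in (0.50, 0.501, 0.52):
    log_tie = binom.logpmf(n // 2, n, p)          # ln P(exactly n/2 votes for each side)
    print(f"P = {p}: P(tie) ~ 10^{log_tie / math.log(10):.0f}")
```

The lesson isn't the particular exponent; it's that treating P as exactly known turns an ordinary forecasting question into an astronomically confident one.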
This one seems pretty relevant here:
Probing the Improbable: Methodological Challenges for Risks with Low Probabilities and High Stakes - Toby Ord, Rafaela Hillerbrand, Anders Sandberg
This is not a fully general argument against giving very high levels of confidence:
It seems to me we can use the very high confidence levels and our understanding of the area in question to justify ignoring, heavily discounting, or accepting the arguments. We can do this on the basis that it takes a certain amount of evidence to actually produce accurate beliefs.
In the case of the creationist argument, a confidence level of 10^4,478,296 to 1 requires roughly 15,000,000 bits of evidence (10^4,478,296 =~ 2^14,900,000). The creationist presents t...
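As a minimal sketch of that conversion (my arithmetic; the only assumption is that bits of evidence = log2 of the odds ratio):

```python
# Convert the creationist's claimed odds ratio into bits of evidence.
import math

log10_odds = 4_478_296                    # claimed odds against: 10^4,478,296 to 1
bits = log10_odds * math.log2(10)         # bits of evidence = log2(odds ratio)
print(f"{bits:,.0f} bits")                # roughly 14.9 million bits
```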
This was predictable: this was a simple argument in a complex area trying to prove a negative, and it would have been presumptuous to believe with greater than 99% probability that it was flawless. If you can only give 99% probability to the argument being sound, then it can only reduce your probability in the conclusion by a factor of a hundred, not a factor of 10^20.
As I recall, there was a paper in 2008 or 2009 about the LHC problem which concluded, in effect, that the small chances that any given analysis was incorrectly carried out cumulatively put a high fl...
Very interesting principle, and one which I will bear in mind since I very recently had a spectacular failure to apply it.
What happens if we apply this type of thinking to Bayesian probability in general? It seems like we have to assign a small amount of probability to the claim that all our estimates are wrong, and that our methods for coming to those estimates are irredeemably flawed. This seems problematic to me, since I have no idea how to treat this probability, we can't use Bayesian updating on it for obvious reasons.
Anyone have an idea about how to deal with this? Preferably a better idea than "just don't think about it" which is my current strategy.
Great post!
The moment the topic came up, I also thought back to something I once heard a creationist say. Most amusingly, not only did that probability have some fatuously huge order of magnitude, its mantissa was quoted to about 5 decimal places.
One gets 'target confusion' in such cases - shall I point out that no engineer would ever quote a probability like that to their boss, on pain of job loss? Shall I ask if my interlocutor even knows what a "power" IS?
This is at best weakly related to the statistics of error in a communications channel. Here, simulations are often used to run trillions of trials to simulate (Monte Carlo calculate) the conditions to get bit error rates (BER) of 10^-7, 10^-8, and so on. As an engineer more familiar with the physical layer (transistor amplifiers, thermal noise in channels, scattering of RF, etc.), I know that the CONDITIONS for these Monte Carlo calculations to mean something in the real circuits are complex and not as common as the new PhD doing the calculation thinks the...
We have hypothesis H and evidence E, and we dutifully compute
P(H) * P(E | H) / P(E)
It sounds like your advice is: don't update yet! Especially if this number is very small. We might have made a mistake. But then how should we update? "Round up" seems problematic.
One might be tempted to respond "But there's an equal chance that the false model is too high, versus that it is too low." Maybe there was a bug in the computer program, but it prevented it from giving the incumbent's real chances of 999,999,999,999 out of a trillion.
I have a different response to this than the one you gave.
Consider your meta ("outside") uncertainty over log-odds, in which independent evidence can be added, instead of probabilities. A distribution that averages out to the "internal" log-odds would, when tra...
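A minimal sketch of the idea (the spread of 5 log-odds units is an assumed number, purely for illustration): spread your outside uncertainty over the log-odds, translate back to probabilities, and the average comes out less extreme than the internal figure.

```python
# Minimal sketch, assumed numbers: put "outside" uncertainty on the log-odds scale,
# then average in probability space.  A spread of log-odds centred on the internal
# value translates back to a less extreme probability than the internal one.
import numpy as np

internal_log_odds = np.log(999_999_999)                  # the model's 999,999,999-in-a-billion, as log-odds
sigma = 5.0                                              # assumed spread of outside uncertainty (log-odds units)

rng = np.random.default_rng(0)
samples = rng.normal(internal_log_odds, sigma, 1_000_000)
external_p = np.mean(1.0 / (1.0 + np.exp(-samples)))     # E[sigmoid(log-odds)]

print(f"internal: {1.0 / (1.0 + np.exp(-internal_log_odds)):.9f}")
print(f"external: {external_p:.9f}")
```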
Splitting it by internal/external is a nice system.
I think people do this instinctively in real life. Exhibit A: people buy lottery tickets. My theory for this is that they know that the odds of winning are too low to justify buying a ticket assuming it is actually fully random. However, most people are willing to put the probability that karma, divine justice, God's plan or their lucky ritual might swing the lottery in their direction at some nonzero value. If they believe in one of these things with even 1% certainty then the ticket is a good deal for them.
On the LHC black holes vs cosmic ray black holes, both kinds of black holes emerge with nonzero charge and will very rapidly brake to a halt. And there are cosmic rays hitting neutron stars, as well, and cosmic rays colliding in the magnetic field of neutron stars, LHC style. Bottom line is, the LHC has to be extremely exceptional to destroy the earth. It just doesn't look this exceptional.
The thing is that a very tiny black hole has incredibly low accretion rate (quite reliable argument here; it takes a long time to push Earth through a needle's eye, even a...
The map being distinct from the territory, you must go outside your map to discount your probability calculations made in the map. But how to do this? You must resort to a stronger map. But then the calculations there are subject to the errors in designing that map.
You can run this logic down to the deepest level. How does a rational person adopt a Bayesian methodology? Is there not some probability that the choice of methodology is wrong? But how do you conceive of that probability, when Bayesian considerations are the only ones available to evaluate truth from given evidence?
Why don't these considerations prove that Bayesian epistemology isn't the true account of knowledge?
...In order for a single cell to live, all of the parts of the cell must be assembled before life starts. This involves 60,000 proteins that are assembled in roughly 100 different combinations. The probability that these complex groupings of proteins could have happened just by chance is extremely small. It is about 1 chance in 10 to the 4,478,296 power. The probability of a living cell being assembled just by chance is so small, that you may as well consider it to be impossible. This means that the probability that the living cell is created by an intellig
I speculate there's at least two problems with the creationism odds calculation. First, it looks like the person doing the calculation was working with maybe 60,000 protein molecules rather than zillions of protein molecules.
The second problem I'm having trouble putting precisely in words, concerning the use of the uniform distribution as a prior. Sometimes the use of the uniform distribution as a prior seems to me to be entirely justified. An example of this is where there is a well-constructed model as to subsequent outcomes.
Other times, when the model f...
This is a misinterpretation. The argument goes like this:
True statement: There is lots of evidence for cells. P(Evidence|Cells)/P(Evidence|~Cells) >> 1.
False statement: Without intelligent design, cells could only be produced by random chance. P(Cells|~God) is very very small.
Debatable statement: P(Cells|God) is large.
Conclusion: We update massively in favor of God and against ~God, because of, not in opposition to, the massive evidence in favor of the existence of cells.
This is valid Bayesian updating, it's just that the false statement is false.
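A minimal sketch of that structure, with made-up numbers standing in for the ones in the argument (10^-100 as a stand-in for the creationist's figure, since the real one underflows a float):

```python
# Bayes' theorem on the creationist argument's own structure (illustrative numbers only).
def posterior_god(p_cells_given_god, p_cells_given_not_god, prior_god=0.5):
    num = p_cells_given_god * prior_god
    den = num + p_cells_given_not_god * (1.0 - prior_god)
    return num / den

# With the false premise P(Cells|~God) ~ 0, the evidence of cells updates us almost all
# the way to God; with a corrected P(Cells|~God), the same evidence barely moves us.
print(posterior_god(0.5, 1e-100))   # ~1.0
print(posterior_god(0.5, 0.5))      # 0.5
```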
The argument was that since cosmic rays have been performing particle collisions similar to the LHC's zillions of times per year, the chance that the LHC will destroy the world is either literally zero,
This argument doesn't work for anthropic reasons. It could be that in the vast majority of Everett branches Earth was wiped out by cosmic ray collisions.
Anthropic reasoning only goes this far. Even if I accept the silliness in which zillion of Earths are destroyed every year for each one that survives... the other planets in the solar system could also have been destroyed. And the stars and galaxies in the sky would all be devoured by now, no? And no anthropic reasons would prevent us from witnessing that from a safe distance.
Here's a fun game: Try to disprove the hypothesis that every single time someone says "Abracadabra" there's a 99.99% chance that the world gets destroyed.
Here's a fun game: Try to disprove the hypothesis that every single time someone says "Abracadabra" there's a 99.99% chance that the world gets destroyed.
We haven't been anthropically forced into a world where humans can't say "Abracadabra".
This is totally testable. I'm going to download some raw quantum noise. If the first byte isn't FF I will say the magic word. I will then report back what the first byte was.
Update: the first byte was 1B
...
Abracadabra.
Still here.
"This person believes he could make one statement about an issue as difficult as the origin of cellular life per Planck interval, every Planck interval from the Big Bang to the present day, and not be wrong even once" only brings us to 1/10^61 or so."
Wouldn't that be 1/ 2^(10^61) or am I missing something?
Finally, consider the question of whether you can assign 100% certainty to a mathematical theorem for which a proof exists
To ground this issue in more concrete terms, imagine you are writing an algorithm to compress images made up of 8-bit pixels. The algorithm plows through several rows until it comes to a pixel, and predicts that the distribution of that pixel is Gaussian with mean of 128 and variance of .1. Then the model probability that the real value of the pixel is 255 is some astronomically small number - but the system must reserve some probabi...
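A minimal sketch of that situation (parameters assumed, and treating the density at the pixel value as a stand-in for the symbol probability):

```python
# A Gaussian(mean 128, variance 0.1) pixel model versus an observed value of 255,
# and the kind of probability floor a practical coder reserves for "impossible" symbols.
import math
from scipy.stats import norm

mu, sigma = 128.0, 0.1 ** 0.5
log10_p = norm.logpdf(255, loc=mu, scale=sigma) / math.log(10)
print(f"model probability of 255 ~ 10^{log10_p:.0f}")        # on the order of 10^-35000

epsilon = 1e-6                                               # assumed reserved "escape" probability
p_model = math.exp(norm.logpdf(255, loc=mu, scale=sigma))    # underflows to exactly 0.0 in a double
p_coded = (1.0 - epsilon) * p_model + epsilon / 256          # mix with a uniform floor over 256 symbols
print(p_coded)                                               # never below epsilon/256
```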
But it's hard for me to be properly outraged about this, because the conclusion that the LHC will not destroy the world is correct.
What is your argument for claiming that the LHC will not destroy the world?
That the world still exists despite ongoing experiments is easily explained by the fact that we are necessarily living in those branches of the universe where the LHC didn't destroy the world. (On a related side note: Has the great filter been found yet?)
Related to: Infinite Certainty
Suppose the people at FiveThirtyEight have created a model to predict the results of an important election. After crunching poll data, area demographics, and all the usual things one crunches in such a situation, their model returns a greater than 999,999,999 in a billion chance that the incumbent wins the election. Suppose further that the results of this model are your only data and you know nothing else about the election. What is your confidence level that the incumbent wins the election?
Mine would be significantly less than 999,999,999 in a billion.
When an argument gives a probability of 999,999,999 in a billion for an event, then probably the majority of the probability of the event is no longer in "But that still leaves a one in a billion chance, right?". The majority of the probability is in "That argument is flawed". Even if you have no particular reason to believe the argument is flawed, the background chance of an argument being flawed is still greater than one in a billion.
More than one in a billion times a political scientist writes a model, ey will get completely confused and write something with no relation to reality. More than one in a billion times a programmer writes a program to crunch political statistics, there will be a bug that completely invalidates the results. More than one in a billion times a staffer at a website publishes the results of a political calculation online, ey will accidentally switch which candidate goes with which chance of winning.
So one must distinguish between levels of confidence internal and external to a specific model or argument. Here the model's internal level of confidence is 999,999,999/billion. But my external level of confidence should be lower, even if the model is my only evidence, by an amount proportional to my trust in the model.
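One crude way to picture the distinction (my own illustrative numbers, not a formula from this post): treat the external confidence as a mixture of the model's answer and your prior, weighted by how much you trust the model, and assume for simplicity that a flawed model tells you nothing.

```python
# Illustrative sketch: external confidence as a trust-weighted mixture of the
# model's internal answer and the prior.  Assumes a flawed model is uninformative.
def external_confidence(internal_p, trust_in_model, prior=0.5):
    return trust_in_model * internal_p + (1.0 - trust_in_model) * prior

internal = 999_999_999 / 1_000_000_000
print(external_confidence(internal, trust_in_model=0.999))   # ~0.9995, not 0.999999999
```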
Is That Really True?
One might be tempted to respond "But there's an equal chance that the false model is too high, versus that it is too low." Maybe there was a bug in the computer program, but it prevented it from giving the incumbent's real chances of 999,999,999,999 out of a trillion.
The prior probability of a candidate winning an election is 50%1. We need information to push us away from this probability in either direction. To push significantly away from this probability, we need strong information. Any weakness in the information weakens its ability to push away from the prior. If there's a flaw in FiveThirtyEight's model, that takes us away from their probability of 999,999,999 in a billion, and back closer to the prior probability of 50%.
We can confirm this with a quick sanity check. Suppose we know nothing about the election (i.e. we still think it's 50-50) until an insane person reports a hallucination that an angel has declared the incumbent to have a 999,999,999/billion chance. We would not be tempted to accept this figure on the grounds that it is equally likely to be too high as too low.
A second objection covers situations such as a lottery. I would like to say the chance that Bob wins a lottery with one billion players is 1/1 billion. Do I have to adjust this upward to cover the possibility that my model for how lotteries work is somehow flawed? No. Even if I am misunderstanding the lottery, I have not departed from my prior. Here, new information really does have an equal chance of going against Bob as of going in his favor. For example, the lottery may be fixed (meaning my original model of how to determine lottery winners is fatally flawed), but there is no greater reason to believe it is fixed in favor of Bob than anyone else.2
Spotted in the Wild
The recent Pascal's Mugging thread spawned a discussion of the Large Hadron Collider destroying the universe, which also got continued on an older LHC thread from a few years ago. Everyone involved agreed the chances of the LHC destroying the world were less than one in a million, but several people gave extraordinarily low chances based on cosmic ray collisions. The argument was that since cosmic rays have been performing particle collisions similar to the LHC's zillions of times per year, the chance that the LHC will destroy the world is either literally zero, or else a number related to the probability that there's some chance of a cosmic ray destroying the world so minuscule that it hasn't gotten actualized in zillions of cosmic ray collisions. Of the commenters mentioning this argument, one gave a probability of 1/3*10^22, another suggested 1/10^25, both of which may be good numbers for the internal confidence of this argument.
But the connection between this argument and the general LHC argument flows through statements like "collisions produced by cosmic rays will be exactly like those produced by the LHC", "our understanding of the properties of cosmic rays is largely correct", and "I'm not high on drugs right now, staring at a package of M&Ms and mistaking it for a really intelligent argument that bears on the LHC question", all of which are probably more likely than 1/10^20. So instead of saying "the probability of an LHC apocalypse is now 1/10^20", say "I have an argument that has an internal probability of an LHC apocalypse as 1/10^20, which lowers my probability a bit depending on how much I trust that argument".
In fact, the argument has a potential flaw: according to Giddings and Mangano, the physicists officially tasked with investigating LHC risks, black holes from cosmic rays might have enough momentum to fly through Earth without harming it, and black holes from the LHC might not3. This was predictable: this was a simple argument in a complex area trying to prove a negative, and it would have been presumptuous to believe with greater than 99% probability that it was flawless. If you can only give 99% probability to the argument being sound, then it can only reduce your probability in the conclusion by a factor of a hundred, not a factor of 10^20.
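To make the factor-of-a-hundred point concrete (all numbers illustrative):

```python
# If the argument is sound, use its internal figure; if not, fall back on whatever
# you believed without it.  The 1e-6 fallback is an assumed stand-in, not a real estimate.
p_sound = 0.99
p_risk_if_sound = 1e-20
p_risk_if_flawed = 1e-6

p_risk = p_sound * p_risk_if_sound + (1 - p_sound) * p_risk_if_flawed
print(p_risk)   # ~1e-8: the argument bought a factor of ~100 over the fallback, not 10^20
```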
But it's hard for me to be properly outraged about this, since the LHC did not destroy the world. A better example might be the following, taken from an online discussion of creationism4 and apparently based off of something by Fred Hoyle:
Note that someone just gave a confidence level of 10^4478296 to one and was wrong. This is the sort of thing that should never ever happen. This is possibly the most wrong anyone has ever been.
It is hard to say in words exactly how wrong this is. Saying "This person would be willing to bet the entire world GDP for a thousand years if evolution were true against a one in one million chance of receiving a single penny if creationism were true" doesn't even begin to cover it: a mere 1/10^25 would suffice there. Saying "This person believes he could make one statement about an issue as difficult as the origin of cellular life per Planck interval, every Planck interval from the Big Bang to the present day, and not be wrong even once" only brings us to 1/10^61 or so. If the chance of getting Ganser's Syndrome, the extraordinarily rare psychiatric condition that manifests in a compulsion to say false statements, is one in a hundred million, and the world's top hundred thousand biologists all agree that evolution is true, then this person should preferentially believe it is more likely that all hundred thousand have simultaneously come down with Ganser's Syndrome than that they are doing good biology.5
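For the curious, the Planck-interval arithmetic (standard physical constants, rounded):

```python
# Number of Planck times since the Big Bang: roughly 8 x 10^60, so being wrong at most
# once over that many statements corresponds to odds of about 10^61 to 1.
age_of_universe_s = 13.8e9 * 3.156e7     # ~13.8 billion years, in seconds
planck_time_s = 5.39e-44                 # Planck time, in seconds

n_intervals = age_of_universe_s / planck_time_s
print(f"{n_intervals:.1e}")              # ~8.1e60
```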
This creationist's flaw wasn't mathematical; the math probably does return that number. The flaw was confusing the internal probability (that complex life would form completely at random in a way that can be represented with this particular algorithm) with the external probability (that life could form without God). He should have added a term representing the chance that his knockdown argument just didn't apply.
Finally, consider the question of whether you can assign 100% certainty to a mathematical theorem for which a proof exists. Eliezer has already examined this issue and come out against it (citing as an example this story of Peter de Blanc's). In fact, this is just the specific case of differentiating internal versus external probability when internal probability is equal to 100%. Now your probability that the theorem is false is entirely based on the probability that you've made some mistake.
The many mathematical proofs that were later overturned provide practical justification for this mindset.
This is not a fully general argument against giving very high levels of confidence: very complex situations and situations with many exclusive possible outcomes (like the lottery example) may still make it to the 1/10^20 level, albeit probably not the 1/10^4478296. But in other sorts of cases, giving a very high level of confidence requires a check that you're not confusing the probability inside one argument with the probability of the question as a whole.
Footnotes
1. Although technically we know we're talking about an incumbent, who typically has a much higher chance, around 90% in Congress.
2. A particularly devious objection might be "What if the lottery commissioner, in a fit of political correctness, decides that 'everyone is a winner' and splits the jackpot a billion ways?" If this would satisfy your criteria for "winning the lottery", then this mere possibility should indeed move your probability upward. In fact, since there is probably greater than a one in one billion chance of this happening, the majority of your probability for Bob winning the lottery should concentrate here!
3. Giddings and Mangano then go on to re-prove the original "won't cause an apocalypse" argument using a more complicated method involving white dwarf stars.
4. While searching creationist websites for the half-remembered argument I was looking for, I found what may be my new favorite quote: "Mathematicians generally agree that, statistically, any odds beyond 1 in 10 to the 50th have a zero probability of ever happening."
5. I'm a little worried that five years from now I'll see this quoted on some creationist website as an actual argument.