Thanks, Eliezer. Helpful post.
I have personally witnessed a room of people nod their heads in agreement with a definition of a particular term in software testing. Then, when we discussed examples of that term in action, we discovered that many of us, having agreed with the words in the definition, had very different interpretations of those words. To my great discouragement, I learned that agreeing on a sign is not the same as agreeing on the interpretant or the object. (Sign, object, and interpretant are the three parts of Peirce's semiotic triangle.)
In the case of 2+2=4, I think I know what that means, but when Euclid, Euler, or Laplace thought of 2+2=4, were they thinking the same thing I am? Maybe they were, but I'm not confident of that. And when someday an artificial intelligence ponders 2+2=4, will it be thinking what I'm thinking?
I feel 100% positive that 2+2=4 is true, and 100% positive that I don't entirely know what I mean by "2+2=4". I am also not entirely sure what other people mean by it. Maybe they mean "any two objects, combined with two objects, always results in four objects", which is obviously not true.
In thinking about certainty, it helps me ...
We can go even stronger than mathematical truths. How about the following statement?
~(P & ~P)
I think it's safe to say that if anything is true, that statement (the flipping law of non-contradiction) is true. And it's the precondition for any other knowledge (for no other reason than if you deny it, you can prove anything). I mean, there are logics that permit contradictions, but then you're in a space that's completely alien to normal reasoning.
So that's a lot stronger than 2+2=4. You can reason without 2+2=4. Maybe not very well, but you can do it....
If you get past that one, I'll offer you another.
"There is some entity [even if only a simulation] that is having this thought." Surely you have a probability of 1 in that. Or you're going to have to answer to Descartes's upload, yo.
Also (and sorry for the rapid-fire commenting), do you accept that we can have conditional probabilities of one? For example, P(A|A)=1? And, for that matter, P(B|(A-->B, A))=1? If so, I believe I can force you to accept at least probabilities of 1 in sound deductive arguments. And perhaps (I'll have to think about it some more) in the logical laws that get you to the sound deductive arguments. I'm just trying to get the camel's nose in the tent here...
The same holds for mathematical truths. It's questionable whether the statement "2 + 2 = 4" or "In Peano arithmetic, SS0 + SS0 = SSSS0" can be said to be true in any purely abstract sense, apart from physical systems that seem to behave in ways similar to the Peano axioms.
Why is that important?
Let me ask you in reply, Paul, if you think you would refuse to change your mind about the "law of non-contradiction" no matter what any mathematician could conceivably say to you - if you would refuse to change your mind even if every mathematician on Earth first laughed scornfully at your statement, then offered to explain the truth to you over a couple of hours... Would you just reply calmly, "But I know I'm right," and walk away? Or would you, on this evidence, update your "zero probability" to something somewhat higher?
Why can't I repose a very tiny credence in the negation of the law of non-contradiction? Conditioning on this tiny credence would produce various null implications in my reasoning process, which end up being discarded as incoherent - I don't see that as a killer objection.
In fact, the above just translates the intuitive reply, "What if a mathematician convinces me that 'snow is white' is both true and false? I don't consider myself entitled to rule it out absolutely, but I can't imagine what else would follow from that, so I'll wait until it happens to worry about it."
As for Descartes's little chain of reasoning, it in...
Huh, I must be slowed down because it's late at night... P(A|A) is the simplest case of all. P(x|y) is defined as P(x,y)/P(y). P(A|A) is defined as P(A,A)/P(A) = P(A)/P(A) = 1. The ratio of these two probabilities may be 1, but I deny that there's any actual probability that's equal to 1. P(|) is a mere notational convenience, nothing more. Just because we conventionally write this ratio using a "P" symbol doesn't make it a probability.
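The ratio definition being appealed to here can be checked mechanically on a toy finite probability space (a minimal sketch; the outcome and event names are made up for illustration):

```python
from fractions import Fraction

# A toy finite probability space: four equally likely outcomes
# (two coin flips).
space = {"HH": Fraction(1, 4), "HT": Fraction(1, 4),
         "TH": Fraction(1, 4), "TT": Fraction(1, 4)}

def prob(event):
    """P(event), where an event is a set of outcomes."""
    return sum(space[w] for w in event)

def cond(x, y):
    """P(x | y), defined as the ratio P(x and y) / P(y)."""
    return prob(x & y) / prob(y)

A = {"HH", "HT"}   # the event "first flip is heads"
print(cond(A, A))  # P(A | A) = P(A) / P(A) = 1
```

The ratio comes out to exactly 1 whatever P(A) is, which is precisely the point under dispute: whether that ratio deserves to be called a probability.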
Hah, I'll let Descartes go (or condition him on a workable concept of existence -- but that's more of a spitball than the hardball I was going for).
But in answer to your non-contradiction question... I think I'd be epistemically entitled to just sneer and walk away. For one reason, again, if we're in any conventional (i.e. not paraconsistent) logic, admitting any contradiction entails that I can prove any proposition to be true. And, giggle giggle, that includes the proposition "the law of non-contradiction is true." (Isn't logic a beautiful th...
Wait a second, conditional probabilities aren't probabilities? Huhhh? Isn't Bayesianism all conditional probabilities?
P(P is never equal to 1) = ?
I know, I know, 'this statement is not true'. But we've long since left the real world anyway. However, if you tell me the above is less than one, that means that in some cases, infinite certainty can exist, right?
Get some sleep first though Eliezer and Paul. It's 9.46am here.
For one reason, again, if we're in any conventional (i.e. not paraconsistent) logic, admitting any contradiction entails that I can prove any proposition to be true.
Yes, but conditioned on the truth of some statement P&~P, my probability that logic is paraconsistent is very high.
Bayesianism is all about ratios of probabilities, yes, but we can write these ratios without ever using the P(|) notation if we please.
"I'd listen to the argument solely in order to refute it."
Paul refutes the data! Eliezer, an idiot disagreeing with you shouldn't necessarily shift your beliefs at all. By that token, there's no reason to shift your beliefs if the whole world told you 2 + 2 were 3, unless they showed some evidence. I would think it vastly more likely that the whole world was pulling my leg.
Assert a confidence of (1 - 1/googolplex) and your ego far exceeds that of mental patients who think they're God.
So we are considering the possibility of brain malfunctions, and deities changing reality. Fine. But what is the use of having a strictly accurate Bayesian reasoning process when your brain is malfunctioning and/or deities are changing the parameters of reality?
Eliezer, I want to compliment you on this post. But I would suggest that you apply it more generally, not only to mathematics. For example, it seems to me that any of us should be (or rather, could be after thinking about it for a while) more sure that 53 is a prime number than that a creationist with whom we disagree is wrong. This seems to imply that our certainty of the theory of evolution shouldn't be more than 99.99%, according to your figure, definitely less than a string of nines as long as the Bible (as you have rhetorically suggested in the past).
Paul Gowder said:
"We can go even stronger than mathematical truths. How about the following statement?
~(P & ~P)
I think it's safe to say that if anything is true, that statement (the flipping law of non-contradiction) is true."
Amusingly, this is one of the more controversial tautologies to bring up. This is because constructivist mathematicians reject this statement.
Gray Area said: "Amusingly, this is one of the more controversial tautologies to bring up. This is because constructivist mathematicians reject this statement."
Actually constructivist mathematicians reject the law of the excluded middle, (P v ~P), not the law of non-contradiction (they are not equivalent in intuitionistic logic, the law of non-contradiction is actually equivalent to the double negation of the excluded middle).
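For readers who want to see the distinction concretely, both facts can be checked in a proof assistant. Here is a sketch in Lean 4 (assuming a standard Lean setup; the theorem names are my own):

```lean
-- Both of these are provable without classical axioms,
-- i.e., intuitionistically:

-- The law of non-contradiction, ¬(P ∧ ¬P):
theorem non_contradiction (P : Prop) : ¬(P ∧ ¬P) :=
  fun ⟨hp, hnp⟩ => hnp hp

-- The double negation of excluded middle, ¬¬(P ∨ ¬P):
theorem dn_excluded_middle (P : Prop) : ¬¬(P ∨ ¬P) :=
  fun h => h (Or.inr fun hp => h (Or.inl hp))

-- Excluded middle itself, (P ∨ ¬P), is *not* provable
-- without Classical.em or an equivalent axiom.
```

So constructivists keep non-contradiction while rejecting excluded middle, exactly as the comment says.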
The ratio of these two probabilities may be 1, but I deny that there's any actual probability that's equal to 1. P(|) is a mere notational convenience
I'd have to disagree with that. The axioms I've seen of probability/measure theory do not make the case that P() is a probability while P(|) is not - they are both, ultimately, the same type of object (just taken from different measurable sets).
However, you don't need to appeal to this type of reasoning to get rid of P(A|A) = 1. Your probability of correctly remembering the beginning of the statement when reach...
If you say 99.9999% confidence, you're implying that you could make one million equally fraught statements, one after the other, and be wrong, on average, about once.
Excellent post overall, but that part seems weakest - we suffer from an unavailability problem, in that we can't just think up random statements with those properties. When I said I agreed 99.9999% with "P(P is never equal to 1)" it doesn't mean that I feel I could produce such a list - just that I have a very high belief that such a list could exist.
An intermediate position would be to come up with a hundred equally fraught statements in a randomly chosen narrow area, and extrapolate from that result.
Stuart: When I said I agreed 99.9999% with "P(P is never equal to 1)" it doesn't mean that I feel I could produce such a list - just that I have a very high belief that such a list could exist.
So, using Eliezer's logic, would you expect that one time in a million, you'd get this wrong, and P = 1? I don't need you to produce a list. This is a case where no number of 9s will sort you out - if you assign a probability less than 1, you expect to be in error at some point, which leaves you up the creek. If I'm making a big fat error (and I fear I may be), someone please set me straight.
Mr. Bach,
I think you're right to point out that "number" meant a different thing to the Greeks; but I think that should make us more, not less, confident that "2+2=4." If the Greeks had meant the same thing by number as modern mathematicians do, then they were wrong to be very confident that the square root of negative one was not a number. However, the square root of negative one does in fact fall short of being a simple, definite multitude -- what Euclid, at least, meant by number. So if they were in error, it was the practical err...
Ben, you're making an obvious error: you are taking the statement that "P never equals 1" has a probability of less than 1 to mean that in some proportion of cases, we expect the probability to equal 1. This would be the same as supposing that assigning the light-speed limit a probability of less than 1 implies that we think that the speed of light is sometimes exceeded.
But it doesn't mean this, it means that if we were to enunciate enough supposed physical laws, we would sometimes be mistaken. In the same way, a probability of less than 1 for th...
There are uncountably many possible worlds. Using standard real-number-valued probabilities, we have to assign probability zero to (I think) almost all of them. In other words, for almost all of the possible worlds, the probability of the complement of that possible world is 1.
(Are there ways around this, perhaps using non-real-valued probabilities?)
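The measure-zero point can be illustrated with any continuous distribution: under a uniform distribution on [0, 1], an interval's probability equals its length, so shrinking an interval around a point drives its probability to zero, and the point itself carries probability exactly zero (a minimal sketch):

```python
# Under a uniform distribution on [0, 1], P(X in [a, b]) = b - a.
# Shrinking an interval around a point x drives its probability to 0,
# so the single point x carries probability exactly 0, even though
# it is a perfectly possible outcome.
x = 0.5
for eps in [0.1, 0.01, 0.001, 1e-6]:
    p = min(x + eps, 1.0) - max(x - eps, 0.0)  # P(X in [x-eps, x+eps])
    print(eps, p)
```

So "probability zero" here cannot mean "impossible" - which is the flip side of the complement of each such world getting probability 1.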
(Waking up.) Sure, if I thought I had evidence (how) of P&~P, that would be pretty good reason to believe a paraconsistent logic was true (except what does true mean in this context? not just about logics, but about paraconsistent ones!!)
But if that ever happened, if we went there, the rules for being rational would be so radically changed that there wouldn't necessarily be good reason to believe that one has to update one's probabilities in that way. (Perhaps one could say the probability of the law of non-contradiction being true is both 1 and 0? ...
The proposition in which I repose my confidence is the proposition that "2 + 2 = 4 is always and exactly true", not the proposition "2 + 2 = 4 is mostly and usually true".
I have confused the map with the territory. Apologies. Revised claim: I believe, with 99.973% probability, that P cannot equal 1, 100% of the time! I believe very strongly that I am correct, and if I am correct, I am completely correct. But I'm not sure. Much better.
I suppose we should be asking ourselves why we tend to try hard to retain the ability to be 100% sure. A long long list of reasons spring to mind....
Well, the real reason why it is useful in arithmetic to accept that 2+2=4 is that this is part of a deeper relation in the arithmetic field regarding relations between the three basic arithmetic operations: addition, multiplication, and exponentiation. Thus, 2 is the solution to the following question: what is x such that x plus x equals x times x equals x to the x power? And, of course, all of these operations equal 4.
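The claim about 2 is easy to verify directly (a trivial check, not a proof of the deeper relation):

```python
# 2 is the value of x for which addition, multiplication, and
# exponentiation all agree: x + x = x * x = x ** x = 4.
x = 2
assert x + x == 4   # addition
assert x * x == 4   # multiplication
assert x ** x == 4  # exponentiation
print("x + x = x * x = x ** x = 4 holds for x = 2")
```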
Can someone write/has someone written a program that simulates existence in a world in which 2+2=4 (and the rest of Peano arithmetic) is useless, i.e., it corresponds to no observable phenomenon in that world?
Oh, on the ratios of probabilities thing, whether we call them probabilities or schmobabilities, it still seems like they can equal 1. But if we accept that there are schmobabilities that equal 1, and that we are warranted in giving them the same level of confidence that we'd give probabilities of 1, isn't that good enough?
Put a different way, P(A|A)=1 (or perhaps I should call it S(A|A)=1) is just equivalent to yet another one of those logical tautologies, A-->A. Which again seems pretty hard to live without. (I'd like to see someone prove NCC to me without binding me to accept NCC!)
Well, the deeper issue is "Must we rely on the Peano axioms?" I shall not get into all the Godelian issues that can arise, but I will note that by suitable reinterpretations, one can indeed pose real world cases where an "apparent two plus another apparent two" do not equal "apparent four," without being utterly ridiculous. The problem is that such cases are not readily amenable to being easily put together into useful axiomatic systems. There may be something better out there than Peano, but Peano seems to work pretty well an awful lot.
As for "what is really true?" Well...
Z._M._Davis: No. Why? Because I said so ;-)
Point taken, I need to better constrain the problem. So, how about, "It must be able to sustain transfer of information between two autonomous agents." But then I've used the concepts of "two" and "autonomous agent." Eek!
So a better specification would be, "The world must contain information." Or, more rigorously, "The world must have observable phenomena that aid in predicting future phenomena."
Now, can such a simulated world exist? And is there a whole branch of philosophy addressing this problem that I need to brush up on?
It's nice that you're honest and open about the fact that your position presupposes an exceptionally weird sort of skepticism (hence the need to fall back on the possibility of being in The Matrix). Since humans are finite, there's no reason to think absolute confidence in everything isn't attainable; just enumerate the biases. Only by positing some weird sort of subjectivism can you get the sort of infinite regress needed to discount the possibility; I can never really know because I'm trapped inside my head. Why is the uncertainty fetish so appealing that people will entertain such weird ideas to retain it?
Why is the uncertainty fetish so appealing that people will entertain such weird ideas to retain it?
Why is the certainty fetish so appealing that people will ignore the obvious fact that all conclusions are contingent?
Poke, consideration of the possibility of being in the matrix doesn't necessarily require "an exceptionally weird sort of skepticism." It might only require an "exceptionally weird" form of futurism.
If I correctly remember my Jesuit teachers' explanation from 40 years ago, the epistemological branch of classical philosophy deals thusly with this situation: an "a priori" assertion is one which exhibits the twin characteristics of universality and necessity. 2+2=4 would be such an assertion. Should there ever be an example which violates this a priori assertion, it is simply held to be unreal, because reality is a construct of consensus. Consensus dictates to reality but not to experience. So if, for example, you see a ghost or are abducted by a UFO, you're simply out of contact with reality, and, as a crazy person, you can't legitimately challenge what the rest of us hold to be indisputably true.
Should there ever be an example which violates this a priori assertion, it is simply held to be unreal, because reality is a construct of consensus.
I hope the gentleman got better.
Eli said:
Peter de Blanc has an amusing anecdote on this point, which he is welcome to retell in the comments.
I'm sorry. Eliezer, can you please explain to me what you mean when you say how certain you are (a probability percentage) that something is true? I've studied a lot of statistics, but I really have no idea what you mean.
If I say that this fair coin in my hand has a 50% chance of coming up heads, then that means that if I flip it a lot of times, then it'll be heads 50% of the time. I can do that with a lot of real, measurable things.
So, what do you mean by, you are 99% certain of something?
It means that, given Eliezer's knowledge, the probabilities of the necessary preconditions for the state in question multiplied together yield 0.99.
If you have a coin that you believe to be fair, and you flip it, how likely do you think it is that it will land on edge?
Q, Eliezer's probabilities are Bayesian probabilities. (Note the "Bayesian" tag on the post.)
Q: let's say I offer you a choice between (a) and (b).
a. Tomorrow morning you can flip that coin in your hand, and if it comes up heads, then I'll give you a dollar. b. Tomorrow morning, if it is raining, then I will give you a dollar.
If you choose (b) then your probability for rain tomorrow morning must be higher than 1/2.
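The logic of the choice can be written out as a minimal sketch (the rain credence of 0.7 is an arbitrary illustrative number, and a dollar is assumed to have the same value in both situations, which the next comment rightly questions):

```python
def expected_value(p_win, payoff=1.0):
    """Expected dollar payoff of a bet paying `payoff` with probability p_win."""
    return p_win * payoff

p_heads = 0.5  # a fair coin
p_rain = 0.7   # illustrative subjective credence that it rains tomorrow

ev_a = expected_value(p_heads)  # option (a): paid if the coin lands heads
ev_b = expected_value(p_rain)   # option (b): paid if it rains

# Preferring (b) over (a) reveals a rain credence above 1/2,
# assuming linear value for money.
print("choose (b)" if ev_b > ev_a else "choose (a)")
```

This is the standard operational reading of a subjective probability: it is whatever number makes your betting behavior consistent.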
Well... kinda. It could just be that if it rains, you will need to buy a $1 umbrella, but if it doesn't rain then you don't need money at all. It would be nice if we had some sort of measurement of reward that didn't depend on the situat...
No, no, no. Three problems, one in the analogy and two in the probabilities.
First, an individual particle can briefly exceed the speed of light; the group velocity cannot. Go read up on Cerenkov radiation: it's the blue glow created by (IIRC) neutrons briefly breaking through c, then slowing down. The decrease in energy registers as emitted blue light.
Second: conditional probabilities are not necessarily given by a ratio of densities. You're conditioning on (or working with) events of measure-zero. These puzzlers are why measure theory exists -- to...
First, an individual particle can briefly exceed the speed of light; the group velocity cannot. Go read up on Cerenkov radiation: it's the blue glow created by (IIRC) neutrons briefly breaking through c, then slowing down. The decrease in energy registers as emitted blue light.
Breaking through the speed of light in a medium, but remaining under c (the speed of light in a vacuum).
Thank you.
I've actually used Bayesian perspectives (maximum entropy, etc.) but I've never looked at it as a subjective degree of plausibility. Based on the Wikipedia article, I guess I haven't been looking at it the way others have. I understand where Eli is coming from in applying information theory. He doesn't have complete information, so he won't say that he has probability 1. He could get another bit of information which changes his belief, but he thinks (based on prior observation) that the chance of that is very low.
I guess, I have problem with him maybe overreachi...
It doesn't make sense to say that this subjective personal probability (which, by the way, he chose to calculate based on a tiny subset of the vast amounts of information he has in his mind) based on his observed evidence is somehow the absolute probability that, say, evolution is "true".
Where does he? I assume as a Bayesian he would deny the reality of any such "absolute probability".
Cumulant-nimbus,
There's no shortage of statisticians who would disagree with your assertion that the probability of a probability is superfluous. A good place to start is with de Finetti's theorem.
de Finetti assumes conditioning. If I am taking conditional expectations, then iterated expectations (with different conditionings) is very useful.
But iterated expectations, all with the same conditioning, is superfluous. That's why I took care not to put any conditioning into my expectations.
Or we can criticize the probability-of-a-probability musings another way as having undefined filtrations for each of the stated probabilities.
"But iterated expectations, all with the same conditioning, is superfluous. That's why I took care not to put any conditioning into my expectations."
Fair enough. My point is that the de Finetti theorem provides a way to think sensibly about having a probability of a probability, particularly in a Bayesian framework.
Let me give a toy example to demonstrate why the concept is not superfluous, as you assert. Compare two situations:
(a) I toss a coin that I know to be as symmetrical in construction as possible.
(b) A magician friend of mine, who I know...
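The comment is cut off, but the standard version of this toy example can be sketched as a simulation. Assume (this completion is my guess, not the commenter's text) that the magician's coin is either two-headed or two-tailed, each with probability 1/2. Both coins give the same marginal P(heads) = 1/2, yet the "probability of a probability" differs, and the difference shows up as soon as you condition on the first flip:

```python
import random

random.seed(0)
N = 100_000

def flip_pair(magician):
    """Return (first, second) flips of one coin."""
    if magician:
        # ASSUMED setup: coin is two-headed or two-tailed, 50/50,
        # so the two flips are perfectly correlated.
        side = random.random() < 0.5
        return side, side
    # Ordinary fair coin: the two flips are independent.
    return random.random() < 0.5, random.random() < 0.5

for magician in (False, True):
    pairs = [flip_pair(magician) for _ in range(N)]
    p_first = sum(a for a, _ in pairs) / N
    second_given_heads = [b for a, b in pairs if a]
    p_cond = sum(second_given_heads) / len(second_given_heads)
    print(magician, round(p_first, 2), round(p_cond, 2))
# Marginal P(heads) is ~0.5 in both cases, but P(second heads | first heads)
# is ~0.5 for the fair coin and exactly 1.0 for the magician's coin.
```

This is the sense in which de Finetti's theorem makes a distribution over the coin's bias meaningful: the two situations have identical one-flip probabilities but different exchangeable sequences.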
I'm totally missing the "N independent statements" part of the discussion; that seems like a total non-sequitur to me. Can someone point me at some kind of explanation?
-Robin
Good point about infinite certainty, poor example.
Assert 99.9999999999% confidence, and you're taking it up to a trillion. Now you're going to talk for a hundred human lifetimes, and not be wrong even once?
Leaky induction. Didn't that feel a little forced?
evidence that convinced me that 2 + 2 = 4 in the first place.
"(the sum of) 2 + 2" means "4"; or to make it more obvious, "1 + 1" means "2". These aren't statements about the real world*, hence they're not subject to falsification, they contain no component ...
Assert a confidence of (1 - 1/googolplex) and your ego far exceeds that of mental patients who think they're God.
For the record, I assign a probability larger than 1/googolplex to the possibility that one of the mental patients actually is God.
I don't think you could get up to 99.99% confidence for assertions like "53 is a prime number". Yes, it seems likely, but by the time you tried to set up protocols that would let you assert 10,000 independent statements of this sort - that is, not just a set of statements about prime numbers, but a new protocol each time - you would fail more than once.
If you forced me to come up with 10,000 statements I knew to >=99.99% I would find it easy, given sufficient time. Most of them would have probability much, much more than 99.99%, however.
Here ...
I'm really not sure what exactly you mean by "independent statements" in this post.
If you put a chair next to another chair, and you found that there were three chairs where before there was one, would it be more likely that 1 + 1 = 3 or that arithmetic is not the correct model to describe these chairs? A true mathematical proposition is a pure conduit between its premises and axioms and its conclusions.
But note that you can never be quite completely certain that you haven't made any mistakes. It is uncertain whether "S0 + S0 = SS0" is a true proposition of Peano arithmetic, because we may all coincidentally have gotten something hilariously wrong.
This is why, when an experiment does not go as predicted, the first recourse is to check that your math has been done correctly.
Eliezer, what could convince you that Bayes' Theorem itself was wrong? Can you properly adjust your beliefs to account for evidence if that adjustment is systematically wrong?
"But once I assign a probability of 1 to a proposition, I can never undo it. No matter what I see or learn, I have to reject everything that disagrees with the axiom. "
I think this is what causes the religious argument paradox. On a deep down level, most of us realize this is true.
It's not at all hard for a mathematician to come up with arbitrarily large numbers of statements that have about the same confidence as 2+2=4. There are lots of ways. Perhaps the most obvious is "n+2 = (n+1)+1" for an arbitrarily large whole number n. It's rather silly to talk about how many lifetimes it would take to say these statements, because there they are in 2 seconds.
I suppose the anticipated response would be to question whether these are independent statements. Why would they not be? If we are anticipating that 2+2 may not be 4 I don't s...
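The construction really is mechanical, which is the commenter's point; a sketch:

```python
# Generate N statements of the form "n + 2 = (n+1) + 1" and check each.
# Producing them costs almost nothing: there is no shortage of statements
# held with roughly the same confidence as 2 + 2 = 4.
N = 10_000
statements = [f"{n} + 2 = ({n} + 1) + 1" for n in range(N)]
assert all(n + 2 == (n + 1) + 1 for n in range(N))
print(len(statements), "statements generated, all verified")
```

Whether these count as "independent" in the post's intended sense - a fresh protocol each time, rather than one protocol stamped out N times - is exactly the question the comment raises.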
I'm 99 percent sure that the statement "consciousness exists/is" has a PROBABILITY 1 of being true. All of the specificities we associate with it certainly do not, but the fact that something is experiencing something seems irrefutable. Can someone concoct a line of reasoning that would prove this wrong, say, similar to 2 + 2 = 3?
The Banach Tarski Paradox is a plausible way in which 1 = 2, and thus 3 = 2 + 2.
I agree that you can never be "infinitely certain" about the way the physical world is (because there's always a very tiny possibility that things might suddenly change, or everything is just a simulation, or a dream, or [...]), but you should assign probability 1 to mathematical statements for which there isn't just evidence, but actual, solid proof.
Suppose you have the choice between the following options: A. You get a lottery with a 1-epsilon chance of winning. B. You win if 2+2=4 and 53 is a prime number and Pi is an irrational number.
Is there any Ep
...The link to Peter de Blanc is dead, try https://web.archive.org/web/20160305092845/http://www.spaceandgames.com/?p=27
I just had a click moment, and click moments should be shared, so here I go.
I was thinking - why shouldn't I be able to make 10,000 statements similar to 2 + 2 = 4 and get them all right? 1,000,000 even? 1,000,000,000? Any arbitrary N?
All I have to do is come up with simple additions of different numbers, and since it's all math and they are all tautologies, there is no reason why I can't be right on all of them. Or is there?
So the obvious reason is that it takes time, and my lifespan is limited. Once I'm dead, I can't make any mor...
I have suddenly become mildly interested in investigating an edge case of this argument. I am not coming at this from the perspective of defending the statement of infinite certainty, which is only useful in certain nonsense-arguments. I just found it kinda fun, and maybe an answer would improve my understanding of the reasoning behind this post.
So, let's suppose you have a statement so utterly trivial and containing so little practical sense you wouldn't even think of it as a worthwhile statement, for example "A is A". Now, this is a bad example, because you ca...
In “Absolute Authority,” I argued that you don’t need infinite certainty:
Concerning the proposition that 2 + 2 = 4, we must distinguish between the map and the territory. Given the seeming absolute stability and universality of physical laws, it’s possible that never, in the whole history of the universe, has any particle exceeded the local lightspeed limit. That is, the lightspeed limit may be not just true 99% of the time, or 99.9999% of the time, or (1 - 1/googolplex) of the time, but simply always and absolutely true.
But whether we can ever have absolute confidence in the lightspeed limit is a whole ’nother question. The map is not the territory.
It may be entirely and wholly true that a student plagiarized their assignment, but whether you have any knowledge of this fact at all—let alone absolute confidence in the belief—is a separate issue. If you flip a coin and then don’t look at it, it may be completely true that the coin is showing heads, and you may be completely unsure of whether the coin is showing heads or tails. A degree of uncertainty is not the same as a degree of truth or a frequency of occurrence.
The same holds for mathematical truths. It’s questionable whether the statement “2 + 2 = 4” or “In Peano arithmetic, SS0 + SS0 = SSSS0” can be said to be true in any purely abstract sense, apart from physical systems that seem to behave in ways similar to the Peano axioms. Having said this, I will charge right ahead and guess that, in whatever sense “2 + 2 = 4” is true at all, it is always and precisely true, not just roughly true (“2 + 2 actually equals 4.0000004”) or true 999,999,999,999 times out of 1,000,000,000,000.
I’m not totally sure what “true” should mean in this case, but I stand by my guess. The credibility of “2 + 2 = 4 is always true” far exceeds the credibility of any particular philosophical position on what “true,” “always,” or “is” means in the statement above.
This doesn’t mean, though, that I have absolute confidence that 2 + 2 = 4. See the previous discussion on how to convince me that 2 + 2 = 3, which could be done using much the same sort of evidence that convinced me that 2 + 2 = 4 in the first place. I could have hallucinated all that previous evidence, or I could be misremembering it. In the annals of neurology there are stranger brain dysfunctions than this.
So if we attach some probability to the statement “2 + 2 = 4,” then what should the probability be? What you seek to attain in a case like this is good calibration—statements to which you assign “99% probability” come true 99 times out of 100. This is actually a hell of a lot more difficult than you might think. Take a hundred people, and ask each of them to make ten statements of which they are “99% confident.” Of the 1,000 statements, do you think that around 10 will be wrong?
I am not going to discuss the actual experiments that have been done on calibration—you can find them in my book chapter on cognitive biases and global catastrophic risk1—because I’ve seen that when I blurt this out to people without proper preparation, they thereafter use it as a Fully General Counterargument, which somehow leaps to mind whenever they have to discount the confidence of someone whose opinion they dislike, and fails to be available when they consider their own opinions. So I try not to talk about the experiments on calibration except as part of a structured presentation of rationality that includes warnings against motivated skepticism.
But the observed calibration of human beings who say they are “99% confident” is not 99% accuracy.
Suppose you say that you’re 99.99% confident that 2 + 2 = 4. Then you have just asserted that you could make 10,000 independent statements, in which you repose equal confidence, and be wrong, on average, around once. Maybe for 2 + 2 = 4 this extraordinary degree of confidence would be possible: “2 + 2 = 4” is extremely simple, and mathematical as well as empirical, and widely believed socially (not with passionate affirmation but just quietly taken for granted). So maybe you really could get up to 99.99% confidence on this one.
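What perfect calibration at 99.99% would look like can be simulated (a sketch of the ideal case; actual human calibration is the empirical question the post is about):

```python
import random

random.seed(1)
CONFIDENCE = 0.9999
N = 10_000

# A perfectly calibrated agent: each of N independent statements
# comes true with probability exactly equal to the stated confidence.
errors = sum(random.random() >= CONFIDENCE for _ in range(N))
print(f"{errors} error(s) in {N} statements")  # expected value: 1
```

The hard part, of course, is not the arithmetic but producing 10,000 genuinely independent statements at that confidence in the first place.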
I don’t think you could get up to 99.99% confidence for assertions like “53 is a prime number.” Yes, it seems likely, but by the time you tried to set up protocols that would let you assert 10,000 independent statements of this sort—that is, not just a set of statements about prime numbers, but a new protocol each time—you would fail more than once.2
Yet the map is not the territory: If I say that I am 99% confident that 2 + 2 = 4, it doesn’t mean that I think “2 + 2 = 4” is true to within 99% precision, or that “2 + 2 = 4” is true 99 times out of 100. The proposition in which I repose my confidence is the proposition that “2 + 2 = 4 is always and exactly true,” not the proposition “2 + 2 = 4 is mostly and usually true.”
As for the notion that you could get up to 100% confidence in a mathematical proposition—well, really now! If you say 99.9999% confidence, you’re implying that you could make one million equally fraught statements, one after the other, and be wrong, on average, about once. That’s around a solid year’s worth of talking, if you can make one assertion every 20 seconds and you talk for 16 hours a day.
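The arithmetic behind "a solid year's worth of talking" checks out (a back-of-envelope sketch):

```python
statements = 1_000_000
per_day = 16 * 3600 // 20  # one assertion every 20 s, 16 h/day
days = statements / per_day
print(per_day, round(days))  # 2880 assertions/day, ~347 days: about a year
```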
Assert 99.9999999999% confidence, and you’re taking it up to a trillion. Now you’re going to talk for a hundred human lifetimes, and not be wrong even once?
Assert a confidence of (1 - 1/googolplex) and your ego far exceeds that of mental patients who think they’re God.
And a googolplex is a lot smaller than even relatively small inconceivably huge numbers like 3 ↑↑↑ 3. But even a confidence of (1 - 1/3 ↑↑↑ 3) isn’t all that much closer to PROBABILITY 1 than being 90% sure of something.
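Knuth's up-arrow notation, used for 3 ↑↑↑ 3 above, can be defined in a few lines; even one arrow level down the numbers are already enormous (computing 3 ↑↑↑ 3 itself is utterly infeasible, so this sketch stops at 3 ↑↑ 3):

```python
def up(a, n, b):
    """Knuth's up-arrow: a followed by n up-arrows, then b."""
    if n == 1:
        return a ** b   # one arrow is ordinary exponentiation
    if b == 0:
        return 1
    return up(a, n - 1, up(a, n, b - 1))

print(up(3, 1, 3))  # 3 ↑ 3  = 3**3 = 27
print(up(3, 2, 3))  # 3 ↑↑ 3 = 3**(3**3) = 3**27 = 7625597484987
# 3 ↑↑↑ 3 = 3 ↑↑ (3 ↑↑ 3): a power tower of 3s of height 7,625,597,484,987.
```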
If all else fails, the hypothetical Dark Lords of the Matrix, who are right now tampering with your brain’s credibility assessment of this very sentence, will bar the path and defend us from the scourge of infinite certainty.
Am I absolutely sure of that?
Why, of course not.
As Rafal Smigrodski once said:
1Eliezer Yudkowsky, “Cognitive Biases Potentially Affecting Judgment of Global Risks,” in Global Catastrophic Risks, ed. Nick Bostrom and Milan M. Ćirković (New York: Oxford University Press, 2008), 91–119.
2Peter de Blanc has an amusing anecdote on this point: http://www.spaceandgames.com/?p=27. (I told him not to do it again.)