0 And 1 Are Not Probabilities

Post author: Eliezer_Yudkowsky 10 January 2008 06:58AM

Followup to: Infinite Certainty

1, 2, and 3 are all integers, and so is -4.  If you keep counting up, or keep counting down, you're bound to encounter a whole lot more integers.  You will not, however, encounter anything called "positive infinity" or "negative infinity", so these are not integers.

Positive and negative infinity are not integers, but rather special symbols for talking about the behavior of integers.  People sometimes say something like, "5 + infinity = infinity", because if you start at 5 and keep counting up without ever stopping, you'll get higher and higher numbers without limit.  But it doesn't follow from this that "infinity - infinity = 5".  You can't count up from 0 without ever stopping, and then count down without ever stopping, and then find yourself at 5 when you're done.

From this we can see that infinity is not only not-an-integer, it doesn't even behave like an integer.  If you unwisely try to mix up infinities with integers, you'll need all sorts of special new inconsistent-seeming behaviors which you don't need for 1, 2, 3 and other actual integers.

Even though infinity isn't an integer, you don't have to worry about being left at a loss for numbers.  Although people have seen five sheep, millions of grains of sand, and septillions of atoms, no one has ever counted an infinity of anything. The same with continuous quantities—people have measured dust specks a millimeter across, animals a meter across, cities kilometers across, and galaxies thousands of lightyears across, but no one has ever measured anything an infinity across.  In the real world, you don't need a whole lot of infinity.

(I should note for the more sophisticated readers in the audience that they do not need to write me with elaborate explanations of, say, the difference between ordinal numbers and cardinal numbers.  Yes, I possess various advanced set-theoretic definitions of infinity, but I don't see a good use for them in probability theory.  See below.)

In the usual way of writing probabilities, probabilities are between 0 and 1.  A coin might have a probability of 0.5 of coming up tails, or the weatherman might assign probability 0.9 to rain tomorrow.

This isn't the only way of writing probabilities, though.  For example, you can transform probabilities into odds via the transformation O = (P / (1 - P)).  So a probability of 50% would go to odds of 0.5/0.5 or 1, usually written 1:1, while a probability of 0.9 would go to odds of 0.9/0.1 or 9, usually written 9:1.  To take odds back to probabilities you use P = (O / (1 + O)), and this is perfectly reversible, so the transformation is an isomorphism—a two-way reversible mapping.  Thus, probabilities and odds are isomorphic, and you can use one or the other according to convenience.
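The two transformations can be sketched in a few lines of Python (the function names are mine, for illustration):

```python
def prob_to_odds(p):
    """O = P / (1 - P); defined for probabilities strictly between 0 and 1."""
    return p / (1 - p)

def odds_to_prob(o):
    """P = O / (1 + O); the inverse mapping."""
    return o / (1 + o)

print(prob_to_odds(0.5))                # 1.0, i.e. odds of 1:1
print(prob_to_odds(0.9))                # ~9, i.e. odds of 9:1
print(odds_to_prob(prob_to_odds(0.3)))  # recovers ~0.3: the map is reversible
```

Note that `prob_to_odds(1.0)` divides by zero, which is exactly the point developed below: probability 1 corresponds to infinite odds.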

For example, it's more convenient to use odds when you're doing Bayesian updates.  Let's say that I roll a six-sided die:  If any face except 1 comes up, there's a 10% chance of hearing a bell, but if the face 1 comes up, there's a 20% chance of hearing the bell.  Now I roll the die, and hear a bell.  What are the odds that the face showing is 1?  Well, the prior odds are 1:5 (corresponding to the real number 1/5 = 0.20) and the likelihood ratio is 0.2:0.1 (corresponding to the real number 2) and I can just multiply these two together to get the posterior odds 2:5 (corresponding to the real number 2/5 or 0.40).  Then I convert back into a probability, if I like, and get (0.4 / 1.4) = 2/7 = ~29%.
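The die-and-bell arithmetic can be checked with exact fractions; this is just the calculation from the paragraph above, in code:

```python
from fractions import Fraction

prior_odds = Fraction(1, 5)        # odds of face 1 vs. any other face: 1:5
likelihood_ratio = Fraction(2, 1)  # P(bell | 1) / P(bell | not 1) = 0.2 / 0.1

# In odds form, a Bayesian update is a single multiplication.
posterior_odds = prior_odds * likelihood_ratio
posterior_prob = posterior_odds / (1 + posterior_odds)  # P = O / (1 + O)

print(posterior_odds)  # 2/5
print(posterior_prob)  # 2/7, about 29%
```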

So odds are more manageable for Bayesian updates—if you use probabilities, you've got to deploy Bayes's Theorem in its complicated version.  But probabilities are more convenient for answering questions like "If I roll a six-sided die, what's the chance of seeing a number from 1 to 4?"  You can add up the probabilities of 1/6 for each side and get 4/6, but you can't add up the odds ratios of 0.2 for each side and get an odds ratio of 0.8.
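That asymmetry is easy to verify numerically: probabilities of disjoint events add, while their odds do not (a sketch using the same die example):

```python
from fractions import Fraction

def odds(p):
    """O = P / (1 - P)."""
    return p / (1 - p)

p_side = Fraction(1, 6)      # probability of any one face
p_1to4 = 4 * p_side          # disjoint events: probabilities add, giving 4/6

o_side = odds(p_side)        # (1/6) / (5/6) = 1/5 = 0.2
naive_sum = 4 * o_side       # 4/5 = 0.8 -- NOT the odds of "1 through 4"
true_odds = odds(p_1to4)     # (4/6) / (2/6) = 2

print(naive_sum, true_odds)  # 4/5 vs. 2: adding odds gives the wrong answer
```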

Why am I saying all this?  To show that "odds ratios" are just as legitimate a way of mapping uncertainties onto real numbers as "probabilities".  Odds ratios are more convenient for some operations, probabilities are more convenient for others.  A famous proof called Cox's Theorem (plus various extensions and refinements thereof) shows that all ways of representing uncertainties that obey some reasonable-sounding constraints, end up isomorphic to each other.

Why does it matter that odds ratios are just as legitimate as probabilities?  Probabilities as ordinarily written are between 0 and 1, and both 0 and 1 look like they ought to be readily reachable quantities—it's easy to see 1 zebra or 0 unicorns.  But when you transform probabilities onto odds ratios, 0 goes to 0, but 1 goes to positive infinity.  Now absolute truth doesn't look like it should be so easy to reach.

A representation that makes it even simpler to do Bayesian updates is the log odds—this is how E. T. Jaynes recommended thinking about probabilities.  For example, let's say that the prior probability of a proposition is 0.0001—this corresponds to a log odds of around -40 decibels.  Then you see evidence that seems 100 times more likely if the proposition is true than if it is false.  This is 20 decibels of evidence.  So the posterior odds are around -40 db + 20 db = -20 db, that is, the posterior probability is ~0.01.
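Jaynes's decibel bookkeeping can be sketched directly; the helper names here are mine, not standard:

```python
import math

def prob_to_db(p):
    """Log odds in decibels: 10 * log10(P / (1 - P))."""
    return 10 * math.log10(p / (1 - p))

def db_to_prob(db):
    """Invert: odds = 10**(db / 10), then P = O / (1 + O)."""
    odds = 10 ** (db / 10)
    return odds / (1 + odds)

prior_db = prob_to_db(0.0001)          # about -40 dB
evidence_db = 10 * math.log10(100)     # a 100:1 likelihood ratio is 20 dB
posterior_db = prior_db + evidence_db  # updating is just addition: about -20 dB

print(round(posterior_db, 1))              # -20.0
print(round(db_to_prob(posterior_db), 4))  # ~0.0099
```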

When you transform probabilities to log odds, 0 goes onto negative infinity and 1 goes onto positive infinity.  Now both infinite certainty and infinite improbability seem a bit more out-of-reach. 

In probabilities, 0.9999 and 0.99999 seem to be only 0.00009 apart, so that 0.502 is much further away from 0.503 than 0.9999 is from 0.99999.  To get to probability 1 from probability 0.99999, it seems like you should need to travel a distance of merely 0.00001.

But when you transform to odds ratios, 0.502 and .503 go to 1.008 and 1.012, and 0.9999 and 0.99999 go to 9,999 and 99,999.  And when you transform to log odds, 0.502 and 0.503 go to 0.03 decibels and 0.05 decibels, but 0.9999 and 0.99999 go to 40 decibels and 50 decibels.
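These transformed values can be reproduced mechanically (a quick sketch):

```python
import math

def odds(p):
    return p / (1 - p)

def decibels(p):
    return 10 * math.log10(odds(p))

# Nearly identical gaps in probability become wildly different gaps
# in odds and in log odds near the top of the scale.
for p in (0.502, 0.503, 0.9999, 0.99999):
    print(p, round(odds(p), 3), round(decibels(p), 2))
# 0.502   -> odds ~1.008,  ~0.03 dB
# 0.503   -> odds ~1.012,  ~0.05 dB
# 0.9999  -> odds ~9999,   ~40 dB
# 0.99999 -> odds ~99999,  ~50 dB
```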

When you work in log odds, the distance between any two degrees of uncertainty equals the amount of evidence you would need to go from one to the other.  That is, the log odds gives us a natural measure of spacing among degrees of confidence.

Using the log odds exposes the fact that reaching infinite certainty requires infinitely strong evidence, just as infinite absurdity requires infinitely strong counterevidence.

Furthermore, all sorts of standard theorems in probability have special cases if you try to plug 1s or 0s into them—like what happens if you try to do a Bayesian update on an observation to which you assigned probability 0.
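The clearest such special case: Bayes's Theorem has P(e) in the denominator, so updating on an observation you assigned probability 0 is simply undefined. A sketch with made-up numbers:

```python
def bayes_update(prior_h, p_e_given_h, p_e):
    """P(H | e) = P(e | H) * P(H) / P(e)."""
    return p_e_given_h * prior_h / p_e

print(bayes_update(0.2, 0.5, 0.25))  # 0.4: an ordinary update

try:
    bayes_update(0.2, 0.5, 0.0)      # "impossible" observation actually observed
except ZeroDivisionError:
    print("update undefined: divided by P(e) = 0")
```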

So I propose that it makes sense to say that 1 and 0 are not in the probabilities; just as negative and positive infinity, which do not obey the field axioms, are not in the real numbers.

The main reason this would upset probability theorists is that we would need to rederive theorems previously obtained by assuming that we can marginalize over a joint probability by adding up all the pieces and having them sum to 1.

However, in the real world, when you roll a die, it doesn't literally have infinite certainty of coming up some number between 1 and 6.  The die might land on its edge; or get struck by a meteor; or the Dark Lords of the Matrix might reach in and write "37" on one side.

If you made a magical symbol to stand for "all possibilities I haven't considered", then you could marginalize over the events including this magical symbol, and arrive at a magical symbol "T" that stands for infinite certainty.

But I would rather ask whether there's some way to derive a theorem without using magic symbols with special behaviors.  That would be more elegant.  Just as there are mathematicians who refuse to believe in double negation or infinite sets, I would like to be a probability theorist who doesn't believe in absolute certainty.

PS:  Here's Peter de Blanc's "mathematical certainty" anecdote.  (I told him not to do it again.)

Part of the Overly Convenient Excuses subsequence of How To Actually Change Your Mind

Next post: "Feeling Rational" (start of next subsequence)

Previous post: "Infinite Certainty"

Comments (85)

Comment author: Paul_Gowder 10 January 2008 07:39:57AM 1 point

hmm... I feel even more confident about the existence of probability-zero statements than I feel about the existence of probability-1 statements. Because not only do we have logical contradictions, but we also have incoherent statements (like Husserl's "the green is either").

Can one form subjective probabilities over the truth of "the green is either" at all? I don't think so, but I remember a some-months-ago suggestion of Robin's about "impossible possible worlds," which might also imply the ability to form probability estimates over incoherencies. (Why not incoherent worlds? One might ask.) So the idea is at least potentially on the table.

And then it seems obvious that we will forever, across all space and time, have no evidence to support an incoherent proposition. That's as good an approximation of infinite lack of evidence as I can come up with. P("the green is either")=0?

Comment author: RobbBB 21 November 2012 06:58:52PM 9 points

If you assign 0 to logical contradictions, you should assign 1 to the negations of logical contradictions. (Particularly since your confidence in bivalence and the power of negation is what allowed you to doubt the truth of the contradiction in the first place.) So it's strange to say that you feel safer appealing to 0s than to 1s.

For my part, I have a hard time convincing myself that there's simply no (epistemic) chance that Graham Priest is right. On the other hand, assigning any value but 1 to the sentence "All bachelors are bachelors" just seems perverse. It seems as though I could only get that sentence wrong if I misunderstand it. But what am I assigning a probability to, if not the truth of the sentence as I understand it?

Another way of saying this is that I feel queasy assigning a nonzero probability to "Not all bachelors are bachelors," (i.e., ¬(p → p)) even though I think it probably makes some sense to entertain as a vanishingly small possibility "All bachelors are non-bachelors" (i.e., p → ¬p, all bachelors are contradictory objects).

Comment author: Unknown 10 January 2008 07:45:05AM 4 points

One answer would be that an incoherent proposition is not a proposition, and so doesn't have any probability (not even zero, if zero is a probability.)

Another answer would be that there is some probability that you are wrong that the proposition is incoherent (you might be forgetting your knowledge of English), and therefore also some probability that "the green is either" is both coherent and true.

Comment author: j.edwards 10 January 2008 07:49:18AM 6 points

It's difficult to assign probability to incoherent statements, because since we can't mean anything by them, we can't assert a referent to the statement -- in that sense, the probability is indeterminate (additionally, one could easily imagine a language in which a statement such as "the green is either" has a perfectly coherent meaning -- and we can't say that's not what we meant, since we didn't mean anything). Recall also that each probability zero statement implies a probability one statement by its denial and vice versa, so one is equally capable of imagining them, if in a contrived way.

Comment author: Baruta07 31 October 2012 04:51:00PM 0 points

Putting this in a slightly more coherent way. (I was having some trouble understanding the explanation, so I broke it down into layman's terms, might make it more easily understandable)

If I assign P(0) to "Green is either" Then I assign P(1) to the statement "Green is not either"

If you assign absolute certainty to any one statement you are, by definition assigning absolute impossibility to all other possibilities.

Comment author: Paul_Gowder 10 January 2008 08:20:44AM 1 point

j.edwards, I think your last sentence convinced me to withdraw the objection -- I can't very well assign a probability of 1 to ~"the green is either" can I? Good point, thanks.

Comment author: Paul_Crowley2 10 January 2008 09:19:22AM 2 points

Probabilities of 0 and 1 are perhaps more like the perfectly massless, perfectly inelastic rods we learn about in high school physics - they are useful as part of an idealized model which is often sufficient to accurately predict real-world events, but we know that they are idealizations that will never be seen in real life.

However, I think we can assign the primeness of 7 a value of "so close to 1 that there's no point in worrying about it".

Comment author: Ben_Jones 10 January 2008 10:15:53AM 0 points

In stark contrast to this time last week, I now internally believe the title of this post.

I did enjoy "something, somewhere, is having this thought," Paul, despite all its inherent messiness.

'Green is either' doesn't tell us much. As far as we know it's a nonsensical statement, but I think that makes it more believable than 'green is purple', which makes sense, but seems extremely wrong. You might as well try to assign a probability to 'flarg is nardle'. I can demonstrate that green isn't purple, but not that green isn't either, nor that flarg isn't nardle.

Is there anything truer than '7 is prime'? What's the truest statement anyone can come up with? Can we definitely get no closer to 0 than 1, based on J Edwards & Paul, above?

Comment author: randomwalker 10 January 2008 01:26:02PM 0 points

I think you can still have probabilities sum to 1: probability 1 would be the theoretical limit of probability reaching infinite certitude.  Just like you can integrate over the entire real line, i.e. -∞ to ∞, even though those numbers don't actually exist.

Comment author: Caledonian2 10 January 2008 01:36:53PM 5 points

"i didn't get it."

Easy: it's a demonstration of how you can never be certain that you haven't made an error even on the things you're really sure about.

It's a cheap, dirty demonstration, but one nevertheless.

Comment author: cumulant-nimbus 10 January 2008 04:01:13PM 3 points

You seem to think probabilities of 0 and 1 are mysterious or contradictory when discussing randomness; they aren't.  When you're talking about randomness, you need to define your support.  That mere action gives you places where the probability is zero.  For example: Can the time to run 100m ever be negative?  No?  Then P(t<0) = 0.  And by extension, P(t>=0) = 1.

No puzzle there.  But your transformation to log-odds has some regularity conditions you're violating in those cases: the transform is only defined for probabilities in (0,1).  But that doesn't mean log-odds or probabilities are flawed.  Probabilities of 0 and 1 -- like log-odds of plus-and-minus infinity -- are just filling in the boundaries on the system you've created.  Mathematically, you want to be able to handle limits; that means handling limits as a probability approaches 0 or 1.  That's it.

This shouldn't be some huge philosophical puzzle; it's merely the need to have any mathematical system you use be complete. Sir David Cox would be the first to tell you that.

Comment author: dhasenan 24 May 2013 06:22:25AM 0 points

We certainly can talk about the limit of a function whose codomain is a measure of probability being 1; the limit of the probability of a proposition as the amount of evidence in favor of it approaches infinity is 1. But that doesn't mean that 1 is a measure of probability. Infinity is valid as the limit of a function yielding real numbers, but infinity is not a real number.

As for your example with the amount of time it takes to run a particular distance, I can't be certain that we won't find a region of space with strange temporal effects that allow you to take a walk and arrive at your starting point before you left. This would allow you to run a hundred meters in negative time, in at least one sense of the word. Getting that sort of speed from the runner's point of view would be stranger, but the Dark Lords of the Matrix could probably make it happen.

Comment author: Ben_Jones 10 January 2008 04:09:50PM 1 point

Cumulant - can you state, with infinite certainty, that no-one will ever run faster than light?

Comment author: pnrjulius 27 May 2012 04:14:11AM 2 points

Well, it does seem like someone who travels back in time to reach the finish before he got there has... not actually followed the rules of the 100-meter dash.

Comment author: Dan_Burfoot 10 January 2008 04:11:07PM 0 points

Another way to think about probabilities of 0 and 1 is in terms of code length.

Shannon told us that if we know the probability distribution of a stream of symbols, then the optimal code length for a symbol X is: l(X) = -log p(X)

If you consider that an event has zero probability, then there's no point in assigning a code to it (codespace is a conserved quantity, so if you want to get short codes you can't waste space on events that never happen). But if you think the event has zero probability, and then it happens, you've got a problem - system crash or something.

Likewise, if you think an event has probability of one, there's no point in sending ANY bits. The receiver will also know that the event is certain, so he can just insert the symbol into the stream without being told anything (this could happen in a symbol stream where three As are always followed by a fourth). But again, if you think the event is certain and then it turns out not to be, you've got a problem: the receiver doesn't get the code you want to send.

If you refuse to assign zero or unity probabilities to events, then you have a strong guarantee that you will always be able to encode the symbols that actually appear. You might not get good code lengths, but you'll be able to send your message. So Eliezer's stance can be interpreted as an insistence on making sure there is a code for every symbol sequence, regardless of whether that sequence appears to be impossible.
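The code-length correspondence in this comment can be made concrete: as p(X) goes to 0 the required code length diverges, and at p(X) = 1 it drops to zero bits (a sketch; the helper name is mine):

```python
import math

def code_length_bits(p):
    """Optimal code length for a symbol of probability p: -log2(p)."""
    return -math.log2(p)

print(code_length_bits(0.5))    # 1.0 bit
print(code_length_bits(0.25))   # 2.0 bits
print(code_length_bits(1.0))    # zero bits: a "certain" symbol needs no code
print(code_length_bits(1e-30))  # ~100 bits; as p -> 0, the length -> infinity
```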

Comment author: pnrjulius 27 May 2012 04:17:17AM -3 points

But then, do you really want to build a binary transmitter that is prepared to handle not only sequences of 0 and 1, but also the occasional "zebrafish" and "Thursday" (imagine somehow fitting these into an electrical signal, or don't, because the whole point is that it can't be done)? Such a transmitter has enormously increased complexity to handle signals that, well... won't ever happen. I guess you could say the probability is low enough that the expected utility of dealing with it is not worth it. But what about the chance that a "zebrafish" in the launch codes will wipe out humanity? Surely that expected utility cannot be ignored? (Except it can!)

Comment author: Q 10 January 2008 04:43:39PM 1 point

Brent,

From what I understood on reading the Wikipedia article on Bayesian probability and inferring from how he writes (and correct me if I'm wrong), Eliezer is talking about your "subjective probability." You are a being, have consciousness, and interpret input as information. Given a lot of this information, you've formed an idea that 7 is prime. You've also formed an idea that other people exist, and that the sky is blue, which also have a high subjective probability in your mind because you have a lot of direct information to sustain that belief.

Moreover, if you've ever been wrong before, hopefully you've noticed that you have been wrong before. That's a little information that "you are sometimes wrong about things that you are very sure of". So, you might apply this information to your formula of your probability of the idea that "7 is prime", so you still end up with a high probability, but not 1.

Now, you might not think that "you are sometimes wrong about things that you are sure of" about every single subject, such as primeness. But, what if you had the information that other humans, smart people, have at some point in the past, incorrectly understood the primeness of a number (the anecdote). You might state, that "human beings are sometimes wrong about the primeness of a number," and "I am a human being." Again, if you include that information in your calculation of the probability that the idea that "7 is prime" is true, then you end up with a high probability, but not 1.

(Oh, but what if you didn't make the statement "human beings are sometimes wrong about the primeness of a number", but instead, "this idiot is sometimes wrong about the primeness of a number, but I am never" Well, you can. That's one big problem with Bayesian subjective probabilities. How do we generalize? How can we formalize it so that two people with the same information deterministically get the same probability? Logical (or objective epistemic) probability attempts to answer these questions.)

So, you're right that it is just "a single person" getting it wrong, that his certainty was incorrect. But that's Eliezer's point. We are not supreme beings lording over all reality; we are humans who have memorized some information from the past and made some generalizations, including generalizations that sometimes our generalizations are wrong.

Comment author: Janos2 10 January 2008 04:58:57PM 5 points

I agree with cumulant. The mathematical subject of probability is based on measure theory, which loses a ton of convergence theorems if we exclude 0 and 1. We can agree that things that are not known a priori can't have probability 0 or 1, but I think we must also agree that "an impossible thing will happen soon" has probability 0, because it's a contradiction. An alternate universe in which the number 7 (in the same kind of number system as ours, etc.) is prime is damn-near inconceivable, but an alternate universe in which impossible things are possible is purely absurd.

If our mathematical reasoning is coherent enough for it to be meaningful to make probability assignments then certainly we are not so fundamentally flawed that what we consider tautologies could be false. If you are willing to accept that maybe 0 is 1, then you can't do any of your probability adjustments, or use Bayes' Theorem, or anything of the sort without having a (possibly unstated) caveat that probability theory might be complete nonsense. But what's the probability that probability theory is nonsense (i.e. false or inconsistent)? What does that even mean? We can only assign a probability if that makes sense, so conditioned on the sentence making sense, probability theory must be nonsense with probability 0, no? So averaged over all possible universes (those where probability theory makes sense, and those where it doesn't) the sentence "probability makes sense with probability 1" better approximates the truth value of probability making sense than "probability makes sense with probability p" for p<1, assuming the probability of probability making sense is >0. If it's not, it's still not worse, but what the hell are we even saying?

Comment author: Utilitarian2 10 January 2008 05:40:44PM 1 point

Speaking of measure theory, what probability should we assign to a uniformly distributed random real number on the interval [0, 1] being rational? Something bigger than 0? Maybe in practice we would never hold a uniform distribution over [0, 1] but would assign greater probability to "special" numbers (like, say, 1/2). But regardless of our probability distribution, there will exist subsets of [0, 1] to which we must assign probability 0.

The only way I can see around this is to refuse to talk about infinite (or at least uncountable) sets. Are there others?

Comment author: Janos2 10 January 2008 06:49:09PM 1 point

I suspect Eliezer would object to my post claiming that I'm confusing map and territory, but I don't think that's fair. If there's a map you're trying to use all over the place (and you do seem to), then I claim it makes no sense to put a little region on the map labelled "maybe this map doesn't make any sense at all". If the map seems to make sense and you're still following it for everything, you'll have to ignore that region anyway. So is it really reasonable to claim that "the probability that probability makes sense is <1"?

Utilitarian:

Measure theory gives a clear answer to this: it's 0. Which is fine. For all x, the probability that your rv will take the value x is 0. Actually the probability that your rv is computable is also 0. (Computable numbers are the largest countable class I know of.) What's false is the tempting statement that probability 0 events are impossible. It's only the converse that's true: impossible events have probability 0. There's another tempting statement that's false, namely the statement that if S is an arbitrary collection of disjoint events, the probability of one of them happening is the sum of the probabilities of each one happening. Instead, this only holds for countable sets S. This is part of the definition of a measure.

Comment author: Eliezer_Yudkowsky 10 January 2008 07:16:49PM 7 points

"If there's a map you're trying to use all over the place (and you do seem to), then I claim it makes no sense to put a little region on the map labelled 'maybe this map doesn't make any sense at all'. If the map seems to make sense and you're still following it for everything, you'll have to ignore that region anyway."

Janos, are you saying that it is in fact impossible that your map in fact doesn't make any sense? Because I do, indeed, have a little section of my map labelled "maybe this map doesn't make any sense at all", and every now and then, I think about it a little, because there are so many fundamental premises of which I am unsure even in their definitions. (E.g: "the universe exists", and "but why?") Just because this area of my map drops out of my everyday decision theory due to failure to generate coherent advice on preferences, does not mean it is absent from my map. "You must ignore" or rather "You should usually ignore" is decision theory, and probability theory should usually be firewalled off from preferences.

"Computable numbers are the largest countable class I know of."

Either all countable sets are the same size anyway, or you can generate a larger set by saying "all computable reals plus the halting probability". How about computable with various oracles?

"What's false is the tempting statement that probability 0 events are impossible. It's only the converse that's true: impossible events have probability 0."

If you cannot repose probability 1 in the statement "all events to which I assign probability 0 are impossible" you should apply a correction and stop reposing probability 0 to those events. Do you mean to say that all impossible events have probability 0, plus some more possible events also have probability 0? This makes no sense, especially as a justification for using "probability 0" in a meaningfully calibrated sense.

To use "probability 0" without a finite expectation of being infinitely surprised, you must repose probability 1 in the belief that you use "probability 0" only for actually impossible events; but not necessarily believe that you assign probability 0 to every impossible event (satisfying both conditions implies logical omniscience).

I should mention that I'm also an infinite set atheist.

Comment author: Nick_Tarleton 10 January 2008 07:19:04PM 1 point

I can admit the possibility that probability doesn't work, but not have to do anything about it. If probability doesn't work and I can't make rational decisions, I can expect to be equally screwed no matter what I do, so it cancels out of the equation.

The definable real numbers are a countable superset of the computable ones, I think. (I haven't studied this formally or extensively.)

Comment author: Neel_Krishnaswami 10 January 2008 07:34:58PM 1 point

If you don't want to assume the existence of certain propositions, you're asking for a probability theory corresponding to a co-intuitionistic variant of minimal logic.  (Co-intuitionistic logic is the logic of affirmatively false propositions, and is sometimes called Popperian logic.)  This is a logic with falsehood, disjunction, and conjunction (but not truth), and an operation called co-implication, which I will write a <-- b.

Take your event space L to be a distributive lattice, which does not necessarily have a top element, but does have dual relative pseudo-complements.  Take < to be the ordering on the lattice.  The co-implication (a <-- b) is characterized by:

for all x in L, b < (a or x) if and only if (a <-- b) < x

Now, we take a probability function to be a function from elements of L to the reals, satisfying the following axioms:

1. P(false) = 0
2. If A < B then P(A) <= P(B)
3. P(A or B) + P(A and B) = P(A) + P(B)

There you go. Probability theory without certainty.

This is not terribly satisfying, though, since Bayes's theorem stops working. It fails because conditional probabilities stop working -- they arise from a forced normalization that occurs when you try to construct a lattice homomorphism between an event space and a conditionalized event space.

That is, in ordinary probability theory (where L is a Boolean algebra, and P(true) = 1), you can define a conditionalization space L|A as follows:

L|A = { X in L | X < A }
true' = A
false' = false
and' = and
or' = or
not'(X) = not(X) and A
P'(X) = P(X)/P(A)

with a lattice homomorphism X|A = X and A

Then, the probability of a conditionalized event P'(X|A) = P(X and A)/P(A), which is just what we're used to. Note that the definition of P' is forced by the fact that L|A must be a probability space. In the non-certain variant, there's no unique definition of P', so conditional probabilities are not well-defined.

To regain something like this for cointuitionistic logic, we can switch to tracking degrees of disbelief, rather than degrees of belief. Say that:

1. D(false) = 1
2. For all A, D(A) > 0
3. If A < B then D(A) >= D(B)
4. D(A or B) + D(A and B) = D(A) + D(B)

This will give you the bounds you need to nail down a conditional disbelief function.  I'll leave that as an exercise for the reader.
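As a sanity check (my own construction, not from the comment), the four disbelief axioms can be verified on a tiny finite lattice, taking D(A) = 1 - P(A) with uniform P over the proper subsets of a three-element set:

```python
from itertools import combinations

# The lattice is all *proper* subsets of {0, 1, 2} -- "truth" (the full
# set) is deliberately absent, matching a lattice with no top element.
universe = frozenset({0, 1, 2})
events = [frozenset(s) for r in range(3) for s in combinations(universe, r)]

def P(a):
    return len(a) / len(universe)

def D(a):
    """Degree of disbelief; this particular choice is for illustration only."""
    return 1 - P(a)

assert D(frozenset()) == 1                    # axiom 1: D(false) = 1
assert all(D(a) > 0 for a in events)          # axiom 2: D(A) > 0 everywhere
for a in events:
    for b in events:
        if a <= b:
            assert D(a) >= D(b)               # axiom 3: antitone in the order
        if a | b in events:                   # stay inside the lattice
            assert abs(D(a | b) + D(a & b) - D(a) - D(b)) < 1e-12  # axiom 4
print("all four axioms hold on this toy lattice")
```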

Comment author: Ben_Jones 10 January 2008 07:48:50PM 2 points

"If the map seems to make sense and you're still following it for everything, you'll have to ignore that region anyway."

Just cos it's not a very nice place to visit, doesn't mean it ain't on the map. ;)

Comment author: Psy-Kosh 10 January 2008 07:53:48PM 0 points

"1, 2, and 3 are all integers, and so is -4. If you keep counting up, or keep counting down, you're bound to encounter a whole lot more integers. You will not, however, encounter anything called "positive infinity" or "negative infinity", so these are not integers."

This bothered me; more to the point, it hit on some stuff I've been thinking about.  I realize I don't have a very good way to precisely state what I mean by "finite" or "eventually".

The above, for instance, basically says "if infinity is not an integer, then if I start at an integer and move an integer number of steps away from it, I will still be at an integer that's not infinity, therefore infinity isn't an integer"

But if we allowed infinity to be considered an integer, then we allow an infinite number of steps...

How about this: if N is a non infinite integer, SN is N's successor, PN is N's predecessor, neither SN nor PN will be infinite. Great, no matter where we start from, we can't reach an infinity in one step, so that seems to make this notion more solid.

but... if N is an infinity, then neither SN nor PN (thinking about ordinals now, btw, instead of cardinals) will be finite. Doh.

So the situation seems a bit symmetric here. This is really annoying to me.

I have as of late been getting the notion that the notions of "finite" and "eventually" are so tied to the idea of mathematical induction that it's probably best to define the former in terms of the latter... ie, the number of steps from A to B is finite if and only if induction arguments starting from A and going in the direction toward B actually validly prove the relevant proposition for B.

This is a vague notion, but near as I can tell, it comes closest to what I actually think I mean when I say something like "finite" or "eventually reach in a finite number of steps" or something like that.

ie, finite values are exactly those critters for which mathematical induction arguments can be used. (maybe this is a bad definition. I'm more stating it as a "here's my suspicion of what may be the best basis to really represent the concept")

Anyways, as far as 0,1 not being probabilities... While I agree that one shouldn't believe a proposition with probability 0 or 1, I'm not sure I'd consider them nonprobabilities. Perhaps "unreachable" probabilities instead. Disallowing stuff like sum-to-1 normalizations and so on would seem to require "unnatural" hoops to jump through to get around that.

Unless, of course, someone has come up with a clean model without that. (If so, well, I'm curious too.)

Comment author: Janos2 10 January 2008 07:54:23PM 2 points [-]

Eliezer:

I'm not sure what an "infinite set atheist" is, but it seems from your post that you use different notions of probability than what I think of as standard modern measure theory, which surprises me. Utilitarian's example of a uniform r.v. on [0, 1] is perfect: it must take some value in [0, 1], but for all x it takes value x with probability 0. Clearly you can't say that for all x it's impossible for the r.v. to take value x, because it must in fact take one of those values. But the probabilities are still 0. Pragmatically the way this comes out is that "probability 0" doesn't imply impossible. If you perform an experiment countably-infinitely many times with the probability of a certain outcome being 0 each time, the probability of ever getting that outcome is 0; in this sense you can say the outcome is almost impossible. However it's possible that each outcome individually is almost impossible, even though of course the experiment will have an outcome.

You can object that such experiments are physically impossible e.g. because you can only actually measure/observe countably many outcomes. That's fine; that just means you can get by with only discrete measures. But such assumptions about the real world are not known a priori; I like usual measure theory better, and it seems to do quite a good job of encompassing what I would want to mean by "probability", certainly including the discrete probability spaces in which "probability 0" can safely be interpreted to mean "impossible".
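The uniform-random-variable point above can be illustrated numerically. In this sketch (my own illustration; the sample size and interval endpoints are arbitrary), an interval receives roughly its expected measure, while an exact point is, in practice, never hit:

```python
import random

# My own illustration: for a uniform draw on [0, 1], an interval [a, b]
# gets probability about b - a, but any exact point has measure zero --
# even though each draw lands on *some* point.
random.seed(0)
n = 100_000
samples = [random.random() for _ in range(n)]

in_interval = sum(0.2 <= x <= 0.5 for x in samples) / n
hits_point = sum(x == 0.5 for x in samples) / n

print(in_interval)  # close to 0.3
print(hits_point)   # 0.0 in practice
```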

You're right, it's not that hard to come up with larger countable classes of reals than the computables; I just meant that all of the usual, "rolls-off-the-tip-of-your-tongue" classes seem to be subsets of the computables. But maybe Nick is right, and the definables are broader. I haven't studied this either.

And yes, I also sometimes think about how assumptions I make about life and the perceptible universe could be wrong, but I do not do this much for mathematics that I've studied deeply enough, because I'm almost as convinced of its "truth" as I am of my own ability to reason, and I don't see the use in reasoning about what to do if I can't reason. This is doubly true if the statements I'm contemplating are nonsense unless the math works.

Comment author: michael_vassar3 10 January 2008 08:21:17PM 0 points [-]

Eliezer:

I am curious as to why you asked Peter not to repeat his stunt.

Also, I would really like to know how confident you are in your infinite set atheism and for that matter in your non-standard philosophy of mathematics attitudes in general.

Comment author: Doug_S. 10 January 2008 10:22:55PM 0 points [-]

Regarding infinite set atheism:

Is the set of "possible landing sites of a struck golf ball" finite or infinite?

In other words, can you finitely parameterize locations in space? Physicists normally model "position" as n-tuples of real numbers in a coordinate system; if they were forced to model position discretely, what would happen?

I can claim to see an infinite set each time I use a ruler...

Comment author: TGGP4 10 January 2008 10:38:00PM 0 points [-]

Doug S., I believe according to quantum mechanics the smallest unit of length is Planck length and all distances must be finite multiples of it.

Comment author: komponisto2 10 January 2008 11:14:12PM 1 point [-]

Eliezer:

I should mention that I'm also an infinite set atheist.

You've mentioned this before, and I have always wondered: what does this mean? Does it mean that you don't believe there are any infinite sets? If so, then you have to believe that a mathematician who claims the contrary (and gives the standard proof) is making a mistake somewhere. What is it?

Frankly, even if you actually are a finitist (which I find hard to imagine), it doesn't seem relevant to this discussion: every argument you have presented could equally well have been given by someone who accepts standard mathematics, including the existence of infinite sets.

Comment author: tcpkac 10 January 2008 11:17:17PM 0 points [-]

The nature of 0 & 1 as limit cases seems to be fascinating for the theorists. However, in terms of 'Overcoming Bias', shouldn't we be looking at more mundane conceptions of probability ? EY's posts have drawn attention to the idea that the amount of information needed to add additional certainty on a proposition increases exponentially while the probability increases linearly. This says that in utilitarian terms, not many situations will warrant chasing the additional information above 99.9% certainty (outside technical implementations in nuclear physics, rocket science or whatever). 99.9% as a number is taken out of a hat. In human terms, when we say 'I'm 99.9% sure that 2+2 always =4', we're not talking about 1000 equivalent statements. We're talking about one statement, with a spatial representation of what '100% sure' means with respect to that statement, and 0.1% of that spatial representation allowed for 'niggling doubts', of the sort : what have I forgotten ? What don't I know ? What is inconceivable for me ? The interesting question for 'overcoming bias' is : how do we make that tradeoff between seeking additional information on the one hand and accepting a limited degree of certainty on the other ? As an example (cf. the Evil Lords of the Matrix), considering whether our minds are being controlled by magic mushrooms from Alpha Pictoris may someday increase the 'niggling doubt' range from 0.1% to 5%, but the evidence would have to be shoved in our faces pretty hard first.

Comment author: Rolf_Nelson2 11 January 2008 12:45:10AM 0 points [-]

Doug S., I believe according to quantum mechanics the smallest unit of length is Planck length and all distances must be finite multiples of it.

Not in standard quantum mechanics. Certain of the many unsupported hypotheses of quantum gravity (such as Loop Quantum Gravity) might say something similar to this, but that doesn't abolish every infinite set in the framework. The total number of "places where infinity can happen" in modern models has tended to increase, rather than decrease, over the centuries, as models have gotten more complex. One can never prove that nature isn't "allergic to infinities" (the skeptic can always claim, "wait, but if we looked even closer or farther, maybe we would see a heretofore unobserved brick wall"), but this allergy is not something that has been empirically observed.

Comment author: Doug_S. 11 January 2008 01:56:37AM 3 points [-]

I think Eliezer's "infinite set atheism" is a belief that infinite sets, although well-defined mathematically, do not exist in the "real world"; in other words, that any physical phenomenon that actually occurs can be described using a finite number of bits. (This can include numbers with infinite decimal expansions, as long as they can be generated by a finitely long computer program. Therefore, using pi in equations is not prohibited, because you're using the symbol "pi" to represent the program, which is finite.)

A consequence of "infinite set atheism" seems to be that the universe is a finite state machine (although one that is not necessarily deterministic). Am I understanding this properly?

Comment author: cumulant-nimbus 11 January 2008 02:24:12AM 0 points [-]

What do you mean by "infinite set atheism"? You are essentially stating that you don't believe in mathematical limits -- because that is one of the major consequences of infinite sets (or sequences).

If you don't believe in those... well, you lose calculus, you lose the density of real numbers, you lose the need for or understanding of many events with probability 0 or 1, and you lose the point of Zeno's Paradox. -- Janos is spot on about measure zero not implying impossibility. What is the probability of a golf ball landing at any exact point? Zero. But it has to land somewhere, so no one point is impossible.

Impossibility would mean absence from your sigma algebra. What's that you ask? Without making this painful, you need three things for probability: an idea of what constitutes "the space of everything", an idea of what constitutes possible events out of that space which we can confirm or deny, and an assignment of numbers to those events. (This is often LaTeX'ed as (\Omega, \mathcal{F}, P).) The conversation here seems to be confusing the filtration/sigma-algebra F with the numbers assigned to those events by P.

Can we choose which we're talking about: events or numbers?
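For a finite example of the triple (\Omega, \mathcal{F}, P) described above (my own illustration, not from the comment), take a fair six-sided die: Omega is the set of faces, the sigma-algebra F is the full power set, and P assigns each event its share of the faces:

```python
from itertools import chain, combinations

# My own illustration of the triple (Omega, F, P) for a fair six-sided
# die. For a finite space, F can simply be the full power set of Omega.
omega = frozenset(range(1, 7))

def power_set(s):
    s = list(s)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

F = power_set(omega)                        # the sigma-algebra of events
P = {event: len(event) / 6 for event in F}  # uniform probability measure

print(len(F))                   # 64 events
print(P[frozenset()])           # the impossible event: 0.0
print(P[omega])                 # the sure event: 1.0
print(P[frozenset({2, 4, 6})])  # "even": 0.5
```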

Comment author: Caledonian2 11 January 2008 02:58:43AM 2 points [-]

What is the probability of a golf ball landing at any exact point? Zero.

Wrong.

I don't know which is more painful: Eliezer's errors, or those of his detractors.

Comment author: Matthew2 11 January 2008 03:28:31AM 0 points [-]

Perhaps you could clarify what exactly is an infinite set atheist in a full post...or maybe it's only worth a comment.

Comment author: Z._M._Davis 11 January 2008 03:39:27AM 0 points [-]

Cumulant, I think the idea behind "infinite set atheism" is not that limits don't exist, but that infinities are acceptable only as limits approached in a specified way. On this view, limits are not a consequence of infinite sets, as you contend; rather, only the limit exists, and the infinite set or sequence is merely a sloppy way of thinking about the limit.

Eliezer, I'll second Matthew's suggestion above that you write a post on infinite set atheism; it looks as if we don't understand you.

I think I understand the motive for rejecting infinite sets (viz., that whenever you deal with infinities you get all sorts of ridiculously counterintuitive results--sums coming out different when you reärrange the terms, the Banach-Tarski paradox, &c., &c.), but I'm not sure you can give up infinite sets without also giving up the real numbers (as others have touched on above), which seems very wrong.

Comment author: cumulant-nimbus 11 January 2008 03:40:32AM 1 point [-]

Caledonian: Not wrong. Take the field you're swinging at to be a plane. There are infinitely many points in that plane; that's just the density of the reals.

Now say there is some probability density of landing spots; and, let's say no one spot is special in that it attracts golf balls more than points immediately nearby (i.e. our pdf is continuous and non-atomic). Right there, you need every point (as a singleton) to have measure 0.

Go pick up Billingsley: measure 0 is not the same as impossible nor does it cause any problems.

Comment author: Caledonian2 11 January 2008 04:14:12AM -1 points [-]

Take the field you're swinging at to be a plane. There are infinitely many points in that plane; that's just the density of the reals.

And the location that the ball lands on will also be composed of infinitely many reals. Shall we compare the size of two infinite sets?

Comment author: cumulant-nimbus 11 January 2008 05:09:32AM 0 points [-]

I'd say that the ball is a sphere and consider the first point of impact (i.e. the tangency point of the plane to the sphere). Otherwise, you need to know a lot about the ball and the field where it lands.

You can compare infinite sets. Take the sets A and B, A={1,2,3,...} and B={2,3,4,...}. B is, by construction, a subset of A. There's your comparison; yet, both are infinite sets.

What assumptions would you make for the golf ball and the field? (To keep things clear, can we define events and probabilities separately?)

Comment author: Paul_Gowder 11 January 2008 07:27:53AM 0 points [-]

Caledonian, every undergraduate who has ever taken a statistics class knows that the probability of any single point in a continuous distribution is zero. Probabilities in continuous space are measured on intervals. Basic calculus...

Comment author: Ben_Jones 11 January 2008 10:02:32AM -1 points [-]

I believe according to quantum mechanics the smallest unit of length is Planck length and all distances must be finite multiples of it.

This is what I'm given to understand as well. Doesn't this take the teeth out of Zeno's paradox?

Comment author: Ben_Jones 11 January 2008 10:22:47AM 0 points [-]

Pragmatically the way this comes out is that "probability 0" doesn't imply impossible.

Janos, would you agree that P=0 is a probability to the same degree that infinity is a number? Apologies for double post.

Comment author: Caledonian2 11 January 2008 02:40:58PM 0 points [-]

Caledonian, every undergraduate who has ever taken a statistics class knows that the probability of any single point in a continuous distribution is zero.

Gowder, everyone who's ever given the issue more than three-seconds'-thought knows that no statistical result ever involves a single point.

Comment author: J_Thomas 11 January 2008 03:05:30PM 2 points [-]

Usually, if a die lands on edge we say it was a spoiled throw and do it over. Similarly if a Dark Lord writes 37 on the face that lands on top, we complain that the Dark Lord is spoiling our game and we don't count it.

We count 6 possibilities for a 6-sided die, 5 possibilities for a 5-sided die, 2 possibilities for a 2-sided die, and if you have a die with just one face -- a spherical die -- what's the chance that face will come up?

I think it would be interesting to develop probability theory with no boundaries, with no 0 and 1. It works fine to do it the way it's done now, and the alternative might turn up something interesting too.

Comment author: Janos2 11 January 2008 08:10:02PM 0 points [-]

Ben:

Well, that depends on your number system. For some purposes +infinity is a very useful value to have. For instance if you consider the extended nonnegative reals (i.e. including +infinity) then every measurable nonnegative extended-real-valued function on a measure space actually has a well-defined extended-nonnegative-real-valued integral. There are all kinds of mathematical structures where an infinity element (or many) is indispensable. It's a matter of context. The question of what is a "number" is I think very vague given how many interesting number-like notions mathematicians have come up with. But unquestionably "infinity" is not a natural number, or a real number, or a complex number.

Probability theory, on the other hand, would have to change shape if we comfortably wanted to exclude 0 probabilities. What we now call measures would be wrong for the job. I don't know how it would look, but I find the standard description intuitively appealing enough that I don't think it should be changed. It's probably true that for a Bayesian inference engine of some sort, whose purpose is to find likelihoods of propositions given evidence, the "probabilities" it keeps track of shouldn't become 0 or 1. If there's a rich theory there focussing on how to practically do this stuff (and I bet there is, although I know nothing of it beyond Bayes' Theorem, which is a simple result) then ignoring the possibility of 0s and 1s makes sense there: for example you can use the log odds. But in general probability theory? No.
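The log-odds representation mentioned above can be sketched as follows (my own illustration; the prior and likelihood ratio are made up). In log-odds space, probabilities 0 and 1 sit at minus and plus infinity, so no finite amount of evidence ever reaches them:

```python
import math

# My own illustration of Bayesian updating in log-odds space, where 0
# and 1 correspond to -inf and +inf and adding any finite amount of
# evidence never reaches them.
def to_log_odds(p):
    return math.log(p / (1.0 - p))

def to_prob(log_odds):
    return 1.0 / (1.0 + math.exp(-log_odds))

prior = 0.5             # made-up prior
likelihood_ratio = 4.0  # made-up evidence favoring the hypothesis 4:1

posterior = to_prob(to_log_odds(prior) + math.log(likelihood_ratio))
print(posterior)  # approximately 0.8, matching Bayes: 0.5*4 / (0.5*4 + 0.5)
```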

Comment author: billswift 12 January 2008 04:31:54PM 0 points [-]

I think it would be interesting to develop probability theory with no boundaries, with no 0 and 1. It works fine to do it the way it's done now, and the alternative might turn up something interesting too.

You might want to check out Kosko's Fuzzy Thinking. I haven't gone any further into fuzzy logic, yet, but that sounds like something he discussed. Also, he claimed probability was a subset of fuzzy logic. I intend to follow that up, but there is only one of me, and I found out a long time ago that they can write it faster than I can read it.

Comment author: Curious_Green_Dreams 12 January 2008 07:53:43PM 4 points [-]

"On some golf courses, the fairway is readily accessible, and the sand traps are not. The green is either."

Comment author: Paul_Gowder 12 January 2008 08:48:21PM 1 point [-]

Haha, very nice CGD. Shows how much those philosophers of language know about golf. :-)

Although... hmm... interesting. I think that gives us a way to think about another probability 1 statement: statements that occupy the entire logical space. Example: "either there are probability 1 statements, or there are not probability 1 statements." That statement seems to be true with probability 1...

Comment author: Yaroslav_Bulatov 05 March 2008 01:00:48AM 1 point [-]

Disallowing a symbol for "all events" breaks the definition of a probability space. It's probably easier to allow extended reals and break some field axioms than to figure out how to do rigorous probability without a sigma-algebra.

Comment author: LeBleu 30 May 2008 07:47:03PM 0 points [-]

When re-working this into a book, you need to double check your conversions of log odds into decibels. By definition, decibels are calculated using log base 10, but some of your odds are natural logarithms, which confused the heck out of me when reading those paragraphs.

Probability .0001 = -40 decibels (This is the only correct one in this post; all "decibel" figures afterwards are listed as 10 * the natural logarithm of the odds.)
Probability 0.502 = 0.035 decibels
Probability 0.503 = 0.052 decibels
Probability 0.9999 = 40 decibels
Probability 0.99999 = 50 decibels
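For reference, the conversion being described can be sketched as follows (my own illustration): decibels of evidence are ten times the base-10 logarithm of the odds, not ten times the natural logarithm:

```python
import math

# The decibel convention: ten times the base-10 log of the odds
# (not the natural log).
def decibels(p):
    return 10.0 * math.log10(p / (1.0 - p))

for p in (0.0001, 0.502, 0.503, 0.9999, 0.99999):
    print(f"P = {p}: {decibels(p):+.3f} dB")
# 0.0001 is about -40 dB; 0.9999 about +40 dB; 0.99999 about +50 dB
```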

P.S. It'd be nice if you provided an RSS feed for the comments on a post, in addition to the RSS feed for the posts...

Comment author: Eliezer_Yudkowsky 30 May 2008 10:26:44PM 0 points [-]

I cannot begin to imagine where those numbers came from. Dangers of "Posted at 1:58 am", I guess. Fixed.

Comment author: CuriousAlbert 14 September 2009 08:48:02PM 0 points [-]

Could you respond to Neel Krishnaswami's post above, and this one as well?

Comment author: thomblake 04 May 2010 07:11:21PM 1 point [-]

P(A&B)+P(A&~B)+P(~A&B)+P(~A&~B)=1

Isn't the "1" above a probability?

Comment author: TobyBartels 28 July 2010 04:13:51AM 3 points [-]

My intuition as a mathematician declares that nobody will ever develop an elegant mathematical formulation of probability theory that does not allow for statements that are logically impossible or certain, such as statements of the form p AND NOT p. And it is necessary, if the theory is to be isomorphic to the usual one, that these statements have probability 0 (if impossible) or 1 (if certain). However, I believe that it is quite reasonable to declare, as a condition demanded of any prior deemed rational, that only truly impossible or certain statements have those probabilities. I think that this gives you what you want.

It's obvious that you can make this very demand when working with discrete probability distributions. It may not be obvious that you can make this demand when working with continuous probability distributions. Certainly the usual theory of these, based on so-called ‘measure spaces’ and ‘σ-algebras’ (I mention those in case they jog the reader's memory), cannot tolerate this requirement, at least not if anything at all similar to the usual examples of continuous distributions are allowed.

One answer is that only discrete probability distributions apply to the real world, in which one can never make measurements with infinite precision or observe an infinite sequence of events. Even if the world has infinite size or is continuous to infinitesimal scales, you will never observe that, so you don't need to predict anything about that.

However, even if you don't buy this argument, never fear! There is a mathematical theory of probability based on ‘pointless measure spaces’ and ‘abstract σ-algebras’. In this theory, it again makes perfect sense to demand that any prior must assign probability 0 or 1 only to impossible or certain events. The idea is that if something can never be observed, even in principle, then it is effectively impossible, and the abstract pointless theory allows one to treat it as such.

Then I agree that one should require, as a condition on considering a prior to be rational, that it should assign probability 0 only to these impossible events and assign probability 1 only to their certain complements.

Comment author: TobyBartels 28 July 2010 04:47:40AM *  0 points [-]

PS: cumulant-nimbus above gives a brief summary of the usual approach to measure theory. The pointless approach that I advocate can be suggested from that as follows: taboo \Omega. Neel Krishnaswami's comment is implicitly using the pointless approach; his event space is cumulant-nimbus's \mathcal{F}, and he works entirely in terms of events.

Comment author: timtyler 29 August 2010 07:40:38PM *  2 points [-]

As Perplexed points out, this is usually known as Cromwell's rule.

Comment author: MathijsJ 02 December 2010 02:24:59AM 3 points [-]

I'm kinda surprised that it's only been mentioned once in the comments (I only just discovered this site, really really great, by the way) and one from 2010 at that, but it seems to me that "a magical symbol to stand for "all possibilities I haven't considered" " does exist: the symbol "~" (i.e. not). Even the commenter who does mention it makes things complicated for himself: P(Q or ~Q)=1 is the simplest example of a proposition with probability 1.

The proposition is of course a tautology. I do think (but I'm not sure) that that is the only sort of statement that receives probability 1. This is in sync with Eliezer's "amount of evidence" interpretation. A bayesian update can only generate 1 if the initial proposition was of probability 1 or if the evidence was tautological (i.e. if Q then Q or, slightly less lame, if "Q or R" and "~R" then Q, where "Q or R" and "~R" are the evidence).

Skimming the comments, I saw two other proposals for "sure bets", the runner who clocked a negative time and the golf ball landing in a particular spot. That last one degenerated pretty quickly into a discussion about how many points there are in a field and on a ball. I think that's typical of such arguments: it depends on your model. Once you have your model specified the probability becomes 1 (or not) if the statement is (or isn't) tautological in the model. If the model isn't specified, then neither is the statement (what is a precise point?) and hence the probability. Ask the next man what the probability is of a runner clocking a negative time and he'll rightly respond: "Huh?" (unless he is a particularly obfuscatory know-it-all, in which case he might start blabbering about the speed of light. But then too, he makes a claim because he can ascribe meaning to the question, that is, he picks his model). So these are also tautological examples.

I think Eliezer's claims hold up pretty well for propositions that aren't tautological and hence are empirical in nature: they require evidence, and only tautological evidence will suffice for certainty.

About the problem of inserting 0's in certain standard theorems: I don't see a problem with Bayes' theorem (I'm curious about other examples). Dividing by 0 is not defined, so the probability of it raining when hell freezes over is not defined. That seems like a satisfactory arrangement.

Comment author: player_03 07 July 2011 07:52:50AM *  0 points [-]

Thanks for the analysis, MathijsJ! It made perfect sense and resolved most of my objections to the article.

I was willing to accept that we cannot reach absolute certainty by accumulating evidence, but I also came up with multiple logical statements that undeniably seemed to have probability 1. Reading your post, I realized that my examples were all tautologies, and that your suggestion to allow certainty only for tautologies resolved the discrepancy.

The Wikipedia article timtyler linked to seems to support this: "Cromwell's rule [...] states that one should avoid using prior probabilities of 0 or 1, except when applied to statements that are logically true or false." This matches your analysis - you can only be certain of tautologies.

Also, your discussion of models neatly resolves the distinction between, say, a mathematically-defined die (which can be certain to end up showing an integer between 1 and 6) and a real-world die (which cannot quite be known for sure to have exactly six stable states).


Eliezer makes his position pretty clear: "So I propose that it makes sense to say that 1 and 0 are not in the probabilities; just as negative and positive infinity, which do not obey the field axioms, are not in the real numbers."

It's true - you cannot ever reach a probability of 1 if you start at 0.5 and accumulate evidence, just as you cannot reach infinity if you start at 0 and add integer values. And the inverse is true, too - you cannot accumulate evidence against a tautology and bring its probability down to anything less than 1. But this doesn't mean a probability of 1 is an incoherent concept or anything.

Eliezer: if you're going to say that 0 and 1 are not probabilities, you need to come up with a new term for them. They haven't gone away completely just because we can't reach them.

Edit a year and a half later: I agree with the article as written, partially as a result of reading How to Convince Me That 2 + 2 = 3, and partially as a result of concluding that "tautologies that have probability 1 but no bearing on reality" is a useless concept, and that therefore, "probability 1" is a useless concept.

Comment author: fubarobfusco 29 July 2011 09:28:29AM -1 points [-]

Jaynes avoids P(A|B) for "probability of A given evidence B" and P(B) for "probability of B", preferring P(A|BX) and P(B|X) where X is one's background knowledge. This and the above lead naturally to the question of ~X: the situation in which one's "background knowledge" is false.

Assume that background knowledge X is the conjunction of a finite number of propositions. ~X is true if any of these propositions is false. If we can factor X into YZ where Y is the portion we suspect of being false — that is, if we can isolate for testing a portion of those beliefs we previously treated as "background knowledge" — then we can ask about P(A|BYZ) and P(A|B·~Y·Z).

Comment author: ksvanhorn 21 January 2011 04:39:28AM *  3 points [-]

For any state of information X, we have P(A or not A | X) = 1 and P(A and not A | X) = 0. We have to have 0 and 1 as probabilities for probability theory even to work. I think you're taking a reasonable idea -- that P(A | X) should be neither 0 nor 1 when A is a statement about the concrete physical world -- and trying to apply it beyond its applicable domain.

Comment author: AnthonyC 29 March 2011 06:22:18PM 0 points [-]

Consider the set of all possible hypotheses. This is a countable set, assuming I express hypotheses in natural language. It is potentially infinite as well, though in practice a finite mind cannot accommodate infinitely long hypotheses. To each hypothesis, I can try to assign a probability, on the basis of available evidence. These probabilities will be between zero and one. What is the probability that a rational mind will assign at least one hypothesis the status of absolute certainty? Either this is one (there is definitely such a hypothesis), or zero (there is definitely not such a hypothesis, which cannot be, because the hypothesis "there is definitely not such a hypothesis" is then a counterexample), or somewhere in between (there may be, somewhere, a hypothesis that a rational mind would regard as being absolutely certain). So I cannot accept your hypothesis that there does not exist, anywhere, ever, a hypothesis that I should regard as being absolutely certain.

Comment author: jimrandomh 29 March 2011 07:01:57PM 1 point [-]

Self-referential hypotheses do not always map to truth values, and "a rational mind will assign at least one hypothesis the status of absolute certainty" is self-referential. The contradiction you've encountered arises from using a statement isomorphic to "this statement is false" and requiring it to have a truth value, not to a problem with excluding 0 and 1 as probabilities.

Comment author: BlindDancer 03 April 2011 02:08:26PM 0 points [-]

Yes, 0 and 1 are not probabilities. They're truth or falseness values. It's necessary to make a third 'truth value' for things that are unprovable, and possibly a fourth for things that are untestable.

Comment author: Sarokrae 18 October 2011 11:50:36AM 2 points [-]

Digging up an old thread here, but an interesting point I want to bring up: a friend of mine claims that he internally assigns probability 1 (i.e. an undisprovable belief) only to one statement: that the universe is coherent. Because if not, then mnergarblewtf. Is it reasonable to say that even though no statement can actually have probability 1 if you're a true Bayesian, it's reasonable to internally establish an axiom which, if negated, would just make the universe completely stupid and not worth living in any more?

Comment author: grouchymusicologist 18 October 2011 01:35:36PM 0 points [-]

No, it's not. It's the same fundamental mistake that a lot of religious rhetoric about "faith" and "meaning" is founded on: that wanting something to be true counts as evidence that it is true. There's no reason to think that the universe depends for any of its properties on whether someone finds it stupid or not, or worth living in.

I'd also suggest you try to draw your friend out a bit on what it means exactly for the universe to be "coherent." Can that notion be expressed formally? What would we expect to see if we lived in an incoherent universe?

Obviously, I'm dubious that the "coherence" of the universe is in any proper sense a philosophical or scientific idea -- it sounds a lot more like an aesthetic one.

Comment author: Sarokrae 18 October 2011 03:30:34PM *  0 points [-]

I think he just means "coherent" as "one which we can actually model based on our observations", i.e. one in which this whole exercise (rationality) makes any sense.

He expects the universe to be incoherent with probability zero, and doesn't think there would be any sensible observations if this were the case (or indeed that any observation would be possible if this were the case).

ETA: Merriam-Webster Definition of COHERENT

1 a : logically or aesthetically ordered or integrated : consistent <coherent style> <a coherent argument> b : having clarity or intelligibility : understandable <a coherent person> <a coherent passage>

So, understandable and consistent: a universe which philosophy, mathematics and science can apply to in any meaningful way.

Comment author: Alejandro1 18 October 2011 03:41:19PM 0 points [-]

A charitable paraphrase of "The universe is coherent" could be a statement of the universal validity of non-contradiction: For every p, not (p and not p). However, given the existence of paraconsistent logic and philosophers who take dialetheism seriously, I cannot assign probability 1 to the claim that no aspect of the universe requires a contradiction in its description.

I would go even further and say that I am considerably more certain of many other claims (such as "1+1=2" and "2+2=4") than of such general and abstract propositions as "the universe is coherent" or even "there are no true contradictions".

Comment author: Sarokrae 18 October 2011 03:51:08PM *  0 points [-]

I don't think he goes quite that far - he assigns no statements probability 0 or 1 within our own logic system, even (P and ¬P), because he believes it to be possible (though not very likely) that some other logic system might supersede our own.

His belief is that it is not possible for ALL systems of logic to be incorrect, i.e. that (it is impossible to reason correctly about the universe) is necessarily false.

Comment author: nshepperd 18 October 2011 04:52:06PM 1 point [-]

There's a lot of logic to that. For extremely unlikely possibilities you can often get away with setting their probability to 0 to make the calculations a lot simpler. For possibilities where predicted utility is independent of your actions (like "reality is just completely random") it can also be worthwhile setting their probability to 0 (ie. ignoring them), since they're approximately a constant term in expected utility. These are good ways of approximating actual expected utility so you can still mostly make the right decisions, which bounded rationality requires.

Comment author: RichardKennaway 18 October 2011 06:19:54PM 0 points [-]

What is P(A|A)?

Comment author: Sarokrae 18 October 2011 10:02:23PM 0 points [-]

What do you mean by "|A"? It's well-defined in mathematics, sure, but in real life, surely the furthest you can go is "|experience/perception of evidence for A".

There's also the probability that the particular version of logic you're using is wrong.

Comment author: [deleted] 18 October 2011 10:40:59PM 0 points [-]

What do you mean by "|A"? It's well-defined in mathematics, sure, but in real life, surely the furthest you can go is "|experience/perception of evidence for A".

How far you can go depends on what you mean by "go".

It's perfectly possible to calculate, say, P(I see the coin come up heads | the coin is flipped once, it is fair, and I see the outcome), and actually much more difficult to calculate P(I see the coin come up heads | I have experience/perception of evidence for the facts that the coin is flipped once, it is fair, and I see the outcome).

Comment author: Sarokrae 19 October 2011 07:36:30AM *  0 points [-]

"I see" is what I meant by perception/experience of evidence. Whenever I "see" something, there's always a non-zero chance of my brain deceiving me. The only thing you can really have to base your decisions on is P(I see the coin come up heads | I see/know the coin is flipped once, I know it is fair, and I see the outcome). P(the coin comes up heads|the coin is flipped once, it is fair and I know the outcome) is possible and easy to calculate, but not completely accurate to the world we live in.

Comment author: RichardKennaway 19 October 2011 08:37:19AM *  2 points [-]

The ("Bayesian") framework explored in these essays replaces the two Cartesian options, affirmation and denial, by a continuum of judgmental probabilities in the interval from 0 to 1, endpoints included, or -- what comes to the same thing -- a continuum of judgmental odds in the interval from 0 to infinity, endpoints included. Zero and 1 are probabilities no less than 1/2 and 99/100 are. Probability 1 corresponds to infinite odds, 1:0. That's a reason for thinking in terms of odds: to remember how momentous it may be to assign probability 1 to a hypothesis.

Richard Jeffrey, "Probability and the art of judgement".

I leave it as an exercise to correctly state the relationships between Eliezer's article, the Jeffrey quote, and the value of P(A|A).

(Note: Jeffrey is not to be confused with Jeffreys, although both were Bayesian probability theorists.)

Comment author: timtyler 22 November 2011 09:32:08PM *  1 point [-]

Interesting Log-Odds paper by Brian Lee and Jacob Sanders, November 2011.

Comment author: zslastman 29 June 2012 06:13:13PM 2 points [-]

"When you work in log odds, the distance between any two degrees of uncertainty equals the amount of evidence you would need to go from one to the other. That is, the log odds gives us a natural measure of spacing among degrees of confidence."

That observation is so useful and intuition-friendly that it probably deserves its own blog post, and a prominent place in your book.
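A minimal Python sketch of that spacing property (the helper name here is illustrative, not from the thread): a fixed likelihood ratio moves you the same number of decibels along the log-odds scale no matter where you start.

```python
import math

def log_odds_db(p):
    """Probability -> log-odds in decibels: 10 * log10(p / (1 - p))."""
    return 10 * math.log10(p / (1 - p))

# Going from 50% to 99% confidence takes about 20 dB of evidence.
evidence_needed = log_odds_db(0.99) - log_odds_db(0.5)
print(round(evidence_needed, 1))  # → 20.0

# A 10:1 likelihood ratio is worth exactly 10 dB from any starting point.
for p in (0.5, 0.9, 0.01):
    prior_odds = p / (1 - p)
    posterior = (prior_odds * 10) / (1 + prior_odds * 10)  # Bayes update, LR = 10
    print(round(log_odds_db(posterior) - log_odds_db(p), 6))  # → 10.0 each time
```

The additivity is what makes the scale a "natural measure of spacing": evidence contributions simply sum.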

Comment author: [deleted] 04 January 2013 08:32:14AM 0 points [-]

Forgive me if this sounds condescending, but isn't saying "0 and 1 are not probabilities because they won't let you update your knowledge" basically the same as saying "you can't know something because knowing makes you unable to learn"? If we assign tautologies as having probability 1, then anything reducible to a tautology should have probability 1 (and similarly, all contradictions and things reducible to contradictions should have probability 0). For any arbitrarily large N, if you put 2 apples next to 2 apples and repeat the test N times, you'll get 4 apples N out of N times, no less (discounting molecular breakdowns in the apples or other possible interferences).

Comment author: Qiaochu_Yuan 04 January 2013 10:09:44AM -1 points [-]

You shouldn't assign tautologies probability 1 either because your notion of what a tautology is might be a hallucination.

Comment author: RichardKennaway 04 January 2013 11:02:07AM *  2 points [-]

This confuses object level and meta level. In probability theory, P(-A|A) = 0 and P(A|A) = 1, however uncertain you may be about Cox's theorem, or about whether you are actually thinking about the same A each time it appears in those formulas. No-one, as far as I know, has ever constructed a theory of probability in which these are assigned anything else but 0 and 1. That is not to say that it cannot be done, only that it has not been done. Until that is done, 0 and 1 are probabilities.

The title of the article is a rhetorical flourish to convey the idea elaborated in its body, that to assert a probability, as a measure of belief, of 0 or 1 is to assert that no possible evidence could update that belief, that 0 and 1 are probabilities that you should not find yourself assigning to matters about which there could be any real dispute, and to suggest odds ratios or their logarithms as a better concept when dealing with practical matters associated with very low or very high probabilities. There is a very large difference between saying that the probability of winning a lottery is tiny and saying that it cannot happen at all; with enough participants it is almost certain to happen to someone. That difference is made clear by the log-odds scale, which puts the chance of a lottery ticket at 60 or more decibels below zero, not infinitely far below. In a world with 7 billion people, billion-to-1 chances happen every day.

As an example of even tinier probabilities which are still detectably different from zero, consider a typical computer. A billion transistors in its CPU, clocked a billion times a second, running for a conveniently round length of time, a million seconds, which is about 12 days. Computers these days can easily do that without a single hardware error, which means that for every one of a million billion billion switching events, a transistor opened or closed exactly as designed. A million billion billion is about 1.7 times Avogadro's number. The corresponding log-odds is -240 decibels. And yet hardware glitches can still happen.

And P(A|A) is still 1, not any finite number of decibels.
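The lottery arithmetic in that comment checks out numerically; here is a quick Python sketch, with the numbers chosen to match it:

```python
import math

# A "billion-to-1" daily event, across 7 billion people: near-certain for someone.
p, n = 1e-9, 7_000_000_000
p_someone = 1 - (1 - p) ** n  # ≈ 1 - e^-7 ≈ 0.9991
print(p_someone)

# The individual chance sits about 90 dB below zero on the log-odds scale:
# tiny, but still a finite distance from certainty of not-happening.
print(10 * math.log10(p / (1 - p)))  # ≈ -90 dB
```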

Comment author: sjmp 23 April 2013 12:34:07PM -3 points [-]

So you are saying that the statement "0 and 1 are not probabilities" has a probability of 1?

Comment author: Username 20 June 2014 01:00:43AM *  2 points [-]

O = (P / (1 - P))

probabilities and odds are isomorphic

This is undefined for P = 1. If you claim that this function is a real-valued bijection between probabilities and odds, then P = 1 doesn't work, so you're begging the question. Always take care not to divide by zero.

Whether or not real-world events can have a probability of 0 or 1 is a different question than "are 0 and 1 probabilities?". They most certainly are.
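A short Python illustration of the point (the function name is mine): the probability-to-odds map is well-behaved everywhere except at P = 1, where the odds are 1:0, i.e. infinite.

```python
def odds(p):
    """Convert a probability to odds: O = p / (1 - p)."""
    return p / (1 - p)

print(odds(0.5))   # → 1.0 (even odds, 1:1)
print(odds(0.99))  # ≈ 99 (99:1)

try:
    odds(1.0)      # 1 / 0: the map to *finite* odds breaks down here
except ZeroDivisionError:
    print("P = 1 corresponds to infinite odds (1:0), not a finite real number")
```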