
Lumifer comments on 0 And 1 Are Not Probabilities - Less Wrong

Post author: Eliezer_Yudkowsky 10 January 2008 06:58AM (34 points)


Comment author: Lumifer 20 August 2015 02:44:08PM 0 points

Eliezer isn't arguing with the mathematics of probability theory. He is saying that in the subjective sense, people don't actually have absolute certainty.

Errr... as I read EY's post, he is certainly talking about the mathematics of probability (or about the formal framework in which we operate on probabilities) and not about some "subjective sense".

The claim of "people don't actually have absolute certainty" looks iffy to me, anyway. The immediate two questions that come to mind are (1) How do you know? and (2) Not even a single human being?

Comment author: Bound_up 20 August 2015 03:23:35PM 1 point

I think he's just acknowledging the minute(?) possibility that our apparently flawless reasoning could have a blind spot. We could be in a Matrix, or have something tampering with our minds, et cetera, such that the implied assertion

If this appears absolutely certain to me, then it must be true

is indefensible.

Comment author: Lumifer 20 August 2015 03:43:59PM 0 points

There are two different things.

David_Bolin said (emphasis mine): "He is saying that in the subjective sense, people don't actually have absolute certainty." I am interpreting this as "people never subjectively feel they have absolute certainty about something," which I don't think is true.

You are saying that from an external ("objective") point of view, people cannot (or should not) be absolutely sure that their beliefs/conclusions/maps are true. This I easily agree with.

Comment author: David_Bolin 20 August 2015 07:08:55PM 0 points

It should probably be defined by calibration: do some people have a type of belief where they are always right?
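One way to make the calibration test concrete (a toy sketch of my own, with invented data, not anything from the thread):

```python
# Toy calibration check: among beliefs a person asserted with
# confidence 1.0, what fraction actually turned out true?
# The (confidence, outcome) pairs below are hypothetical.
beliefs = [(1.0, True), (0.9, False), (1.0, True), (1.0, False)]

certain = [outcome for confidence, outcome in beliefs if confidence == 1.0]
hit_rate = sum(certain) / len(certain)
print(hit_rate)  # 0.666... -- a single miss already falsifies "always right"
```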

Comment author: Lumifer 20 August 2015 07:36:53PM -1 points

Self-referential and anthropic things would probably qualify, e.g. "I believe I exist".

Comment author: StellaAthena 20 August 2015 08:33:17PM -1 points

You can phrase statements of logical deduction such that they have no premises, only conclusions. If we let S be the set of logical principles under which our logical system operates and T be some sentence that entails Y, then "S AND T implies Y" is something that I have absolute certainty in, even if this world is an illusion, because the premise of the implication contains all the rules necessary to derive the result.

A less formal example of this would be the sentence: "If the rules of logic as I know them hold and the axioms of mathematics are true, then it is the case that 2+2=4."
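A machine-checked version of the same point, as a minimal sketch in Lean (my illustration, not StellaAthena's): the theorem below has no free premises, because everything needed to derive the conclusion is folded into the antecedent.

```lean
-- A closed implication: all premises sit inside the antecedent, so the
-- statement as a whole is a theorem of the logic itself.
theorem folded_premises (P Q : Prop) : P ∧ (P → Q) → Q :=
  fun h => h.2 h.1
```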

Comment author: Gram_Stone 20 August 2015 03:50:44PM 0 points

The claim of "people don't actually have absolute certainty" looks iffy to me, anyway. The immediate two questions that come to mind are (1) How do you know? and (2) Not even a single human being?

The way I view that statement is: "In our formalization, agents with absolutely certain beliefs cannot change those beliefs; we want our formalization to capture our intuitive sense of how an ideal agent would update its beliefs; a formalization with a quality of fanaticism does not capture our intuitive sense of how an ideal agent would update its beliefs; therefore we do not want a quality of fanaticism."
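To make "cannot change those beliefs" concrete, here is a minimal Bayes-rule sketch (illustrative Python with invented numbers): a prior of exactly 1 ignores all evidence, while a prior of 0.999 updates freely.

```python
def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Posterior P(H | E) by Bayes' rule."""
    numerator = p_evidence_given_h * prior
    return numerator / (numerator + p_evidence_given_not_h * (1 - prior))

# Evidence that is 999-to-1 against the hypothesis:
print(bayes_update(0.999, 0.001, 0.999))  # 0.5 -- near-certainty updates
print(bayes_update(1.0, 0.001, 0.999))    # 1.0 -- certainty never moves
```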

And what state of the world would correspond to the statement "Some people have absolute certainty"? Do you think that we can take some highly advanced and entirely fictional neuroimaging technology, look at a brain, and meaningfully say, "There's a belief with probability 1"?

And on the other hand, I'm not afraid to talk about folk certainty, where the properties of an ideal mathematical system are less relevant, where everyone can remain blissfully logically uncertain about the fact that beliefs with probability 1 and 0 imply undesirable consequences in formal systems that possess them, and say things like "I believe that absolutely." I am not afraid to say something like, "That person will not stop believing that for as long as he lives," and mean that I predict with high confidence that that person will not stop believing that for as long as he lives.

And once you believe that the formalization is trying to capture our intuitive sense of an ideal agent, and decide whether or not that quality of fanaticism captures it, and decide whether or not you're going to be a stickler about folk language, then I don't think that any question or confusion around that claim remains.

Comment author: Lumifer 20 August 2015 03:57:58PM 1 point

People are not "ideal agents". If you specifically construct your formalization to fit your ideas of what an ideal agent should and should not be able to do, this formalization will be a poor fit to actual, live human beings.

So either you make a system for ideal agents -- in which case you'll still run into some problems because, as has been pointed out upthread, standard probability math stops working if you disallow zeros and ones -- or you make a system which is applicable to our imperfect world with imperfect humans.

Comment author: Gram_Stone 20 August 2015 09:59:02PM 1 point

I don't see why both aren't useful. If you want a descriptive model instead of a normative one, try prospect theory.

I just don't see this article as an axiom that says probabilities of 0 and 1 aren't allowed in probability theory. I see it as a warning not to put 0s and 1s in your AI's prior. You're not changing the math so much as picking good priors.

Comment author: Wes_W 20 August 2015 05:02:33PM 0 points

If we're asking what the author "really meant" rather than just what would be correct, it's on record.

The argument for why zero and one are not probabilities is not, "All objects which are special cases should be cast out of mathematics, so get rid of the real zero because it requires a special case in the field axioms", it is, "ceteris paribus, can we do this without the special case?" and a bit of further intuition about how 0 and 1 are the equivalents of infinite probabilities, where doing our calculations without infinities when possible is ceteris paribus regarded as a good idea by certain sorts of mathematicians. E.T. Jaynes in "Probability Theory: The Logic of Science" shows how many probability-theoretic errors are committed by people who assume limits directly into their calculations, without first showing the finite calculation and then finally taking its limit. It is not unreasonable to wonder when we might get into trouble by using infinite odds ratios. Furthermore, real human beings do seem to often do very badly on account of claiming to be infinitely certain of things so it may be pragmatically important to be wary of them.

I... can't really recommend reading the entire thread at the link, it's kind of flame-war-y and not very illuminating.
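For what it's worth, the "equivalents of infinite probabilities" phrasing in the quote corresponds to the log-odds transform: as p approaches 0 or 1, log(p / (1 - p)) diverges. A small sketch (my own numbers, not from the thread):

```python
import math

def log_odds(p):
    """log(p / (1 - p)); diverges as p -> 0 or p -> 1."""
    return math.log(p / (1 - p))

for p in (0.5, 0.9, 0.999, 0.999999):
    print(p, round(log_odds(p), 2))  # 0.0, 2.2, 6.91, 13.82 -- unbounded
# log_odds(1.0) raises ZeroDivisionError: probability 1 would be
# infinite evidence, which no finite amount of updating can reach.
```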

Comment author: EHeller 20 August 2015 05:14:30PM 3 points

I think the issue at hand is that 0 and 1 aren't special cases at all, but are very important for the math of probability theory to work (try to construct a probability measure where some subset doesn't have probability 1 or 0).

This is strictly necessary for the mathematical idea of probability, and EY seems to be confusing "are 0 and 1 probabilities relevant to Bayesian agents?" with "are 0 and 1 probabilities?" (yes, they are, unavoidably, and not as a special case).
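For reference, in the standard Kolmogorov formalization both values are forced by the axioms rather than permitted as edge cases:

$$P(\Omega) = 1, \qquad P(\varnothing) = 0, \qquad P(A^{c}) = 1 - P(A).$$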

Comment author: Lumifer 20 August 2015 05:18:06PM 0 points

It seems that EY's position boils down to:

Pragmatically speaking, the real question for people who are not AI programmers is whether it makes sense for human beings to go around declaring that they are infinitely certain of things. I think the answer is that it is far mentally healthier to go around thinking of things as having 'tiny probabilities much larger than one over googolplex' than to think of them being 'impossible'.

And that's a weak claim. EY's ideas of what is "mentally healthier" are, basically, his personal preferences. I, for example, don't find any mental health benefits in thinking about one over googolplex probabilities.

Comment author: Wes_W 20 August 2015 05:27:16PM 0 points

Cromwell's Rule is not EY's invention, and it is relatively uncontroversial for empirical propositions (as opposed to tautologies or the like).

If you don't accept treating probabilities as beliefs and vice versa, then this whole conversation is just a really long and unnecessarily circuitous way to say "remember that you can be wrong about stuff".

Comment author: EHeller 20 August 2015 05:44:34PM 2 points

The part that is new compared to Cromwell's Rule is that Yudkowsky doesn't want to give probability 1 to logical statements (e.g., "53 is a prime number").

Because he doesn't want to treat 1 as a probability, you can't expect complete sets of events to have total probability 1, despite that being a tautology. Because he doesn't want probability 0, how do you handle the empty set? How do you assign probabilities to statements like "A and B" where A and B are logically exclusive (e.g., "the coin lands heads AND the coin lands tails")?
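In symbols, the tautologies at stake (standard probability theory, nothing exotic):

$$P(\text{heads}) + P(\text{tails}) = 1, \qquad P(\text{heads} \cap \text{tails}) = 0.$$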

Removing 0 and 1 from the math of probability breaks most of the standard manipulations. Again, it's best to just say "be careful with 0 and 1 when working with odds ratios."

Comment author: Lumifer 20 August 2015 05:48:30PM 1 point

Nobody is saying EY invented Cromwell's Rule; that's not the issue.

The issue is that "0 and 1 are not useful subjective certainties for a Bayesian agent" is a very different statement than "0 and 1 are not probabilities at all".

Comment author: Wes_W 20 August 2015 06:05:37PM 0 points

You're right, I misread your sentence about "his personal preferences" as referring to the whole claim, rather than specifically the part about what's "mentally healthy". I don't think we disagree on the object level here.

Comment author: David_Bolin 20 August 2015 06:50:57PM 0 points

Of course, if no one has absolute certainty, this very fact would be one of the things we don't have absolute certainty about. This is entirely consistent.