
Isaac comments on Confidence levels inside and outside an argument - Less Wrong

129 points · Post author: Yvain · 16 December 2010 03:06AM




Comment author: Isaac 16 December 2010 03:49:19PM 9 points [-]

Surely declaring "x is impossible", before witnessing x, would be the most wrong you could be?

Comment author: katydee 16 December 2010 05:33:20PM *  25 points [-]

I take more issue with the people who incredulously shout "That's impossible!" after witnessing x.

Comment author: Nebu 30 March 2012 08:00:40PM 5 points [-]

I don't. You can witness a magician, e.g., violating conservation of matter, and still declare "that's impossible!"

Basically, you're stating that you don't believe that the signals your senses reported to you are accurate.

Comment author: benelliott 16 December 2010 05:54:19PM 9 points [-]

The colloquial meaning of "x is impossible" is probably closer to "x has probability <0.1%" than "x has probability 0"

Comment author: CynicalOptimist 18 November 2016 12:15:12AM 0 points [-]

This is good, but I feel like we'd better represent human psychology if we said:

Most people don't make a distinction between the concepts of "x has probability <0.1%" and "x is impossible".

I say this because I think there's an important difference between the times when people have a precise meaning in mind, which they've expressed poorly, and the times when people's actual concepts are vague and fuzzy. (Often, people don't realise how fuzzy their concepts are).

Comment author: Thomas 16 December 2010 06:45:46PM 0 points [-]

Probability zero and impossibility are not exactly the same thing. A possible event can have probability 0, but an impossible event must have probability 0.

Comment author: benelliott 16 December 2010 06:51:58PM *  6 points [-]

You are referring to the mathematical definition of impossibility, and I am well aware of the fact that it is different from probability zero (flipping a coin forever without getting tails has probability zero but is not mathematically impossible). My point is that neither of those is actually what most people (as opposed to mathematicians and philosophers) mean by impossible.
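The coin example is easy to make concrete. A minimal sketch (the function name is assumed, not from the thread): the probability of seeing no tails in the first n flips of a fair coin is 0.5^n, which shrinks toward 0 as n grows, yet no finite prefix ever rules the all-heads sequence out.

```python
# P(no tails in the first n flips of a fair coin) is 0.5**n.
# It tends to 0 as n grows, but the infinite all-heads sequence
# is never logically excluded by any finite observation.
def p_no_tails(n):
    return 0.5 ** n

for n in (1, 10, 100):
    print(n, p_no_tails(n))
```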

Comment author: Eliezer_Yudkowsky 16 December 2010 05:13:40PM 9 points [-]

Probabilities of 1 and 0 are considered rule violations and discarded.

Comment author: kmccarty 17 December 2010 05:23:30AM 4 points [-]

Probabilities of 1 and 0 are considered rule violations and discarded.

What should we take for P(X|X) then?

And then what can I put you down for the probability that Bayes' Theorem is actually false? (I mean the theorem itself, not any particular deployment of it in an argument.)

Comment author: ata 17 December 2010 05:36:10AM *  12 points [-]

What should we take for P(X|X) then?

He's addressed that:

The one that I confess is giving me the most trouble is P(A|A). But I would prefer to call that a syntactic elimination rule for probabilistic reasoning, or perhaps a set equality between events, rather than claiming that there's some specific proposition that has "Probability 1".

and then

Huh, I must be slowed down because it's late at night... P(A|A) is the simplest case of all. P(x|y) is defined as P(x,y)/P(y). P(A|A) is defined as P(A,A)/P(A) = P(A)/P(A) = 1. The ratio of these two probabilities may be 1, but I deny that there's any actual probability that's equal to 1. P(|) is a mere notational convenience, nothing more. Just because we conventionally write this ratio using a "P" symbol doesn't make it a probability.
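The point that P(A|A) is a ratio rather than a primitive probability can be illustrated with a toy finite sample space (the die and the event here are assumed for illustration, not taken from the thread):

```python
from fractions import Fraction

# Toy sample space: a fair six-sided die, with P defined as a ratio of counts.
omega = frozenset(range(1, 7))

def p(event):
    return Fraction(len(event & omega), len(omega))

A = frozenset({2, 4, 6})  # "the roll is even"

# P(A|A) is defined as P(A, A) / P(A). The intersection of A with itself
# is just A, so the ratio is 1 for any event A with P(A) > 0 -- the "1"
# is a property of the ratio, not a probability assigned to some proposition.
print(p(A & A) / p(A))
```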

Comment author: kmccarty 17 December 2010 05:45:26PM 4 points [-]

Ah, thanks for the pointer. Someone's tried to answer the question about the reliability of Bayes' Theorem itself too, I see. But I'm afraid I'm going to have to pass on this, because I don't see how calling something a syntactic elimination rule instead of a law of logic saves you from incoherence.

Comment author: XiXiDu 18 December 2010 05:02:46PM 0 points [-]

I'd be interested to hear why you believe EY is incoherent; I thought that what he said makes sense. Is the probability of a tautology being true 1? You might think that it is true by definition, but what if the concept is not even wrong? Can you absolutely rule out that possibility? Your sense of truth by definition might be mistaken in the same way as a déjà vu experience: the experience is real, but you're mistaken about its subject matter. In other words, you might be mistaken about your internal coherence and therefore assign a probability to something that was never there in the first place. This might be on-topic:

One can certainly imagine an omnipotent being provided that there is enough vagueness in the concept of what “omnipotence” means; but if one tries to nail this concept down precisely, one gets hit by the omnipotence paradox.

Nothing has a probability of 1, including this sentence, as doubt always remains, or does it? It's confusing for sure, someone with enough intellectual horsepower should write a post on it.

Comment author: kmccarty 19 December 2010 08:02:27AM 3 points [-]

Did I accuse someone of being incoherent? I didn't mean to do that, I only meant to accuse myself of not being able to follow the distinction between a rule of logic (oh, take the Rule of Detachment for instance) and a syntactic elimination rule. In virtue of what do the latter escape the quantum of sceptical doubt that we should apply to other tautologies? I think there clearly is a distinction between believing a rule of logic is reliable for a particular domain, and knowing with the same confidence that a particular instance of its application has been correctly executed. But I can't tell from the discussion if that's what's at play here, or if it is, whether it's being deployed in a manner careful enough to avoid incoherence. I just can't tell yet. For instance,

Conditioning on this tiny credence would produce various null implications in my reasoning process, which end up being discarded as incoherent

I don't know what this amounts to without following a more detailed example.

It all seems to be somewhat vaguely along the lines of what Hartry Field says in his Locke lectures about rational revisability of the rules of logic and/or epistemic principles; his arguments are much more detailed, but I confess I have difficulty following him too.

Comment author: Document 12 January 2011 07:47:24PM 0 points [-]

Although I'm not sure exactly what to say about it, there's some kind of connection here to Created Already in Motion and The Bedrock of Fairness - in each case you have an infinite regress: asking for a logical axiom justifying the acceptance of a logical axiom justifying the acceptance of a logical axiom, asking for fair treatment of people's ideas of fair treatment of people's ideas of fair treatment, or asking for the probability that a probability of a ratio of probabilities being correct is correct.

Comment author: Thomas 16 December 2010 08:38:45PM 1 point [-]

Probabilities of 1 and 0 are considered rule violations and discarded.

Is the probability of the correctness of this statement smaller than 1?

Comment author: benelliott 16 December 2010 08:47:31PM *  5 points [-]

Obviously

Comment author: Thomas 16 December 2010 08:58:39PM 1 point [-]

So, you say, it's possible it isn't true?

Comment author: ata 16 December 2010 09:09:42PM *  6 points [-]

I would say that according to my model (i.e. inside the argument (in this post's terminology)), it's not possible that that isn't true, but that I assign greater than 0% credence to the outside-the-argument possibility that I'm wrong about what's possible.

(A few relevant posts: How to Convince Me That 2 + 2 = 3; But There's Still A Chance, Right?; The Fallacy of Gray)

Comment author: Thomas 16 December 2010 09:37:21PM 0 points [-]

How to Convince Me That 2 + 2 = 3

You can think, for a moment, that 1024*1024=1048578. You can make an honest arithmetic mistake. It's more probable for bigger numbers, less probable for smaller ones, and very, very small for 2 + 2 and the like. But I wouldn't say it's zero, nor that 0 is always excluded with probability 1.

Exclusion of 0 and 1 implies that this exclusion is not 100% certain. Kind of a probabilistic modus tollens.
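The arithmetic slip above is the kind of thing that is cheap to check mechanically rather than trust to memory:

```python
# 1024 * 1024 is 2**20 = 1048576, so recalling it as 1048578 is exactly
# the sort of honest off-by-a-little mistake described above.
print(1024 * 1024)              # the true product
print(1024 * 1024 == 1048578)   # the misremembered value does not match
```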

Comment author: byrnema 16 December 2010 09:13:57PM *  0 points [-]

it's not possible that that isn't true

What is it that is true? (Just to clarify..)

Comment author: Thomas 16 December 2010 09:19:48PM -1 points [-]

This:

Probabilities of 1 and 0 are considered rule violations and discarded.

Discarding 0 and 1 from the game implies that there is a positive probability that they are wrongly excluded.

Comment author: benelliott 16 December 2010 09:00:09PM *  6 points [-]

Indeed

I get quite annoyed when this is treated as a refutation of the argument that absolute truth doesn't exist. Acknowledging that there is some chance that a position is false does not disprove it, any more than the fact that you might win the lottery means that you will.

Comment author: christopherj 18 November 2013 05:00:02AM *  1 point [-]

Someone claiming that absolute truths don't exist has no right to be absolutely certain of his own claim. This of course has no bearing on the actual truth of his claim, nor the truth of the supposed absolute truth he's trying to refute by a fully generic argument against absolute truths.

I rather prefer Eliezer's version: that confidence of 2^n to 1 requires [n - log base 2 of prior odds] bits of evidence to be justified. Not only does this essentially forbid absolute certainty (you'd need infinite evidence to justify it), but it is actually useful in real life.
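That rule can be sketched numerically (the function name and the convention of writing odds as plain ratios are assumed here): to reach 2^n : 1 confidence you need n minus the log-2 of your prior odds in bits of evidence, and absolute certainty, i.e. infinite odds, would demand infinitely many bits.

```python
import math

def bits_needed(target_odds, prior_odds=1.0):
    # Bits of evidence required to move from prior_odds to target_odds,
    # both expressed as plain ratios (e.g. 1024.0 for 1024:1 odds).
    return math.log2(target_odds) - math.log2(prior_odds)

print(bits_needed(1024.0))         # 10 bits from an even (1:1) prior
print(bits_needed(1024.0, 4.0))    # 8 bits if the prior is already 4:1
print(bits_needed(float("inf")))   # infinite: absolute certainty is unreachable
```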