Patrick comments on Take heed, for it is a trap - Less Wrong

47 Post author: Zed 14 August 2011 10:23AM




Comment author: Patrick 14 August 2011 01:00:32PM 0 points [-]

I'm not sure why you'd assume that the MML of a random proposition is only one bit...

Comment author: Zed 14 August 2011 01:19:11PM *  4 points [-]

A complex proposition P (long MML) can have a complex negation (also with long MML), and you'd have no reason to assume you'd be presented with P instead of non-P. The positive proposition P is unlikely if its MML is long, but the proposition non-P, despite its long MML, is then likely to be true.

If you have no reason to believe you're more likely to be presented with P than with non-P, then my understanding is that they cancel each other out.

But now I'm not so sure anymore.

edit: I'm now pretty sure again my initial understanding was correct and that the counterarguments are merely cached thoughts.

Comment author: benelliott 14 August 2011 03:37:25PM 3 points [-]

I think often "complicated proposition" is used to mean "large conjunction" e.g. A&B&C&D&...

In this case its negation would be a large disjunction, and large disjunctions, while in a sense complex (it may take a lot of information to specify one), usually have prior probabilities close to 1, so in this case complicated statements definitely don't get probability 0.5 as a prior. "Christianity is completely correct" versus "Christianity is incorrect" is one example of this.

On the other hand, if by 'complicated proposition' you just mean something whose truth depends on lots of factors you don't understand well, and which is not itself necessarily a large conjunction, or in any way carrying burdensome details, then you may be right about probability 0.5. "Increasing government spending will help the economy" versus "increasing government spending will harm the economy" seems like an example of this.
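The conjunction/disjunction asymmetry is easy to check numerically. A minimal sketch, assuming n independent propositions each with prior 0.5 (the function names are mine, for illustration):

```python
def conjunction_prior(n, p=0.5):
    """P(A1 & ... & An) for n independent propositions, each with prior p."""
    return p ** n

def negation_prior(n, p=0.5):
    """P(~(A1 & ... & An)) = 1 - p**n: a large disjunction, heading toward 1."""
    return 1 - p ** n

for n in (1, 5, 20):
    # The conjunction shrinks geometrically; its negation approaches certainty.
    print(n, conjunction_prior(n), negation_prior(n))
```

So the "complex" disjunction, despite taking just as much information to specify, gets a prior near 1 rather than near 0.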

Comment author: Zed 14 August 2011 04:58:50PM *  2 points [-]

My claim is slightly stronger than that. My claim is that the correct prior probability of any arbitrary proposition of which we know nothing is 0.5. I'm not restricting my claim to propositions which we know are complex and depend on many factors which are difficult to gauge (as with your economy example).

Comment author: benelliott 14 August 2011 05:18:33PM 1 point [-]

I think I mostly agree. It just seemed like the discussion up to that point had mostly been about complex claims, and so I confined myself to them.

However, I think I cannot fully agree about any claim of which we know nothing. For instance, I might know nothing about A, nothing about B|A, and nothing about A&B, but for me to simultaneously hold P(A) = 0.5, P(B|A) = 0.5 and P(A&B) = 0.5 would be inconsistent.

Comment author: komponisto 15 August 2011 06:41:30AM 3 points [-]

I might know nothing about A, nothing about B|A, and nothing about A&B, but for me to simultaneously hold P(A) = 0.5, P(B|A) = 0.5 and P(A&B) = 0.5 would be inconsistent.

"B|A" is not a proposition like the others, despite appearing as an input in the P() notation. P(B|A) simply stands for P(A&B)/P(A). So you never "know nothing about B|A", and you can consistently hold that P(A) = 0.5 and P(A&B) = 0.5, with the consequence that P(B|A) = 1.

The notation P(B|A) is poor. A better notation would be P_A(B); it's a different function with the same input, not a different input into the same function.
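The consistency claim above reduces to one division. A minimal sketch of the assignment komponisto describes:

```python
# P(B|A) is not a primitive quantity: it is defined as P(A&B) / P(A).
p_A = 0.5        # ignorance prior on A
p_A_and_B = 0.5  # held simultaneously, as in the comment above

p_B_given_A = p_A_and_B / p_A
print(p_B_given_A)  # 1.0 -- consistent, just surprising
```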

Comment author: benelliott 15 August 2011 02:51:29PM 2 points [-]

Fair enough, although I think my point stands: it would be fairly silly if you could deduce P(A|B) = 1 simply from the fact that you know nothing about A and B.

Comment author: komponisto 15 August 2011 04:35:08PM *  3 points [-]

it would be fairly silly if you could deduce P(A|B) = 1 simply from the fact that you know nothing about A and B.

Well, you can't -- you would have to know nothing about B and A&B, a very peculiar situation indeed!

EDIT: This is logically delicate, but perhaps can be clarified via the following dialogue:

-- What is P(A)?

-- I don't know anything about A, so 0.5

-- What is P(B)?

-- Likewise, 0.5

-- What is P(C)?

-- 0.5 again.

-- Now compute P(C)/P(B)

-- 0.5/0.5 = 1

-- Ha! Gotcha! C is really A&B; you just said that P(A|B) is 1!

-- Oh; well in that case, P(C) isn't 0.5 any more: P(C|C=A&B) = 0.25.

As per my point above, we should think of Bayesian updating as the function P varying, rather than its input.
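The dialogue's final move can be written out as arithmetic. A sketch, assuming (as the dialogue does) that A and B are independent with prior 0.5 each:

```python
p_A, p_B = 0.5, 0.5

p_C_prior = 0.5          # before learning anything about C's form
p_C_updated = p_A * p_B  # after learning C = A & B (independence assumed)

print(p_C_prior, p_C_updated)  # 0.5 0.25
```

Learning the explicit form of C is itself evidence, so the prior moves from 0.5 to 0.25.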

Comment author: Tyrrell_McAllister 16 August 2011 07:24:49PM 0 points [-]

EDIT: This is logically delicate, but perhaps can be clarified via the following dialogue:

I believe that this dialogue is logically confused, as I argue in this comment.

Comment author: benelliott 15 August 2011 05:32:30PM 0 points [-]

This is the same confusion I was originally having with Zed. Both you and he appear to consider knowing the explicit form of a statement to be knowing something about the truth value of that statement, whereas I think you can know nothing about a statement even if you know what it is, so you can update on finding out that C is a conjunction.

Given that we aren't often asked to evaluate the truth of statements without knowing what they are, I think my sense is more useful.

Comment author: komponisto 15 August 2011 06:26:12PM 0 points [-]

I think you can know nothing about a statement even if you know what it is, so you can update on finding out that C is a conjunction.

Did you mean "can't"? Because "can" is my position (as illustrated in the dialogue!).

Given that we aren't often asked to evaluate the truth of statements without knowing what they are, I think my sense is more useful.

This exemplifies the point in my original comment:

Of course, we almost never reach this level of ignorance in practice, which makes this the type of abstract academic point that people all-too-characteristically have trouble with. The step of calculating the complexity of a hypothesis seems "automatic", so much so that it's easy to forget that there is a step there.

Comment author: Zed 14 August 2011 05:27:15PM *  0 points [-]

If you know nothing of A and B then P(A) = P(B) = 0.5, P(B|A) = P(A|B) = 0.5 and P(A & B) = P(A|B) * P(B) = 0.25

You do know something of the conjunction of A and B (because you presume they're independent) and that's how you get to 0.25.

I don't think there's an inconsistency here.

Comment author: benelliott 14 August 2011 07:14:29PM 0 points [-]

How do you know something about the conjunction? Have you manufactured evidence from a vacuum?

I don't think I am presuming them independent, I am merely stating that I have no information to favour a positive or negative correlation.

Look at it another way: suppose A and B are claims that I know nothing about. Then I also know nothing about A&B, A&(~B), (~A)&B and (~A)&(~B) (knowledge about any one of those would constitute knowledge about A and B). I do not think I can consistently hold that those four claims all have probability 0.5.
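The inconsistency here is just a failure of probabilities to sum to 1. A minimal check:

```python
# The four conjunctions partition the outcome space, so their
# probabilities must sum to exactly 1.  Assigning 0.5 to each cannot work.
claimed = {
    "A & B":   0.5,
    "A & ~B":  0.5,
    "~A & B":  0.5,
    "~A & ~B": 0.5,
}
total = sum(claimed.values())
print(total)  # 2.0, not 1.0
```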

Comment author: [deleted] 17 August 2011 06:35:15PM 2 points [-]

If you know nothing about A and B, then you know something about A&B. You know it is the conjunction of two things you know nothing about.

Comment author: Jack 17 August 2011 07:53:27PM 0 points [-]

Since A=B is a possibility, the use of "two things" here is a bit specious. You're basically saying you know it's A&B, but that could stand for anything at all.

Comment author: [deleted] 17 August 2011 11:41:17PM 0 points [-]

You know that either A and B are highly correlated (one way or the other) or P(A&B) is close to P(A) P(B).

Comment author: benelliott 17 August 2011 06:52:57PM 0 points [-]

Yeah, and I know that A is the disjunction of A&B and A&(~B), and that it is the negation of the negation of a proposition I know nothing about, and lots of other things. If we count reading a statement and analysing its logical consequences as knowledge, then we know infinitely many things about everything.

Comment author: Zed 14 August 2011 07:48:50PM *  0 points [-]

In that case it's clear where we disagree, because I think we are completely justified in assuming independence of any two unknown propositions. Intuitively speaking, dependence is hard. In the space of all propositions the number of dependent pairs of propositions is insignificant compared to the number of independent pairs. But if it so happens that the two propositions are not independent, then I think we're saved by symmetry.

There are a number of different combinations of A and ~A and B and ~B but I think that their conditional "biases" all cancel each other out. We just don't know if we're dealing with A or with ~A, with B or with ~B. If for every bias there is an equal and opposite bias, to paraphrase Newton, then I think the independence assumption must hold.

Suppose you are handed three closed envelopes each containing a concealed proposition. Without any additional information I think we have no choice but to assign each unknown proposition probability 0.5. If you then open the third envelope and if it reads "envelope-A & envelope-B" then the probability of that proposition changes to 0.25 and the other two stay at 0.5.

If not 0.25, then which number do you think is correct?
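The envelope scenario can be simulated directly. A sketch under Zed's assumptions (each concealed proposition true with probability 0.5, independently):

```python
import random

random.seed(0)
N = 100_000

c_true = 0
for _ in range(N):
    a = random.random() < 0.5  # envelope-A's proposition turns out true?
    b = random.random() < 0.5  # envelope-B's proposition turns out true?
    c_true += (a and b)        # envelope-C reads "envelope-A & envelope-B"

print(c_true / N)  # close to 0.25
```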

Comment author: benelliott 15 August 2011 12:34:53AM 3 points [-]

Okay, in that case I guess I would agree with you, but it seems a rather vacuous scenario. In real life you are almost never faced with the dilemma of having to evaluate the probability of a claim without even knowing what that claim is; in this case, when you assign a probability of 0.5 to an envelope, you are merely assigning 0.5 probability to the claim that "whoever filled this envelope decided to put a true statement in".

When, as in almost all epistemological dilemmas, you can actually look at the claim you are evaluating, then even if you know nothing about the subject area you should still be able to tell a conjunction from a disjunction. I would never, ever apply the 0.5 rule to an actual political discussion, for example, where almost all propositions are large logical compounds in disguise.

Comment author: Jack 17 August 2011 07:50:29PM 0 points [-]

This can't be right. An unspecified hypothesis can contain as many sentence letters and operators as you like; we still don't have any information about its content and so can't assign any probability other than 0.5. Take any well-formed formula in propositional logic. You can make that formula say anything you want by the way you assign semantic content to the sentence letters (in propositional logic, that is, unlike the predicate calculus, where one can specify independence). We have conventions where we don't do silly things like say "A AND ~B" and then have B come out semantically equivalent to ~A. It is also true that two randomly chosen hypotheses from a large set of mostly independent hypotheses are likely to be independent. But that is a judgment that requires knowing something about the hypotheses, which we don't, by stipulation. Note that it isn't just causal dependence we're worried about here: for all we know, A and B are semantically identical. By stipulation we know nothing about the system we're modeling; the 'space of all propositions' could be very small.

The answer for all three envelopes is, in the case of complete ignorance, 0.5.

Comment author: Zed 17 August 2011 09:29:52PM *  0 points [-]

I think I agree completely with all of that. My earlier post was meant as an illustration that once you say C = A & B, you're no longer dealing with a state of complete ignorance. You're in complete ignorance of A and B, but not of C. In fact, C is completely defined as being the conjunction of A and B. I used the illustration of an envelope because as long as the envelope is closed you're completely ignorant about its contents (by stipulation), but once you open it that's no longer the case.

The answer for all three envelopes is, in the case of complete ignorance, 0.5.

So the probability that all three envelopes happen to contain a true hypothesis/proposition is 0.125 based on the assumption of independence. Since you said "mostly independent" does that mean you think we're not allowed to assume complete independence? If the answer isn't 0.125, what is it?

edit:

If your answer to the above is "still 0.5" then I have another scenario. You're in total ignorance of A. B denotes the proposition that a roll of a regular die comes up 6. What's the probability that A & B are true? I'd say it has to be 1/12, even though it's possible that A and B are not independent.
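The die scenario is a two-line calculation under the independence assumption Zed is defending:

```python
from fractions import Fraction

p_A = Fraction(1, 2)   # total ignorance about A
p_B = Fraction(1, 6)   # B: a fair die comes up 6

p_A_and_B = p_A * p_B  # independence assumed
print(p_A_and_B)       # 1/12
```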

Comment author: Patrick 14 August 2011 03:34:24PM 0 points [-]

The number of possible probability distributions is far larger than the two induced by the belief that P, and the belief that ~P.

Comment author: Zed 14 August 2011 04:32:43PM 0 points [-]

If at this point you don't agree that the probability is 0.5 I'd like to hear your number.

Comment author: Patrick 14 August 2011 05:34:55PM 3 points [-]

P(A) = 2^-K(A).

As for ~A, see: http://lesswrong.com/lw/vs/selling_nonapples/ (The negation of a complex proposition is much vaguer, and hence more probable (and useless))
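Patrick's prior P(A) = 2^-K(A) uses Kolmogorov complexity K, which is uncomputable, so any code can only gesture at it. A heavily hedged sketch using compressed length as a crude upper-bound proxy for K (the function name is mine, and the result is unnormalised — illustration only, not Solomonoff induction):

```python
import zlib

def crude_complexity_prior(statement: str) -> float:
    """Rough stand-in for P(A) = 2**-K(A): compressed length in bits
    is a computable (loose) upper bound on Kolmogorov complexity."""
    k_bits = 8 * len(zlib.compress(statement.encode()))
    return 2.0 ** -k_bits

simple = crude_complexity_prior("A")
longer = crude_complexity_prior("A & B & C & D & E & F & G")
print(simple > longer)  # shorter statement -> higher prior
```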