Manfred comments on Logical uncertainty, kind of. A proposal, at least. - Less Wrong

Post author: Manfred 13 January 2013 09:26AM




Comment author: Manfred 14 January 2013 08:43:18PM 0 points

A nice way of thinking about it is that the robot can do unlimited probabilistic logic, but it only takes finite time because it's only working from a finite pool of proven theorems. When doing the probabilistic logic, the statements (e.g. A, B) are treated as atomic. So you can have effective inconsistencies, in that you can have an atom that says A, and an atom that says B, and an atom that effectively says 'AB', and unluckily end up with P('AB')>P(A)P(B). But you can't know you have inconsistencies in any way that would lead to mathematical problems. Once you prove that P('AB') = P(AB), where removing the quotes means breaking up the atom into an AND statement, then you can do probabilistic logic on it, and the maximum entropy distribution will no longer be effectively inconsistent.
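The mechanism described above can be sketched as a toy computation (my own illustration, not code from the post; only the atoms A, B, and 'AB' come from the comment, everything else is an assumption). With no proven theorems linking the atoms, the maximum-entropy distribution over a finite set of possible worlds is simply uniform, which produces the "effective inconsistency"; adding the theorem 'AB' ↔ (A and B) as a constraint on the worlds removes it:

```python
from itertools import product

def max_entropy(worlds):
    # With no further constraints, the maximum-entropy distribution
    # over a finite set of possible worlds is the uniform one.
    p = 1.0 / len(worlds)
    return {w: p for w in worlds}

def prob(dist, pred):
    return sum(p for w, p in dist.items() if pred(w))

# Worlds assign True/False to three atoms: A, B, and 'AB'
# (the quotes meaning 'AB' is treated as opaque, not as A-and-B).
unconstrained = max_entropy(list(product([False, True], repeat=3)))
pA  = prob(unconstrained, lambda w: w[0])
pB  = prob(unconstrained, lambda w: w[1])
pAB = prob(unconstrained, lambda w: w[2])
print(pA, pB, pAB)  # each 0.5, so P('AB') = 0.5 > P(A)*P(B) = 0.25

# After proving the theorem 'AB' <-> (A and B), worlds violating it
# are ruled out, and the max-entropy distribution becomes consistent.
constrained = max_entropy(
    [w for w in product([False, True], repeat=3) if w[2] == (w[0] and w[1])])
print(prob(constrained, lambda w: w[0]),
      prob(constrained, lambda w: w[1]),
      prob(constrained, lambda w: w[2]))  # 0.5 0.5 0.25
```

This is only a sketch of the uniform special case; with nontrivial proven theorems the maximum-entropy distribution would generally not be uniform.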

Comment author: AlexMennen 14 January 2013 08:53:52PM 0 points

Oh, I see. Do you know whether you can get different answers by atomizing the statements differently? For instance, will the same information always give the same resulting probabilities if the atoms are A and B as it would if the atoms are A and A-xor-B?

> P('AB')>P(A)P(B)

Not a problem if A and B are correlated. I assume you mean P('AB')>min(P(A), P(B))?

Comment author: Manfred 15 January 2013 09:45:23AM 0 points

Ah, right. Or even P('AB')>P(A).

You can't get different probabilities by atomizing things differently; all the atoms "already exist." But if you prove different theorems, or theorems about different things, then you can get different probabilities.
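The corrected inequality P('AB') > P(A) can also be shown in a toy computation (a sketch under my own assumptions, not from the thread): if the prover happens to have proven the atom 'AB' itself, but nothing about A and nothing linking 'AB' to A, the maximum-entropy distribution pins P('AB') at 1 while P(A) stays at 1/2:

```python
from itertools import product

# Worlds assign truth values to two atoms: A and the opaque atom 'AB'.
# Suppose the theorem prover has proven 'AB' as a theorem, but has not
# broken it into an AND statement, so nothing constrains A.
worlds = [w for w in product([False, True], repeat=2) if w[1]]  # 'AB' true
p = 1.0 / len(worlds)  # max-entropy over the remaining worlds is uniform
pA  = sum(p for w in worlds if w[0])
pAB = sum(p for w in worlds if w[1])
print(pAB, pA)  # 1.0 0.5 -- the effective inconsistency P('AB') > P(A)
```

Once the theorem P('AB') = P(A AND B) is proven, the constraint ties the atoms together and this inconsistency disappears, as in the parent comment.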