Manfred comments on Logic as Probability - Less Wrong

9 Post author: Manfred 08 February 2014 06:39AM


Comment author: Kurros 11 February 2014 01:54:26AM *  -1 points [-]

You haven't been very specific about what you think I'm doing incorrectly, so it is kind of hard to figure out what you are objecting to. I corrected your example to what I think it should be so that it satisfies the product rule; where's the problem? How do you propose that the robot can possibly set P("wet outside"|"rain")=1 when it can't do the calculation?
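The product-rule bookkeeping at issue here can be sketched in a few lines. This is a minimal illustration, not anything from the thread itself: the numeric values (a prior for "rain", and the robot's intermediate credence of 0.5) are assumed purely for the example.

```python
# Hypothetical numbers for the rain / wet-outside example.
p_rain = 0.3                    # robot's prior P("rain"); assumed value
p_wet_given_rain = 0.5          # robot's P("wet outside" | "rain") before any proof

# Product rule: P(A and B) = P(A | B) * P(B).
p_wet_and_rain = p_wet_given_rain * p_rain

# The product rule is satisfied by construction when the joint is built
# this way; the dispute in the thread is over what value P(A|B) may take.
print(p_wet_and_rain)  # 0.15
```

The point of the sketch: nothing in the product rule itself forces P("wet outside"|"rain") to equal 1; that value has to come from somewhere else (axioms, prior, or computation).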

Comment author: Manfred 11 February 2014 05:59:28AM 0 points [-]

In your example, it can't, because the axioms you picked do not determine the answer: you are incorrectly translating classical logic into probabilistic logic. And then, as one would expect, your translation of classical logic doesn't reproduce classical logic.

Comment author: Kurros 11 February 2014 06:51:42AM *  -1 points [-]

It was your example, not mine. But you made the contradictory postulate that P("wet outside"|"rain")=1 follows from the robot's prior knowledge and the probability axioms, and simultaneously that the robot was unable to compute this. To correct this I alter the robot's probabilities such that P("wet outside"|"rain")=0.5 until such time as it has obtained a proof that "rain" correlates 100% with "wet outside". Of course the axioms don't determine this; it is part of the robot's prior, which is not determined by any axioms.

You haven't convinced nor shown me that this violates Cox's theorem. I admit I have not tried to follow the proof of this theorem myself, but my understanding was that the requirement you speak of is that the probabilistic logic reproduces classical logic in the limit of certainty. Here, the robot is not in the limit of certainty because it cannot compute the required proof. So we should not expect to get the classical logic until updating on the proof and achieving said certainty.
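The update Kurros is describing can be sketched as a toy model. This is an assumed formulation for illustration only (the function name and the intermediate credence of 0.5 are hypothetical, not from the original post): the robot holds an intermediate conditional credence until it has verified the implication, then updates to certainty.

```python
def p_wet_given_rain(proof_found: bool) -> float:
    """Toy model of the robot's credence P("wet outside" | "rain").

    Before the robot has computed a proof that "rain" implies
    "wet outside", it assigns an intermediate value (0.5, assumed);
    after updating on the proof, it assigns 1.
    """
    return 1.0 if proof_found else 0.5

print(p_wet_given_rain(False))  # before the proof: 0.5
print(p_wet_given_rain(True))   # after updating on the proof: 1.0
```

On this picture, classical logic is only recovered in the limit of certainty, i.e. after the proof has been found, which is exactly the point under dispute in the replies below.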

Comment author: VAuroch 11 February 2014 08:43:18AM *  -1 points [-]

It was your example, not mine.

No, you butchered it into a different example. Introduced the Lewis Carroll Paradox, even.

You haven't convinced nor shown me that this violates Cox's theorem.

He showed you. You weren't paying attention.

Here, the robot is not in the limit of certainty because it cannot compute the required proof.

It can compute the proof. The laws of inference are axioms; P(A|B) is necessarily known a priori.

such that P("wet outside"|"rain")=0.5 until such time as it has obtained a proof that "rain" correlates 100% with "wet outside".

There is no such time. Either it's true initially, or it will never be established with certainty. If it's true initially, that's because it is an axiom. Which was the whole point.

Comment author: Jiro 11 February 2014 09:40:27AM 0 points [-]

The laws of inference are axioms; P(A|B) is necessarily known a priori.

It does not follow that because someone knows some statements they also know the logical consequences of those statements.

Comment author: VAuroch 11 February 2014 09:54:20AM -1 points [-]

When the someone is an idealized system of logic, it does. And we're discussing an idealized system of logic here. So it does.

Comment author: Kurros 11 February 2014 10:20:52AM 0 points [-]

No, we aren't; we're discussing a robot with finite resources. I obviously agree that an omnipotent god of logic can skip these problems.

Comment author: VAuroch 11 February 2014 10:29:37AM -1 points [-]

The limitations imposed by bounded resources are the subject of the next entry in the sequence. For this one, we're still discussing the unbounded case.

Comment author: Kurros 11 February 2014 10:43:37AM 0 points [-]

Very well, then I will wait for the next entry. But I thought the fact that we were explicitly discussing things the robot could not compute made it clear that resources were limited. There is clearly no such thing as logical uncertainty for the magic logic god of the idealised case.