Jiro comments on Logic as Probability - Less Wrong

9 Post author: Manfred 08 February 2014 06:39AM


Comment author: VAuroch 11 February 2014 08:43:18AM *  -1 points

> It was your example, not mine.

No, you butchered it into a different example. You even introduced Lewis Carroll's paradox.

> You haven't convinced me, nor shown me, that this violates Cox's theorem.

He showed you. You weren't paying attention.

> Here, the robot is not in the limit of certainty because it cannot compute the required proof.

It can compute the proof. The laws of inference are axioms; P(A|B) is necessarily known a priori.

> such that P("wet outside"|"rain")=0.5 until such time as it has obtained a proof that "rain" correlates 100% with "wet outside".

There is no such time. Either it's true initially, or it will never be established with certainty. If it's true initially, that's because it is an axiom. Which was the whole point.

Comment author: Jiro 11 February 2014 09:40:27AM 0 points

> The laws of inference are axioms; P(A|B) is necessarily known a priori.

It does not follow that because someone knows some statements they also know the logical consequences of those statements.
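Jiro's point can be made concrete with a toy sketch (hypothetical code, not from the thread): a forward-chaining reasoner that holds an axiom and all the inference rules, yet, under a finite step budget, has not yet derived a distant consequence. The names and the step-budget framing are illustrative assumptions.

```python
# Toy illustration (assumed, not from the discussion): a reasoner can
# "know" its axioms and rules while not yet having derived all consequences.

def derive(axioms, rules, max_steps):
    """Forward-chain: repeatedly apply implication rules (a -> b),
    stopping when the step budget runs out or nothing new is derived."""
    known = set(axioms)
    for _ in range(max_steps):
        new = {b for (a, b) in rules if a in known and b not in known}
        if not new:
            break
        known |= new
    return known

# A long implication chain: p0 -> p1 -> ... -> p100.
rules = [(f"p{i}", f"p{i+1}") for i in range(100)]

# With a tiny budget the reasoner holds the axiom p0 but has not
# established the consequence p100; with enough steps it has.
assert "p100" not in derive({"p0"}, rules, max_steps=3)
assert "p100" in derive({"p0"}, rules, max_steps=100)
```

The idealized reasoner VAuroch describes corresponds to taking the step budget to infinity, which is exactly where the two sides of this exchange diverge.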

Comment author: VAuroch 11 February 2014 09:54:20AM -1 points

When the someone is an idealized system of logic, it does. And we're discussing an idealized system of logic here. So it does.

Comment author: Kurros 11 February 2014 10:20:52AM 0 points

No, we aren't; we're discussing a robot with finite resources. I obviously agree that an omniscient god of logic can skip these problems.

Comment author: VAuroch 11 February 2014 10:29:37AM -1 points

The limitations imposed by bounded resources are the next entry in the sequence. For this one, we're still discussing the unbounded case.

Comment author: Kurros 11 February 2014 10:43:37AM 0 points

Very well, then I will wait for the next entry. But I thought the fact that we were explicitly discussing things the robot could not compute made it clear that resources were limited. There is clearly no such thing as logical uncertainty for the magic logic god of the idealized case.