cousin_it comments on The prior of a hypothesis does not depend on its complexity - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
A prior - especially one as fundamental as the universal prior - must be coherent from the start. Inspecting your hypotheses more closely without looking at the universe should never cause "updating".
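The coherence requirement can be made concrete with a minimal sketch (the three-variable sample space and the uniform prior here are illustrative assumptions, not from the original discussion): once a prior assigns probabilities to the atomic events, the probability of any Boolean combination is already fixed, so "inspecting" a combination more closely can never change it.

```python
from itertools import product

# Hypothetical setup: a coherent prior over a finite sample space of
# three Boolean variables, uniform for simplicity.
space = list(product([False, True], repeat=3))
prior = {w: 1 / len(space) for w in space}

def p(event):
    """Probability of an event, given as a predicate over sample points."""
    return sum(prior[w] for w in space if event(w))

a = lambda w: w[0]
b = lambda w: w[1]

# Coherence: P(A or B) is already determined by P(A), P(B), P(A and B)
# via inclusion-exclusion; no further "updating" is possible.
lhs = p(lambda w: a(w) or b(w))
rhs = p(a) + p(b) - p(lambda w: a(w) and b(w))
assert abs(lhs - rhs) < 1e-12
```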
To be fair, would you agree that this is a big weakness of probability theory? It can be really difficult to sort out some of those Boolean combinations.
Edit: Why the downvote?
I'm pretty sure it doesn't break probability theory per se, only limits its applicability. For example, Wei Dai wants UDT to use a "mathematical intuition module" that assigns degrees of belief to math statements, while I'm skeptical of the idea.
I argued here that it means an AI can't compute its own prior. That argument covers all mathematical statements. If we restrict to Boolean combinations over a finite sample space, then I think a similar argument shows that an AI can't compute its own prior in polynomial time, unless P = NP.
Edit: Wait, that last might be boneheaded. For given inputs we can compute Boolean expressions pretty efficiently; I think we just can't efficiently determine whether they're tautologies.
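The distinction in the edit above can be sketched as follows (the particular formula is a made-up example): evaluating a Boolean expression on one given assignment is cheap, but computing its prior probability under a uniform distribution means counting satisfying assignments, which by brute force takes time exponential in the number of variables.

```python
from itertools import product

# Hypothetical formula over three variables; evaluating it on a single
# assignment is fast (polynomial in the formula size).
def formula(x):
    return (x[0] or x[1]) and (not x[1] or x[2])

n = 3

# The formula's prior under a uniform distribution is its fraction of
# satisfying assignments -- a #SAT-style count, exponential by brute force.
count = sum(formula(x) for x in product([False, True], repeat=n))
prob = count / 2 ** n  # 4 of 8 assignments satisfy it, so 0.5
```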
That depends on what statements are "expressible" by the AI. If it can use quantification ("this Boolean formula is true for some/all inputs"), computing the prior becomes NP-hard. An even trickier case: imagine programming the AI with ZFC and asking it for its prior on the continuum hypothesis. On the other hand, you may use clever programming to avoid evaluating prior values that are difficult or impossible to evaluate, as Monte Carlo AIXI does.
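The "clever programming" point can be illustrated with a Monte Carlo sketch (this is an illustration in the spirit of Monte Carlo AIXI, not that system's actual algorithm, and the formula is again a made-up example): sampling random assignments gives an unbiased estimate of a formula's prior probability without enumerating all 2**n inputs.

```python
import random

# Hypothetical formula whose exact prior would require a full enumeration.
def formula(x):
    return (x[0] or x[1]) and (not x[1] or x[2])

def mc_prior(f, n_vars, samples=100_000, seed=0):
    """Estimate P(f is true) under a uniform prior by random sampling."""
    rng = random.Random(seed)
    hits = sum(f([rng.random() < 0.5 for _ in range(n_vars)])
               for _ in range(samples))
    return hits / samples

# The estimate converges to the true value (0.5 here) at rate O(1/sqrt(samples)),
# independent of the number of variables.
est = mc_prior(formula, 3)
```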
Far out.