Tetronian comments on Stupid Questions Open Thread - Less Wrong

Post author: Costanza 29 December 2011 11:23PM 42 points

Comment author: [deleted] 30 December 2011 01:10:30PM 11 points

I would like someone who understands Solomonoff Induction/the universal prior/algorithmic probability theory to explain how the conclusions drawn in this post affect those drawn in this one. As I understand it, cousin_it's post shows that the probability assigned by the universal prior is not related to K-complexity; this basically negates the points Eliezer makes in Occam's Razor and in this post. I'm pretty stupid with respect to mathematics, however, so I would like someone to clarify this for me.

Comment author: Erebus 03 January 2012 10:47:27AM 1 point

Solomonoff's universal prior assigns a probability to every individual Turing machine. Usually the interesting statements or hypotheses about which machine we are dealing with are more like "the 10th output bit is 1" than "the machine has the number 643653". The first statement describes an infinite number of different machines, and its probability is the sum of the probabilities of those Turing machines that produce 1 as their 10th output bit (as the probabilities of mutually exclusive hypotheses can be summed). This probability is not directly related to the K-complexity of the statement "the 10th output bit is 1" in any obvious way. The second statement, on the other hand, has probability exactly equal to the probability assigned to the Turing machine number 643653, and its K-complexity is essentially (that is, up to an additive constant) equal to the K-complexity of the number 643653.

So the point is that a generic statement usually describes a huge number of distinct individual hypotheses, and that the complexity of the statement needed to pick out a set of Turing machines is not (necessarily) related to the complexities of the individual Turing machines in that set.
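
To make the distinction concrete, here is a minimal toy sketch (my own construction, not Solomonoff's actual prior and not anything from either post): programs are bit strings run through an identity "interpreter", each length-L program gets an assumed weight of 2^(-2L) purely so the total mass converges, and the probability of "the 10th output bit is 1" is computed by summing the weights of every program that satisfies it.

```python
# Toy illustration only: programs are bit strings, the "interpreter" is the
# identity map (output == program), and the weighting is an assumption chosen
# so the sum over all programs converges. The probability of a generic
# statement is a SUM over all programs satisfying it, not 2^(-K) of the
# statement's own description length.
from itertools import product

MAX_LEN = 12    # enumerate all programs up to this length
BIT_INDEX = 9   # "the 10th output bit" (0-based index 9)

def weight(program):
    """Toy weight 2^(-2L) for a length-L program (assumed for convergence,
    not Solomonoff's coding)."""
    return 2.0 ** (-2 * len(program))

total_mass = 0.0
statement_mass = 0.0
contributing = 0

for length in range(1, MAX_LEN + 1):
    for bits in product("01", repeat=length):
        program = "".join(bits)
        output = program                      # identity "interpreter"
        w = weight(program)
        total_mass += w
        if len(output) > BIT_INDEX and output[BIT_INDEX] == "1":
            statement_mass += w
            contributing += 1

print(f"{contributing} distinct programs satisfy the statement")
print(f"P('10th output bit is 1') ~ {statement_mass / total_mass:.6f}")
```

Thousands of distinct programs contribute to that one generic statement, whereas a statement like "the program is exactly this particular bit string" gets a single term in the sum; only the latter is tied to the description length of one specific program.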

Comment author: Manfred 02 January 2012 07:56:11AM 1 point

I don't think there's very much conflict. The basic idea of cousin_it's post is that the probabilities of generic statements are not described by a simplicity prior. Eliezer's post is about the reasons why the probabilities of a set of mutually exclusive explanations for your data should look like a simplicity prior (an explanation is a sort of statement, but for the arguments to work you can't assign probabilities to any old collection of explanations - they need to have this specific mutually exclusive structure).
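
For what it's worth, the counting argument usually given for that claim (my paraphrase, not a quote of either post) fits in one line: mutually exclusive hypotheses have probabilities summing to at most 1, so at most 1/eps of them can each be assigned probability eps or more, and probabilities must therefore fall off down any enumeration.

```python
# Paraphrase of the standard counting argument, not a quote of either post:
# mutually exclusive hypotheses have total probability at most 1, so only a
# bounded number of them can receive any given probability or more.

def max_hypotheses_with_prob_at_least(eps: float) -> int:
    """At most floor(1/eps) mutually exclusive hypotheses can each have
    probability >= eps, since their total mass cannot exceed 1."""
    return int(1 / eps)

for eps in (0.1, 0.01, 0.001):
    print(f"at most {max_hypotheses_with_prob_at_least(eps)} hypotheses can have P >= {eps}")
```

Once the explanations are enumerated by description length, "only a few hypotheses can be probable" turns into "probability must fall off with complexity", which is the sense in which the distribution ends up looking like a simplicity prior.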

Comment author: Will_Newsome 01 January 2012 02:32:08AM 1 point

Stupid question: Does everyone agree that algorithmic probability is irrelevant to human epistemic practices?

Comment author: torekp 02 January 2012 01:59:59AM 1 point

I see it as a big open question.

Comment author: [deleted] 01 January 2012 06:20:55AM 0 points

I don't think it's a clear-cut issue. Algorithmic probability seems to be the justification for several Sequence posts, most notably this one and this one. But, again, I am stupid with respect to algorithmic probability theory and its applications.

Comment author: Will_Newsome 03 January 2012 01:43:54AM -1 points