Oscar_Cunningham comments on A Proof of Occam's Razor - Less Wrong

Post author: Unknowns 10 August 2010 02:20PM


Comment author: Oscar_Cunningham 10 August 2010 05:48:54PM 2 points

Clearly as complexity goes to infinity the probability goes to 0. That is, for any epsilon > 0 there exists an n such that all hypotheses with complexity > n have probability less than epsilon. (Otherwise infinitely many hypotheses would have probability >= epsilon, and taking more than 1/epsilon of them would produce a collection of hypotheses with total probability > 1.)
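The counting step above can be sketched numerically. The geometric prior 2^-(n+1) below is purely illustrative (it is not part of the comment's argument); the point is only that in any assignment of probabilities summing to at most 1, no more than 1/epsilon hypotheses can have probability >= epsilon:

```python
# Counting argument: if the probabilities sum to at most 1, then at most
# 1/epsilon hypotheses can have probability >= epsilon.

def count_above(probs, eps):
    """Count hypotheses whose probability is at least eps."""
    return sum(1 for p in probs if p >= eps)

# Hypothetical illustrative prior: the hypothesis of complexity n gets
# probability 2**-(n+1), for n = 0, 1, 2, ...  (sums to just under 1).
probs = [2.0 ** -(n + 1) for n in range(50)]

eps = 0.01
assert count_above(probs, eps) <= 1 / eps  # at most 100 such hypotheses
# In fact only complexities 0..5 have probability >= 0.01 here.
```

The same bound holds for any prior whatsoever, which is why "on average" the probability must fall off with complexity.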

So as the post says, "on average" Occam's razor must be true. (As must any other razor using a different measure of complexity, or flubbity, or whatever. However, message length seems to me to be the only "nice" measure to use on hypotheses.) But it doesn't say that the hypotheses have to be ranked in order of complexity. Of course it would be elegant if they were.

Why should we favour the elegant explanation?

Occam's razor of course!

Comment author: thomblake 11 August 2010 03:38:13AM 0 points

Well, taking this to be true, it seems like we could even construct an ordering that enumerates algorithms by something that maps to inverse complexity, which would prove the opposite of Occam's razor.

But I can't think of any such ordering, and I've got a proof kicking around in my head that there can be no such ordering, so this hunch is probably wrong.

Comment author: Oscar_Cunningham 11 August 2010 07:18:06AM 2 points

Indeed there is no such ordering.

Comment author: Nick_Tarleton 11 August 2010 07:40:28AM 3 points

For any given inverse complexity, infinitely many algorithms have a lower inverse complexity. This seems to break any attempt to enumerate in that order, or assign decreasing nonzero probabilities that sum to 1.
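A minimal sketch of why such an assignment fails. If probability is nondecreasing in complexity and some hypothesis gets probability eps > 0, then every more-complex hypothesis also gets at least eps, so the partial sums grow without bound. The constant assignment below is a hypothetical simplest case of an "anti-Occam" prior, used only to illustrate the divergence:

```python
# An "anti-Occam" prior would assign probabilities that do not decrease
# with complexity. If any hypothesis gets eps > 0, all of the infinitely
# many more-complex hypotheses get at least eps, so the sum exceeds 1.

def partial_sum(p, n_terms):
    """Sum p(n) over the first n_terms complexities n = 0..n_terms-1."""
    return sum(p(n) for n in range(n_terms))

eps = 0.001
anti_occam = lambda n: eps  # hypothetical constant (nondecreasing) assignment

# Already after 2000 hypotheses the total passes 1, so no such
# assignment can be a probability distribution.
assert partial_sum(anti_occam, 2000) > 1
```

So any genuine prior over all algorithms is forced to send probability to 0 as complexity grows, which is exactly the asymmetry the parent comments point at.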