SforSingularity comments on Mathematical simplicity bias and exponential functions - Less Wrong

12 Post author: taw 26 August 2009 06:34PM




Comment author: SforSingularity 29 August 2009 08:58:25PM 0 points [-]

There is a formal theory describing how to balance model complexity against fit to data: describe the model as a program on a simple, fixed Turing machine, then penalize that model by assigning it a prior probability of 2^-L, where L is the length of the program...

This has all been worked out.
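The 2^-L prior can be sketched as a small model-selection toy. In this sketch the candidate models, their program lengths, and their likelihoods are all made-up illustrative numbers, not taken from any real encoding:

```python
# Hypothetical candidate models: (name, program length L in bits, likelihood of the data).
# All numbers here are illustrative assumptions.
models = [
    ("constant", 10, 1e-9),   # short program, fits the data poorly
    ("linear",   25, 1e-3),   # longer program, fits much better
    ("cubic",    60, 2e-3),   # longest program, only slightly better fit
]

def posterior_weights(models):
    """Weight each model by prior 2^-L times likelihood, then normalize."""
    scores = [(name, 2.0 ** -L * lik) for name, L, lik in models]
    total = sum(s for _, s in scores)
    return {name: s / total for name, s in scores}

weights = posterior_weights(models)
best = max(weights, key=weights.get)
```

Here the extra 15 bits of the "linear" model cost it a factor of 2^-15 in prior, but its far better fit more than compensates, while "cubic" pays 35 further bits for only a doubled likelihood and loses.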

Comment author: Nick_Tarleton 29 August 2009 11:07:12PM 1 point [-]

Which Turing machine, though?

Comment author: SforSingularity 29 August 2009 11:38:14PM 0 points [-]

How about picking one of these?

Comment author: steven0461 29 August 2009 11:56:35PM 2 points [-]
Comment author: whpearson 29 August 2009 10:21:16PM 0 points [-]

It can't be bettered on average, assuming that the thing you are modelling is computable.

But I haven't seen any proof that any other strategy will do worse on average. Anyone got any links?

Comment author: SforSingularity 29 August 2009 10:23:54PM 0 points [-]

See Hutter, Legg.

Comment author: whpearson 29 August 2009 11:26:39PM *  0 points [-]

If I understand the maths right, the important part of http://www.hutter1.net/ai/paixi.ps for using Kolmogorov complexity is the part of section 2 that says

"The SPΘμ system is best in the sense that EnΘμ ≤ Enρ for any ρ."

That doesn't rule out EnΘμ = Enρ holding for a large number of different ρ, which would undermine any claim that this is the one right way of doing things.

I was interested in links to papers that rule that possibility out.