Baughn comments on Estimating the Kolmogorov complexity of the known laws of physics? - Less Wrong

Post author: Strilanc 08 July 2013 04:30AM


Comment author: Baughn 08 July 2013 12:23:09PM * 0 points

"It's the program length that matters, not its time or memory performance"

That's a common assumption, but does this really make sense?

It seems to me that if you have a program doing two things (simulating two people, say), and it has twice the overhead for the second one, then the first should in some sense exist twice as much.

The same argument could be applied to universes.

Comment author: Baughn 09 July 2013 01:07:18AM 1 point

Okay, I wouldn't normally do this, but... what's with the downvote? I honestly have no idea what the problem is, which makes it hard to avoid. Please explain.

Comment author: Strilanc 08 July 2013 12:32:00PM 1 point

You're saying a Solomonoff Inductor would be outperformed by a variant that weighted quick programs more favorably, I think. (At the very least, it makes approximations computable.)

Whether penalizing for space/time cost would increase the corresponding complexity metric of the Standard Model is an interesting question, and there's a good chance the penalty would be large, since simulating QM seems to require exponential time; but for starters I'm fine with just an estimate of the Kolmogorov complexity.
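
For concreteness, a minimal sketch of the contrast being described here, assuming each candidate program can be summarized by a description length in bits and a runtime in steps. The names and numbers are hypothetical, and the time-penalized weight 2^-length / runtime is essentially Levin's Kt weighting, not anything specified in this thread:

    # Contrast a length-only (Kolmogorov-style) weight with a
    # time-penalized (Levin/speed-prior-style) weight.
    # All programs, lengths, and runtimes are hypothetical.

    def length_weight(length_bits: float) -> float:
        """Solomonoff-style weight: 2^-length, ignoring runtime."""
        return 2.0 ** -length_bits

    def speed_weight(length_bits: float, runtime_steps: float) -> float:
        """Levin-style weight: 2^-length divided by runtime."""
        return 2.0 ** -length_bits / runtime_steps

    # Two hypothetical candidate "laws of physics": (name, bits, steps).
    candidates = [
        ("classical_approx", 40, 1e6),   # longer program, cheap to run
        ("qm_exact",         30, 1e12),  # shorter program, slow to run
    ]

    for name, bits, steps in candidates:
        print(f"{name}: length-only={length_weight(bits):.2e}, "
              f"time-penalized={speed_weight(bits, steps):.2e}")

    # The length-only weight favors the shorter qm_exact program;
    # the runtime penalty flips the ranking toward classical_approx.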

Comment author: Baughn 09 July 2013 01:02:10AM * 2 points

Well, I'm saying the possibility is worth considering. I'm hardly going to claim certainty in this area.

As for QM...

The metric I think makes sense is, roughly, observer-moments divided by CPU time. Simulating QM takes exponential time, yes, but there's an equivalent exponential increase in the number of observer-moments. So QM shouldn't have a penalty vs. classical.

On the flip side, this type of prior would heavily favor low-fidelity simulations, but I don't know if that's any kind of strike against it.
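
As a toy illustration of that argument, assuming observer-moments and CPU steps can be tallied as plain numbers (the function and all quantities below are hypothetical, not Baughn's):

    # Toy version of the proposed metric: observer-moments per unit of
    # CPU time. If simulating n qubits costs ~2^n steps but also yields
    # ~2^n branch observer-moments, the ratio is independent of n, so
    # QM takes no penalty relative to a cheap single-branch simulation.

    def moments_per_cpu(observer_moments: float, cpu_steps: float) -> float:
        return observer_moments / cpu_steps

    for n in (10, 20, 30):  # hypothetical system sizes
        classical = moments_per_cpu(1.0, n)            # one branch, ~linear cost
        quantum = moments_per_cpu(2.0 ** n, 2.0 ** n)  # 2^n branches, 2^n steps
        print(n, classical, quantum)  # the quantum ratio stays 1.0 for every n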

Comment author: gwern 08 July 2013 03:54:54PM 0 points

Comment author: Baughn 09 July 2013 01:14:09AM 0 points

"Use of the Speed Prior has the disadvantage of leading to less optimal predictions"

Unless I'm misunderstanding something, doesn't this imply we've already figured out that that's not the true prior? Which would be very interesting indeed.

Comment author: Eugine_Nier 09 July 2013 04:02:29AM 1 point

Would someone mind explaining what "the true prior" is? Given that probability is in the mind, I don't see how the concept makes sense.

Comment author: Baughn 09 July 2013 09:33:16AM * 0 points

I was going for "Matches what the universe is actually doing", whether that means setting it equal to the apparent laws of physics or to something like a dovetailer.

Sure, there's no way of being sure we've figured out the correct rule, but that doesn't mean there isn't one.

Comment author: Eugine_Nier 10 July 2013 02:23:54AM 0 points

In other words, it's the prior that assigns probability 1 to the actual universe and 0 to everything else.

Comment author: Baughn 10 July 2013 10:58:15AM 0 points

Sure. A little hard to determine, fair enough.

Comment author: JoshuaZ 09 July 2013 04:31:13AM 0 points

I'm confused as well. They may mean something like "empirically not the optimal prior we can use with a small amount of computation", but that doesn't seem consistent with how the term is being used.

Comment author: Eugine_Nier 09 July 2013 04:50:59AM 0 points

"empirically not the optimal prior we can use with a small amount of computation"

I'm not even sure that makes sense: if this is based on empirical observations, presumably there was some prior prior that was updated based on those observations.

Comment author: JoshuaZ 09 July 2013 04:55:45AM 0 points

Well, they could be using a set of distinct priors (say 5 or 6 of them) and then noting over time which required less major updating in general, but I don't think that's what is going on either. We may need to just wait for Baughn to clarify what they meant.

Comment author: gwern 09 July 2013 02:01:00AM 0 points

has the disadvantage of leading to less optimal predictions

As far as I know, any computable prior will have that disadvantage relative to full, uncomputable SI.
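
For reference, the standard dominance result behind that claim, sketched as a gloss (with M the universal semimeasure, \mu any computable semimeasure, and the constant standard up to O(1) factors):

$$ \forall x:\quad M(x) \;\ge\; c \cdot \mu(x), \qquad c \approx 2^{-K(\mu)} $$

Dominance is what bounds M's prediction loss in any computable environment; a computable prior such as the Speed Prior cannot dominate M in return, which is the sense in which its predictions are less optimal.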