
Will_Pearson comments on Belief in the Implied Invisible - Less Wrong

30 Post author: Eliezer_Yudkowsky 08 April 2008 07:40AM



Comment author: Will_Pearson 08 April 2008 09:53:32AM 0 points [-]

"In Solomonoff induction, the complexity of your model is the amount of code in the computer program you have to write to simulate your model. The amount of code, not the amount of RAM it uses, or the number of cycles it takes to compute."

What!? Are you assuming that everyone has the exact same data on the positions of the quarks of the universe stashed in a variable? The code/data divide is not useful: code can substitute for data, and data for code (as in interpreted languages).

Let us say I am simulating the quarks and stuff for your region of space, and I would like my friend Bob to be able to make the same predictions about you (although most likely they would be postdictions, as I wouldn't be able to make them faster than real time). I send him my program (sans quark positions), but he still can't simulate you. He needs the quark positions; they are as much code for the simulator as the physical laws are.

Or to put it another way, quark positions are to physics simulators as the initial state of the tape is to a UTM simulator. That is code, especially as physics simulations are computationally universal.
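The tape-as-code point above can be sketched in a few lines. This is a made-up toy, not anything from the post: a fixed interpreter (the "laws") whose behaviour is entirely determined by the tape it is handed (the "initial state"), so the tape's length plausibly belongs in the description length just as much as the interpreter's.

```python
# Toy illustration (hypothetical, not from the post): an interpreter whose
# "data" is itself a program, blurring the code/data divide.

def run(tape: str) -> int:
    """Interpret the tape: '+' increments, '-' decrements a single counter."""
    counter = 0
    for symbol in tape:
        if symbol == "+":
            counter += 1
        elif symbol == "-":
            counter -= 1
    return counter

# The interpreter (the "laws") is fixed and tiny; everything interesting
# lives on the tape (the "quark positions"), so the tape is effectively code.
assert run("+++-") == 2
```

Under this reading, two simulators with identical laws but different initial tapes are different programs with different description lengths.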

I personally don't put much stock in Occam's razor.

Comment author: [deleted] 04 May 2012 09:09:56PM 0 points [-]

In reality, all the computer program specifies is the simulation of a QM wave function (a complex scalar field in an infinite-dimensional Hilbert space, with space curvature or something like that), along with the minimum message specifying the conditions of the Big Bang.

Comment author: dlthomas 04 May 2012 10:16:15PM *  1 point [-]

You confuse data, which should absolutely be counted (compressed) as complexity, with required RAM, which (EY asserts) should not.

I am well convinced that RAM requirements shouldn't be counted exclusively, and fairly well convinced that they shouldn't be counted similarly to rules; I am not convinced they shouldn't be counted at all. A log*(RAM) factor in the prior wouldn't make a difference for most judgements, but might tip the scale on MWI vs. collapse. That said, I am not at all confident it does weigh in.
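A quick sketch of why a log*(RAM) factor "wouldn't make a difference for most judgements": the iterated logarithm grows so slowly that even astronomically large memory use adds only a handful of bits. The penalty scheme below is illustrative only, assuming the commenter's suggestion of adding log* of the memory use to the code length.

```python
import math

def log_star(n) -> int:
    """Iterated logarithm: how many times log2 must be applied before n <= 1."""
    count = 0
    while n > 1:
        n = math.log2(n)
        count += 1
    return count

def penalized_length(code_bits: int, ram_cells: int) -> float:
    """Hypothetical description length: code length plus a log*(RAM) penalty."""
    return code_bits + log_star(ram_cells)

# Even 2**65536 memory cells add only 5 "bits" of penalty:
# 2**65536 -> 65536 -> 16 -> 4 -> 2 -> 1.
assert log_star(2 ** 65536) == 5
```

So the factor only matters when two hypotheses are already nearly tied on code length, which is roughly the MWI-vs-collapse situation being discussed.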