
Eliezer_Yudkowsky comments on Reductionism - Less Wrong

40 points · Post author: Eliezer_Yudkowsky · 16 March 2008 06:26AM



Comment author: Eliezer_Yudkowsky 05 February 2013 07:39:54PM 2 points

Solomonoff induction is about putting probability distributions on observations - you're looking for the best combination of a simple program and a high probability assigned to the observations, with each program weighted by 2^-length times the probability it assigns to the data. Technically, the original SI doesn't talk about causal models you're embedded in, just programs that assign probabilities to experiences.
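A minimal sketch of that scoring rule, assuming we can stand in for real programs with toy entries; the program names, bit-lengths, and likelihoods below are invented for illustration:

```python
# Toy "programs": each has a length in bits (a proxy for complexity) and
# a probability it assigns to the observed data sequence. All names and
# numbers are illustrative, not taken from any real SI computation.
programs = {
    "simple_good_fit":  {"length_bits": 10, "p_obs": 0.50},
    "simple_poor_fit":  {"length_bits": 10, "p_obs": 0.01},
    "complex_good_fit": {"length_bits": 40, "p_obs": 0.60},
}

def si_weight(length_bits, p_obs):
    """Solomonoff-style score: 2^-length prior, times likelihood of the data."""
    return 2.0 ** (-length_bits) * p_obs

scores = {name: si_weight(p["length_bits"], p["p_obs"])
          for name, p in programs.items()}
total = sum(scores.values())
posterior = {name: s / total for name, s in scores.items()}

# The dominant hypothesis: a short program that still fits the data well
# beats a much longer program even when the longer one fits slightly better.
best = max(posterior, key=posterior.get)
```

Note how `complex_good_fit` loses despite assigning the highest likelihood: the 2^-40 prior penalty swamps its small advantage in fit.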

Generalizing somewhat: for QM as it appears to humans, the generalized-SI-selected hypothesis would be something along the lines of one program that extrapolated the wavefunction, then another program that looked for people inside it and translated the underlying physics into the "observed data" from their perspective, then put probabilities on the sequences of data corresponding to the integrated squared modulus (the Born rule). Note that you also need an interface from atoms to experiences just to e.g. translate a classical atomic theory of matter into "I saw a blue sky", and an implicit theory of anthropics/sum-probability-measure too, if the classical universe is large enough to contain more than one copy of you.
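The final Born-rule step can be sketched as follows; the amplitudes and outcome labels are invented for illustration, standing in for whatever data sequences the observer-finding program produces:

```python
# Toy "interface" step: given complex amplitudes for the data sequences an
# observer could record, assign each sequence a probability proportional to
# the squared modulus of its amplitude (the Born rule).
amplitudes = {
    "saw_spin_up":   0.6 + 0.0j,  # illustrative amplitude
    "saw_spin_down": 0.0 + 0.8j,  # illustrative amplitude
}

def born_probabilities(amps):
    """P(observation) = |amplitude|^2, normalized over all observations."""
    squared = {k: abs(a) ** 2 for k, a in amps.items()}
    z = sum(squared.values())
    return {k: v / z for k, v in squared.items()}

probs = born_probabilities(amplitudes)
```

The phase of each amplitude drops out entirely here: only the modulus survives the squaring, which is why `0.0 + 0.8j` contributes exactly as much as `0.8 + 0.0j` would.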

Comment author: Kawoomba 05 February 2013 07:42:35PM 1 point

Thanks for this. I'll mull it over.

Comment author: private_messaging 05 February 2013 10:29:42PM 1 point
Comment author: whowhowho 05 February 2013 08:04:12PM 2 points

It isn't at all clear why all that would add up to something simpler than a single-world theory.

Comment author: Eliezer_Yudkowsky 05 February 2013 08:08:19PM 8 points

Single-world theories still have to compute the wavefunction, identify observers, and compute the integrated squared modulus. Then they have to pick out a single observer with probability proportional to the integral, peek ahead into the future to determine when a volume of probability amplitude will no longer strongly causally interact with that observer's local blob, and eliminate that blob from the wavefunction. Then translating the reductionist model into experiences requires the same complexity as before.
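The structure of that argument can be made explicit with a toy comparison: the single-world program contains every step of the many-worlds program as a strict subset, plus branch-selection and pruning steps on top. The step labels below are illustrative shorthand, not actual code for either theory:

```python
# Steps both theories must implement (per the argument above):
many_worlds_steps = [
    "extrapolate_wavefunction",
    "locate_observers",
    "assign_born_probabilities",
]

# A single-world theory needs all of the above, PLUS the machinery to pick
# one branch and erase the rest of the wavefunction:
single_world_steps = many_worlds_steps + [
    "sample_one_observer_by_born_weight",
    "detect_decohered_amplitude",
    "delete_decohered_amplitude",
]

# A program that is a strict superset of another cannot be shorter.
extra_steps = len(single_world_steps) - len(many_worlds_steps)
```

Since every many-worlds step appears verbatim in the single-world list, the single-world program's description length is the many-worlds length plus a strictly positive overhead, which is the point of the comment above.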

Basically, it's not simpler for the same reason that, in a spatially big universe, it wouldn't be 'simpler' to have a computer program that picked out one observer, calculated when any photon or bit of matter was moving away and wasn't going to hit anything that would reflect it back, and then eliminated that matter.