eli_sennesh comments on Treating anthropic selfish preferences as an extension of TDT - Less Wrong

Post author: Manfred 01 January 2015 12:43AM




Comment author: [deleted] 02 January 2015 05:47:57PM 2 points

What probability do you give the simulation hypothesis?

Some extremely low prior based on its necessary complexity.

This is true - and I do think the probability of this is negligible.

No, you have no information about that probability. You can assign a complexity prior to it and nothing more.

Why do those conflict at all? I feel like you may be talking about a nonstandard use of Occam's razor.

They conflict because you have two perspectives, and therefore two different sets of information, and therefore two very different distributions. Assume the scenario happens: the person running the simulation from outside has information about the simulation. They have the evidence necessary to defeat the low prior on "everything So and So experiences is a simulation". So and So himself does not have that information. His limited information, from sensory data that exactly matches the real, physical, lawful world rather than a mutable simulated environment, rationally justifies a distribution in which "This is all physically real, and I am in fact not going to a tropical paradise in the next minute, because I'm not a computer simulation" is the maximum a posteriori hypothesis, taking up the vast majority of the probability mass.
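A toy Bayesian update (my own sketch, not from the thread; all numbers are purely illustrative) makes the two-perspectives point concrete: both agents can share the same low complexity prior on "this is a simulation", but only the outside observer receives evidence that discriminates between the hypotheses.

```python
# Illustrative sketch: same prior, different evidence, different posteriors.
prior_sim = 2 ** -20  # made-up low complexity-style prior, ~1e-6

def posterior(prior, likelihood_if_sim, likelihood_if_real):
    """Bayes' rule: P(sim | evidence)."""
    joint_sim = prior * likelihood_if_sim
    joint_real = (1 - prior) * likelihood_if_real
    return joint_sim / (joint_sim + joint_real)

# The outside observer watches the simulation run: that observation is
# near-certain under "sim" and near-impossible under "real".
outside = posterior(prior_sim, likelihood_if_sim=0.99, likelihood_if_real=1e-9)

# So and So's sensory data matches the physical world equally well under
# both hypotheses, so the likelihoods cancel and his posterior stays at
# the prior.
inside = posterior(prior_sim, likelihood_if_sim=1.0, likelihood_if_real=1.0)

print(outside)  # close to 1: the low prior is defeated
print(inside)   # unchanged from the prior, ~1e-6
```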

Comment author: Manfred 02 January 2015 08:08:14PM 0 points

So, the standard Bayesian analogue of Solomonoff induction is to put a complexity prior over computable predictions about future sensory inputs. If the shortest program outputting your predictions looks like a specification of a physical world, and then an identification of your sensory inputs within that world, and the physical world in your model has both a meatspace copy of you and a simulated copy of you, the only difference in this Solomonoff-analogous prior between being a meat-person and a chip-person is the complexity of identifying your sensory inputs. I think it is unfounded substrate chauvinism to think that your sensory inputs are less complicated to specify than those of an uploaded copy of yourself.

Comment author: [deleted] 03 January 2015 10:31:33AM 1 point

If the shortest program outputting your predictions looks like a specification of a physical world, and then an identification of your sensory inputs within that world, and the physical world in your model has both a meatspace copy of you and a simulated copy of you, the only difference in this Solomonoff-analogous prior between being a meat-person and a chip-person is the complexity of identifying your sensory inputs.

Firstly, this isn't a Solomonoff-analogous prior. It is the Solomonoff prior. Solomonoff Induction is Bayesian.

Secondly, my objection is that in all circumstances, if right-now-me does not possess actual information about uploaded or simulated copies of myself, then the simplest explanation for physically-explicable sensory inputs (i.e., sensory inputs that don't vary between physical and simulated copies), the explanation with the lowest Kolmogorov complexity, is that I am physical and also the only copy of myself in existence at the present time.

This means that the 1000 simulated copies must arrive at an incorrect conclusion for rational reasons: the scenario you invented deliberately and maliciously strips them of any means to distinguish themselves from the original, physical me. A rational agent cannot be expected to necessarily win in adversarially constructed situations.