Introduction.

I've been thinking about Anthropic Arguments recently, and noticed a disturbing lack of hard-hitting thought experiments on the topic- ones that can be shown to be true or false given our knowledge of physics and such.

The Self Sampling Assumption (SSA) is as follows:

All other things equal, an observer should reason as if they are randomly selected from the set of all actually existent observers (past, present and future) in their reference class.

In this post, I'm using the reference class of "Algorithms identical to you, but perhaps in different times and places"- that seems the least objectionable. This includes exact simulated copies of you and exact physical duplicates of you.

The self sampling assumption claims that our experiences are probabilistic evidence about the observations of people in the reference class we're in. If hypothesis H1 predicts that observers like us exist and are a rather large proportion of all observers, and hypothesis H2 predicts that observers like us exist but are a very small proportion of all observers, then ceteris paribus we can favor H1 over H2. Our experiences provide meaningful evidence in this pretty unobjectionable and intuitive way.
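To make that concrete, here is a minimal sketch of the arithmetic (the hypothesis names, the prior, and the observer proportions are all made up for illustration):

```python
# A minimal sketch of the update SSA licenses, with made-up numbers.
# H1: observers like us are a large fraction of all observers.
# H2: observers like us exist, but are a tiny fraction of all observers.
prior = {"H1": 0.5, "H2": 0.5}
proportion_like_us = {"H1": 0.9, "H2": 1e-6}  # assumed proportions

unnormalized = {h: prior[h] * proportion_like_us[h] for h in prior}
total = sum(unnormalized.values())
posterior = {h: p / total for h, p in unnormalized.items()}
print(posterior)  # H1 ends up at ~0.999999
```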

There are real world consequences of accepting this sort of reasoning, some of which punish us with physics-verifiable irrationality: we will update in ways that we can verify, in advance and in retrospect, cannot be based on any communicated information, and more contrived examples can verifiably make us update incorrectly on specific information more often than we update correctly, even if we know all the relevant information.

This allows us to glimpse at the underlying problems with our intuitions about some sorts of anthropic reasoning.

Consider the following hypothetical:

There are a million exactly perfect copies of you (including you, of course) throughout the universe, all of which are non-local to one another- their future light cones do not intersect.

Now, consider the following two hypotheses, to each of which we've assigned probability exactly .5 for simplicity: 

  1. Scenario 1: All of your copies (including you) are exactly identical and will always be exactly identical. Their entire observable universe has been and always will be precisely the same, and they will not see a bright pink pop-up on their screen roughly a second after they read the sentence below these two hypotheses.
  2. Scenario 2: All of your copies (including you) are exactly identical until roughly a second after they read the sentence below these two hypotheses, at which point all but one of the copies get a bright pink pop-up on their screen saying "You were one of the 999,999!"

If you don't see a bright pink pop-up on your screen, should you update in favor of Scenario 1 and against Scenario 2? In a way, you're more likely to have not seen the pop-up if you're in Scenario 1. 
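Here is the naive SSA arithmetic that makes a "yes" feel compelling- a sketch only, with the single added assumption that you weight yourself uniformly across the million copies:

```python
# Naive SSA arithmetic for the pop-up hypothetical (a sketch, not an endorsement).
# Treat yourself as a uniform random draw from the 1,000,000 copies.
prior_s1 = prior_s2 = 0.5
p_no_popup_given_s1 = 1.0            # Scenario 1: nobody sees a pop-up
p_no_popup_given_s2 = 1 / 1_000_000  # Scenario 2: only one copy sees no pop-up

posterior_s1 = (prior_s1 * p_no_popup_given_s1) / (
    prior_s1 * p_no_popup_given_s1 + prior_s2 * p_no_popup_given_s2
)
print(posterior_s1)  # ~0.999999: a dramatic update toward Scenario 1
```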

It initially seems fair to me to say:

So, you're in Scenario 1, right? In fact, this naively seems (at least to me) to be a very fair conclusion. 

However:

You can presumably do something like this in real life! Make a bunch of probes with the information necessary to run exactly identical simulations of you in an exactly identical environment which will evolve exactly the same into the indefinite future, and have them fall out of your cosmological horizon. 

If any of the probes have seen <event x> by <time x> (You can basically ignore the time part of this, I'm including time to make the argument more intuitive), have them begin to tile their light cones with computational substrate prepared to run copies of you which will see a pink pop-up appear five seconds into the simulation. 

At <time x + y>, where y is enough time for the probes that saw <event x> to build enough computers for you to agree that the fact you don't see a pink pop-up appear provides strong justified evidence that none of the probes saw <event x>, begin all of the simulations, including the ones on the probes that didn't see <event x> (those probes each run exactly one copy, which sees no pop-up).
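To spell out the naive SSA/SSSA arithmetic this scheme leans on, here is a sketch of the likelihoods involved (the probe count and copies-per-probe are made-up placeholders; the rest of the post argues that the resulting "update" can't be legitimate):

```python
# Naive SSA likelihoods for the probe scheme (probe and copy counts are assumed).
# Probes that saw <event x> tile their light cones and run M pop-up copies each;
# probes that didn't see <event x> run exactly one no-pop-up copy each.
N = 1_000      # total probes (assumed)
M = 10**12     # pop-up copies per probe that saw <event x> (assumed)

def p_no_popup(k):
    """Naive SSA: chance a randomly sampled copy sees no pop-up, given k sightings."""
    no_popup_copies = N - k
    popup_copies = k * M
    return no_popup_copies / (no_popup_copies + popup_copies)

print(p_no_popup(0))  # 1.0: no pop-up is guaranteed if no probe saw <event x>
print(p_no_popup(1))  # ~1e-9: no pop-up naively looks like overwhelming evidence of zero sightings
```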

If you don't see a pop-up, and you think this somehow allows you to justifiably update in favor of no probes having seen <event x>, you're claiming that your brain is corresponding to phenomena entirely non-local to you.

The no-pop-up copies of you can't update on which outcome happened based on the fact that they're no-pop-up people (except in the trivial way that they can infer, with ~1 probability, that at least one probe didn't see <event x> by <time x>, since the probes that do see <event x> by <time x> show the pop-up to all of their copies with almost zero error). The pop-up people, on the other hand, can update to ~1 probability that at least one of the probes saw <event x> by <time x>.

If you don't see the pop-up, you will update in exactly the same way regardless of which outcome actually happened. That's what's really going on- there's no change in how you update based on the thing you're trying to update on, so you're not doing meaningful cognitive work.

The other members of your reference class don't need to be outside of your cosmological horizon, either. If your method makes you update your probabilities in some way that does not respect locality (for example, by letting you update faster than the speed of light), it can't work.

In fact, you can "communicate" across time too with very similar schemes- and across universes, branches of the universal wavefunction, heavenly spheres, etc. Your model isn't respecting position in general. This is a reductio ad absurdum of the entire idea (namely, the self sampling assumption and the broader family of intuitions like it). 

The fact that you're experiencing or not experiencing something cannot be treated as evidence for how likely it was for copies of you to experience or not experience that thing. You can't reason as if you were randomly selected from the pool of all actually existent observers in your reference class- if you do, you can draw conclusions about what your reference class looks like in weird physics breaking ways. 

If you look closely at an actual example of this effect, we can tell that the self sampling assumption doesn't allow you to gain any information you don't already have. You could always change the reference class, but to what? I think you need to change it to "exactly you" in order to avoid making unjustified claims about information you don't have access to, which defeats the entire purpose.

There are even sketchier versions of this "type" of reasoning such as the Doomsday Argument. The Doomsday Argument can't allow you to justifiably update for very similar reasons.

What if...

"What if, before you halt your simulation and copy yourself into the probes, you conclude that <event x> is going to be observed by at least one probe with probability 99%. What probability estimate should you use for seeing a pink pop-up? Shouldn't it be really high?"

Yes, but I would already believe that. Anthropic reasoning isn't giving me any evidence.

"But if you don't see a pink pop-up, are you going to change your estimate that at least one probe saw the event, with probability 99%?"

I can't. If I claim that I'm making a justified update using this principle of reasoning, then I'm claiming I'm breaking the laws of physics into tiny bits using nothing but this super sketchy principle of reasoning. I'm going to keep my probabilities exactly the same- it turns out that this literally isn't evidence, at least to the no pop-up copies of me.

"That seems silly."

It does, but it seems less silly than breaking locality. I'm not going to update in ways that I don't think correspond with the evidence, and in this situation, while it seems that my update would correspond with the evidence, it can't.

"Well, why is it silly? You've shown that this plausible-sounding type of Anthropic Reasoning is wrong- but where did it go wrong? What if it works for other scenarios?"

Well, how did we decide it was plausible in the first place?  I have a sneaking suspicion that the root of this confusion is a questionable view of personal identity and questionable handling of reference classes. I'll write something about these in a later post... hopefully.

Pretty Normal?

Disclaimer: I'm going to say P = 1 and P = 0 for some things. This isn't actually true, of course- yada yada, the piping of flutes, reality could be a lie. At least, I wouldn't know if it were true. I don't think constantly keeping in mind the fact that we could all be in some Eldritch God's dream and the other sorts of obscure nonsense required for the discussed probabilities to be wrong is useful here, so go away. 

Also, yes, the universe is seemingly deterministic and that's in conflict with some of the wording here, but the idea still applies. Something something self locating uncertainty in many worlds.

Consider the following example:

If you survive the cold war, you're going to downgrade your probability that the cold war was dangerous. Seems clear enough...

But, wouldn't we always have updated in the direction of the cold war being less dangerous? Whenever we perform this update, we're always going to update in the direction of the cold war having been less dangerous- we need to have survived to perform the update at all. I'm never going to update toward something having been more dangerous in this way. Updates are only ever made in one direction, regardless of the true layout of reality...

We already know how we're going to update.

Because of the weird selection effects behind our updates, we already know that whenever an update is being done, by these rules, we're updating towards the world having been safe.

"But, my errors are still decreasing on average, right?" 

You will have the lowest retrospective rates of inaccuracy if you conclude that things were safe, because you're never going to gather evidence contrary to that notion. You're not gaining evidence about the actual retrospective probability, though- the cold war could have had an arbitrarily high probability of wiping us out and you'd reach the exact same conclusion based on this particular evidence. 

Every sentient being throughout all of space and time is always deciding that things were safer, using this principle. This particular evidence is already perfectly correlated with you existing- you're not learning anything.
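Here is a quick simulation of that selection effect (the per-year risk numbers are made up): whatever the true risk, every observer who is still around to perform the update performs it in the same direction.

```python
import random

# Survivorship selection effect, sketched with made-up per-year risk numbers.
# Only survivors ever perform the "was it dangerous?" update, so every update
# that actually happens points toward "it was safe", whatever the true risk.
def count_survivors(yearly_risk, years=40, trials=100_000):
    return sum(
        all(random.random() > yearly_risk for _ in range(years))
        for _ in range(trials)
    )

for risk in (0.001, 0.05):
    survivors = count_survivors(risk)
    print(f"true yearly risk {risk}: {survivors}/100000 observers survive, "
          f"and every one of them updates toward 'it was safe'")
```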

: "If we had died in an existential catastrophe, we wouldn't be around to observe it. Hence we can't conclude anything from our survival about the odds of existential risk."

Imagine that the world is either  (low risk of existential catastrophe) or  (high risk of existential catastrophe). Then  would argue that  is the same as : our survival provides no evidence of the world being safe.

 does not claim that  is talking about how we should reason given that we have observed that we have survived.

If you still think this is splitting hairs, consider the difference between 

 and 

, of course.

 claims that 

As far as our updates are concerned,

As a consequence, for the set X of all possibilities, at least from our perspective. The fact that we observe that we survived provides us no additional evidence for anything, from our perspective.

You hopefully see the problem.

It's not evidence that us surviving was likely or unlikely. It's not even evidence, at least to us.

: "If we had won the lottery, a lottery-losing us wouldn't be around to observe it. Hence we can't conclude anything from our loss about the odds of winning the lottery."

We do not always update in favor of the idea that we won the lottery- winning the lottery isn't as highly entangled with whether we're updating as whether or not we're still alive.

I can't predict how I'm going to update about how likely it was for me to win the lottery in advance of the lottery results. Whenever I win or lose the lottery, I'm learning something I couldn't have already incorporated into my priors (Well, I could have. But think about a Quantum lottery, perhaps)- something I didn't know before. Evidence.
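For contrast, the same arithmetic sketched for the lottery (the hypotheses and odds are made up): because losers are still around to notice losing, the observation genuinely discriminates between hypotheses.

```python
# Contrast with the lottery (hypotheses and odds are made up for illustration).
# Both outcomes leave an observer around to update, so "I lost" is real evidence.
prior = {"rigged_in_my_favor": 0.5, "fair": 0.5}
p_lose = {"rigged_in_my_favor": 0.1, "fair": 1 - 1e-7}

unnormalized = {h: prior[h] * p_lose[h] for h in prior}
total = sum(unnormalized.values())
posterior = {h: p / total for h, p in unnormalized.items()}
print(posterior)  # losing pushes toward "fair" (~0.91)

# In the survival case, every observer who gets to update observes "I survived"
# with probability ~1 under either hypothesis, so the posterior equals the prior.
```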

A Broader Problem.

There's a broader problem with being surprised about the things you observe, but I'm not sure where to draw the line. It seems like everyone reading this post should be unusually surprised about where they find themselves- you are in a very unusual reference class.

But... you are in a highly unusual reference class. You're not some magical essence plopped down in a randomized vessel- you're literally the algorithm that you are. You cannot be surprised by being yourself- it is highly unsurprising. What's the alternative? You couldn't have been someone else- you couldn't have been born after we develop AGI, or be an AGI yourself, or be an alien, or born in ancient Greece. Otherwise, you would be them, and not you. It doesn't make sense to talk about being something other than you- there is no probability going on.

There's a silly reverse side to this coin. "Well, you shouldn't be surprised you're finding yourself in this situation, no matter how improbable." isn't an explanation either. You can't use the fact that you shouldn't be surprised you're finding yourself in this situation to play defense for your argument.

If I claim that the LHC has a 50% chance of instantly destroying all of reality every time it causes a collision, you can't use "But that's so unlikely! That can't be the case, otherwise we most certainly wouldn't be here!" as an argument. It sounds like you should be able to- that gets the obviously right answer! Unfortunately, "The LHC is constantly dodging destroying all of reality" really is "indistinguishable" from what we observe. Well, that's not quite true- the two just can't be distinguished using that particular argument.

However, I also can't back my claim up by pointing that out. The fact that my claim is not technically incompatible with our observations is not substantial evidence.
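For a sense of scale (the collision count is just a made-up round number), here is the arithmetic behind why the "we almost certainly wouldn't be here" intuition feels so compelling, even though the argument doesn't go through:

```python
# Scale of the LHC claim (collision count is a made-up round number).
# Under "50% chance of destroying reality per collision", the prior probability
# of surviving N collisions is 0.5**N.
N = 100
print(0.5 ** N)  # ~7.9e-31

# The point above: survivors observe "we're still here" either way, so that
# observation can't be used against the claim - and mere compatibility with our
# observations can't be used for it either.
```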

In Conclusion:

Good Wholesome Cosmologist Anthropics is the idea that we shouldn't adopt a model which predicts that we don't exist. It's a specific case of the "We shouldn't adopt a model which predicts something which we know is not true" principle, which is itself a specific case of the "We should try to adopt accurate models" principle.

The arguments themselves might seem a little mysterious, but like Stuart Armstrong said, it's normal.

Also, be aware of its sister: "We shouldn't adopt models which predict that they are themselves wrong" a la Boltzmann Brains.

However, there are two more... interesting... veins of Anthropic Arguments to go mining. 

  1. Not Even Evidence- the Self Sampling Assumption, the Self-Indication Assumption, and so on suffer from this.
  2. Recognizing that some stuff which seems like evidence is Not Even Evidence.
Comments
ike:

Here's my version of your scenario. 

You send out one thousand probes that are all too far apart to have any effect on each other. Each probe flips a coin / selects a random quantum bit. If heads, it creates one billion simulations of you and tells each of them that it got heads. If tails, it creates one simulation of you and tells it that it got tails. And "you" as the person who sent the probes commits suicide right after launch, so you're not counted as part of this. 

Would you agree that this version exhibits the same paradoxical structure as yours, so I can analyze it with priors etc? If not, what would you prefer I change? I want hard numbers so I can actually get numerical output. 

This doesn't require faster than light signaling. If you and the copy are sent away with identical letters that you open after crossing each other's event horizons, you learn what was packed with your clone when you open your letter, which lets you predict what your clone will find.

Nothing here would require the event of your clone seeing the letter to affect you. You are affected by the initial set up. If the clone counterfactually saw something else, this wouldn't affect you according to SIA. It would require some assumptions about the setup to be wrong for that to happen to your clone though.

Another example would be if you learn a star that has crossed your cosmic event horizon was 100 solar masses, it's fair to infer that it will become a black hole and not a white dwarf.

>This doesn't require faster than light signaling. If you and the copy are sent away with identical letters that you open after crossing each other's event horizons, you learn what was packed with your clone when you open your letter, which lets you predict what your clone will find.

>Nothing here would require the event of your clone seeing the letter to affect you. You are affected by the initial set up.

>Another example would be if you learn a star that has crossed your cosmic event horizon was 100 solar masses, it's fair to infer that it will become a black hole and not a white dwarf.

If you can send a probe to a location, radiation, gravitational waves, etc. from that location will also (in normal conditions) be intercepting you, allowing you to theoretically make pretty solid inferences about certain future phenomena at that location. However, we let the probe fall out of our cosmological horizon- information is reaching it that couldn't/can't have reached the other probes, or even the starting position of that probe.

In this setup, you're gaining information about arbitrary phenomena. If you send a probe out beyond your cosmological horizon, there's no way to infer the results of, for example, non-entangled quantum experiments.

I think we may eventually determine the complete list of rules and starting conditions for the universe/multiverse/etc. Using our theory of everything and (likely) unobtainable amounts of computing power, we could (perhaps) uniquely locate our branch of the universal wave function (or similar) and draw conclusions about the outcomes of distant quantum experiments (and similar). That's a serious maybe- I expect that a complete theory of everything would predict infinitely many different instances of us in a way that doesn't allow for uniquely locating ourselves.

However... this type of reasoning doesn't look anything like that. If SSA/SSSA require us to have a complete working theory of everything in order to be usable, that's still invalidating for my current purposes.

For the record, I ran into a more complicated problem which turns out to be incoherent for similar reasons- namely, information can only propagate in specific ways, and it turns out that SSA/SSSA allows you to draw conclusions about what your reference class looks like in ways that defy the ways in which information can propagate. 

>You are affected by the initial set up. If the clone counterfactually saw something else, this wouldn't affect you according to SIA.

This specific hypothetical doesn't directly apply to the SIA- it relies on adjusting the relative frequencies of different types of observers in your reference class, which isn't possible using SIA. SIA still suffers from the similar problem of allowing you to draw conclusions about what the space of all possible observers looks like.

ike:

Can you formulate this as a challenge to SIA in particular? You claim that it affects SIA, but your issue is with reference classes, and SIA doesn't care about your reference class. 

Your probe example is confusingly worded. You include time as a factor but say time doesn't matter. Can you reduce it to the simplest possible that still yields the paradoxical result you want? 

>If you don't see a pop-up, and you think this somehow allows you to justifiably update in favor of no probes having seen <event x>

I don't think SIA says you should update in this manner, except very slightly. If I'm understanding your example correctly, all the probes end up tiling their light cones, so the number of sims is equal regardless of what happened. The worlds with fewer probes having seen x become slightly more likely than the prior, but no anthropic reasoning is needed to get that result. 

In general, I think of SIA as dictating our prior, while all updates are independent of anthropics. Our posterior is simply the SIA prior conditioned on all facts we know about our own existence. Roughly speaking, SSA represents a prior that we're equally likely to exist in worlds that are equally likely to exist, while SIA represents a prior that we're equally likely to be any two observers that are equally likely to exist. 

(Separately, I think a lot of this "existence" talk is misguided and we should be talking about probabilities in an expectations sense only, but that's not really relevant here.) 

>Can you formulate this as a challenge to SIA in particular? You claim that it affects SIA, but your issue is with reference classes, and SIA doesn't care about your reference class. 

The point is that SIA similarly overextends its reach- it claims to make predictions about phenomena that could not yet have had any effect on your brain's operation, for reasons demonstrated with SSA in the example in the post.

Your probability estimates can only be affected by a pretty narrow range of stuff, in practice, and because SIA does not deliberately draw the line of all possible observers around "All possible observers which could have so far had impact on my probability estimates, as evidenced by the speed of light and other physical restrictions on the propagation of information", it unfortunately implies that your probability estimates are corresponding with things which, via physics, they can't be.

Briefly, "You cannot reason about things which could not yet have had an impact on your brain."

SSSA/SSA are more common, which is why I focused on them. For the record, I used an example in which SSSA and SSA predict exactly the same things. SIA doesn't predict the same thing here, but the problem that I gestured to is also present in SIA, but with a less laborious argument.

>Your probe example is confusingly worded. You include time as a factor but say time doesn't matter. Can you reduce it to the simplest possible that still yields the paradoxical result you want? 

Yea, sorry- I'm still editing this post. I'll reword it tomorrow. I'm not sure if I'll remove that specific disclaimer, though.

We could activate the simulated versions of you at any time- whether or not the other members of your reference class are activated at  different times doesn't matter under standard usage of SIA/SSA/SSSA. I'm just including the extra information that the simulations are all spun up at the same time in case you have some weird disagreement with that, and in order to more closely match intuitive notions of identity.

I included that disclaimer because there's questions to be had about time- the probes are presumably in differently warped regions of spacetime, thus it's not so clear what it means to say these events are happening at the same time.

>I don't think SIA says you should update in this manner, except very slightly. If I'm understanding your example correctly, all the probes end up tiling their light cones, so the number of sims is equal regardless of what happened. The worlds with fewer probes having seen x become slightly more likely than the prior, but no anthropic reasoning is needed to get that result. 

Only the probes which see <event x> end up tiling their light cones. The point is to change the relative frequencies of the members of your reference class. Because SSA/SSSA assume that you are randomly selected from your reference class, by shifting the relative frequencies of different future observations within your reference class SSA/SSSA imply you can gain information about arbitrary non-local phenomena. This problem is present even outside of this admittedly contrived hypothetical- this contrived hypothetical takes an extra step and turns the problem into an FTL telephone.

It doesn't seem that there's any way to put your hand on the scale of the number of possible observers, therefore (as previously remarked) this example doesn't apply to SIA. The notion that SIA is overextending its reach- claiming to make justified claims about things we can show, using physics, that you cannot make justified claims about- still applies.

>In general, I think of SIA as dictating our prior, while all updates are independent of anthropics. Our posterior is simply the SIA prior conditioned on all facts we know about our own existence. Roughly speaking, SSA represents a prior that we're equally likely to exist in worlds that are equally likely to exist, while SIA represents a prior that we're equally likely to be any two observers that are equally likely to exist. 

The problem only gets pushed back- we can also assert that your priors cannot be corresponding to phenomena which (up until now) have been non-local to you. I'm hesitant to say that you're not allowed to use this form of reasoning- in practice using SIA may be quite useful. However, it's just important to be clear that SIA does have this invalid implication.

ike:

If you reject both the SIA and SSA priors (in my example, SIA giving 1/3 to each of A, B, and C, and SSA giving 1/2 to A and 1/4 to B and C), then what prior do you give?

Whatever prior you give you will still end up updating as you learn information. There's no way around that unless you reject Bayes or you assert a prior that places 0 probability on the clones, which seems sillier than any consequences you're drawing out here.

>If you reject both the SIA and SSA priors (in my example, SIA giving 1/3 to each of A, B, and C, and SSA giving 1/2 to A and 1/4 to B and C), then what prior do you give?

I reject these assumptions, not their priors. The actual assumptions and the methodology behind them have physically incoherent implications- the priors they assign may still be valid, especially in scenarios where it seems like there are exactly two reasonable priors, and they both choose one of them.

>Whatever prior you give you will still end up updating as you learn information. There's no way around that unless you reject Bayes or you assert a prior that places 0 probability on the clones, which seems sillier than any consequences you're drawing out here.

The point is not that you're not allowed to have prior probabilities for what you're going to experience. I specifically placed a mark on the prior probability of what I expected to experience in the "What if..." section.

If you actually did the sleeping beauty experiment in the real world, it's very clear that "you would be right most often when you woke up" if you said you were in the world with two observers.
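A quick frequency check of that claim, assuming the standard Sleeping Beauty setup (heads: one awakening, tails: two):

```python
import random

# Frequency sketch of the standard Sleeping Beauty setup:
# heads -> one awakening, tails -> two awakenings.
# Per awakening, how often is "I'm in the two-awakening world" correct?
def fraction_correct(trials=100_000):
    correct = total = 0
    for _ in range(trials):
        tails = random.random() < 0.5
        for _ in range(2 if tails else 1):
            total += 1
            correct += tails
    return correct / total

print(fraction_correct())  # ~2/3 of awakenings are in the two-awakening world
```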

ike:

My formulation of those assumptions, as I've said, is entirely a prior claim. 

If you agree with those priors and Bayes, you get those assumptions. 

You can't say that you accept the prior, accept Bayes, but reject the assumption without explaining what part of the process you reject. I think you're just rejecting Bayes, but the unnecessary complexity of your example is complicating the analysis. Just do Sleeping Beauty with the copies in different light cones. 

I'm asking for your prior in the specific scenario I gave. 

>My formulation of those assumptions, as I've said, is entirely a prior claim. 

You can't gain non-local information using any method, regardless of the words or models you want to use to contain that information. 

>If you agree with those priors and Bayes, you get those assumptions. 

You cannot reason as if you were selected randomly from the set of all possible observers. This allows you to infer information about what the set of all possible observers looks like, despite provably not having access to that information. There are practical implications of this, the consequences of which were shown in the above post with SSA.

>You can't say that you accept the prior, accept Bayes, but reject the assumption without explaining what part of the process you reject. I think you're just rejecting Bayes, but the unnecessary complexity of your example is complicating the analysis. Just do Sleeping Beauty with the copies in different light cones. 

It's not a specific case of sleeping beauty. Sleeping beauty has meaningfully distinct characteristics.

This is a real world example that demonstrates the flaws with these methods of reasoning. The complexity is not unnecessary.

>I'm asking for your prior in the specific scenario I gave. 

My estimate is 2/3rds for the 2-Observer scenario. Your claim that "priors come before time" makes me want to use different terminology for what we're talking about here. Your brain is a physical system and is subject to the laws governing other physical systems- whatever you mean by "priors coming before time" isn't clearly relevant to the physical configuration of the particles in your brain.

The fact that I execute the same Bayesian update with the same prior in this situation does not mean that I "get" SIA- SIA has additional physically incoherent implications.

ike:

>This allows you to infer information about what the set of all possible observers looks like

I don't understand why you're calling a prior "inference". Priors come prior to inferences, that's the point. Anyway, there are arguments for particular universal priors, e.g. the Solomonoff universal prior. This is ultimately grounded in Occam's razor, and Occam can be justified on grounds of usefulness. 

>This is a real world example that demonstrates the flaws with these methods of reasoning. The complexity is not unnecessary.

It clearly is unnecessary - nothing in your examples requires there to be tiling, you should give an example with a single clone being produced, complete with the priors SIA gives as well as your theory, along with posteriors after Bayesian updating. 

>SIA has additional physically incoherent implications

I don't see any such implications. You need to simplify and more fully specify your model and example. 

>I don't understand why you're calling a prior "inference". Priors come prior to inferences, that's the point.

SIA is not isomorphic to "Assign priors based on Kolmogorov Complexity". If what you mean by SIA is something more along the lines of "Constantly update on all computable hypotheses ranked by Kolmogorov Complexity", then our definitions have desynced.

Also, remember: you need to select your priors based on inferences in real life. You're a neural network that developed from scattered particles- your priors need to have actually entered into your brain at some point.

Regardless of whether your probabilities entered through your brain under the name of a "prior" or an "update", the presence of that information still needs to work within our physical models and their conclusions about the ways in which information can propagate.

SIA has you reason as if you were randomly selected from the set of all possible observers. This is what I mean by SIA, and is a distinct idea. If you're using SIA to gesture to the types of conclusions that you'd draw using Solomonoff Induction, I claim definition mismatch.

>It clearly is unnecessary - nothing in your examples requires there to be tiling, you should give an example with a single clone being produced, complete with the priors SIA gives as well as your theory, along with posteriors after Bayesian updating. 

I specifically listed the point of the tiling in the paragraph that mentions tiling:

>for you to agree that the fact you don't see a pink pop-up appear provides strong justified evidence that none of the probes saw <event x>

The point of the tiling is, as I have said (including in the post), to manipulate the relative frequencies of actually existent observers strongly enough to invalidate SSA/SSSA in detail.

>I don't see any such implications. You need to simplify and more fully specify your model and example. 

There's phenomena which your brain could not yet have been impacted by, based on the physical ways in which information propagates. If you think you're randomly drawn from the set of all possible observers, you can draw conclusions about what the set of all possible observers looks like, which is problematic.

>I don't see any such implications. You need to simplify and more fully specify your model and example. 

Just to reiterate, my post isn't particularly about SIA. I showed the problem with SSA/SSSA- the example was specified for doing something else.

ike:

>If what you mean by SIA is something more along the lines of "Constantly update on all computable hypotheses ranked by Kolmogorov Complexity", then our definitions have desynced.

No, that's what I mean by Bayesianism - SIA is literally just one form of interpreting the universal prior. SSA is a different way of interpreting that prior. 

>Also, remember: you need to select your priors based on inferences in real life. You're a neural network that developed from scatted particles- your priors need to have actually entered into your brain at some point.

The bootstrap problem doesn't mean you apply your priors as an inference. I explained which prior I selected. Yes, if I had never learned about Bayes or Solomonoff or Occam I wouldn't be using those priors, but that seems irrelevant here. 

>SIA has you reason as if you were randomly selected from the set of all possible observers.

Yes, this is literally describing a prior - you have a certain, equal, prior probability of "being" any member of that set (up to weighting and other complications). 

>If you think you're randomly drawn from the set of all possible observers, you can draw conclusions about what the set of all possible observers looks like

As I've repeatedly stated, this is a prior. The set of possible observers is fully specified by Solomonoff induction. This is how you reason regardless of if you send off probes or not. It's still unclear what you think is impermissible in a prior - do you really think one can't have a prior over what the set of possible observers looks like? If so, you'll have some questions about the future end up unanswerable, which seems problematic. If you specify your model I can construct a scenario that's paradoxical for you or dutchbookable if you indeed reject Bayes as I think you're doing. 

Once you confirm that my fully specified model captures what you're looking for, I'll go through the math and show how one applies SIA in detail, in my terms. 

ike:

>Only the probes which see <event x> end up tiling their light cones.

The version of the post I responded to said that all probes eventually turn on simulations. Let me know when you have an SIA version, please.

>we can also assert that your priors cannot be corresponding to phenomena which (up until now) have been non-local to you

The up until now part of this is nonsense - priors come before time. Other than that, I see no reason to place such a limitation on priors, and if you formalize this I can probably find a simple counterexample. What does it even mean for a prior to correspond to a phenomenon?

All SIA is doing is asserting events A, B, and C are equal prior probability. (A is living in universe 1 which has 1 observer, B and C are living in universe 2 with 2 observers and being the first and second observer respectively. B and C can be non-local.)

Briefly, "You cannot reason about things which could not yet have had an impact on your brain."

If you knew for a fact that something couldn't have had an impact, this might be valid. But in your scenarios, these could have had an impact, yet didn't. It's a perfectly valid update.

You should simplify to having exactly one clone created. In fact, I suspect you can state your "paradox" in terms of Sleeping Beauty - this seems similar to some arguments people give against SIA there, claiming one does not acquire new evidence upon waking. I think this is incorrect - one learns that one has woken in the SB scenario, which on SIA's priors leads one to update to the thirder position.

>The version of the post I responded to said that all probes eventually turn on simulations. 

The probes which run the simulations of you without the pop-up run exactly one. The simulation is run "on the probe."

>Let me know when you have an SIA version, please.

I'm not going to write a new post for SIA specifically- I already demonstrated a generalized problem with these assumptions.

>The up until now part of this is nonsense - priors come before time. Other than that, I see no reason to place such a limitation on priors, and if you formalize this I can probably find a simple counterexample. What does it even mean for a prior to correspond to a phenomenon?

Your entire brain is a physical system; it must abide by the laws of physics. You are limited in what your priors can be by this very fact- there is some stuff that, by those very laws, could not yet have affected the positions of the particles in your brain.

The fact that you use some set of priors is a physical phenomenon. If human brains acquire information in ways that do not respect locality, you can break all of the rules, acquire infinite power, etc.

Up until now refers to the fact that the phenomena have, up until now, been unable to affect your brain.

I wrote a whole post trying to get people to look at the ideas behind this problem, see above. If you don't see the implication, I'm not going to further elaborate on it, sorry.

>All SIA is doing is asserting events A, B, and C are equal prior probability. (A is living in universe 1 which has 1 observer, B and C are living in universe 2 with 2 observers and being the first and second observer respectively. B and C can be non-local.)

SIA is asserting more than events A, B, and C are equal prior probability.

Sleeping Beauty and these hypotheticals here are different- these hypotheticals make you observe something that is unreasonably unlikely in one hypothesis but very likely in another, and then show that you can't update your confidences in these hypotheses in the dramatic way demonstrated in the first hypothetical.

You can't change the number of possible observers, so you can't turn SIA into an FTL telephone. SIA still makes the same mistake that allows you to turn SSA/SSSA into FTL telephones, though. 

>If you knew for a fact that something couldn't have had an impact, this might be valid. But in your scenarios, these could have had an impact, yet didn't. It's a perfectly valid update.

There really couldn't have been an impact. The versions of you that wake up and don't see pop-ups (and their brains) could not have been affected by what's going on with the other probes- they are outside of one another's cosmological horizon. You could design similar situations where your brain eventually could be affected by them, but you're still updating prematurely.

I told you the specific types of updates that you'd be allowed to make. Those are the only ones you can justifiably say are corresponding to anything- as in, are as the result of any observations you've made. If you don't see a pop-up, not all of the probes saw <event x>, your probe didn't see <event x>, you're a person who didn't see a pop-up, etc. If you see a pop-up, your assigned probe saw <event x>, and thus at least one probe saw <event x>, and you are a pop-up person, etc.

However, you can't do anything remotely looking like the update mentioned in the first hypothetical. You're only learning information about your specific probe's fate, and what type of copy you ended up being.

>You should simplify to having exactly one clone created. In fact, I suspect you can state your "paradox" in terms of Sleeping Beauty - this seems similar to some arguments people give against SIA there, claiming one does not acquire new evidence upon waking. I think this is incorrect - one learns that one has woken in the SB scenario, which on SIA's priors leads one to update to the thirder position.

You can't simplify to having exactly one clone created. 

There is a different problem going on here than in the SB scenario. I mostly agree with the 1/3rds position- you're least inaccurate when your estimate for the 2-Observer scenario is 2/3rds. I don't agree with the generalized principle behind that position, though. It requires adjustments, in order to be more clear about what it is you're doing, and why you're doing it.

ike:

>The fact that you use some set of priors is a physical phenomenon.

Sure, but irrelevant. My prior is exactly the same in all scenarios - I am chosen randomly from the set of observers according to the Solomonoff universal prior. I condition based on my experiences, updating this prior to a posterior, which is Solomonoff induction. This process reproduces all the predictions of SIA. No part of this process requires information that I can't physically get access to, except the part that requires actually computing Solomonoff as it's uncomputable. In practice, we approximate the result of Solomonoff as best we can, just like we can never actually put pure Bayesianism into effect. 

Just claiming that you've disproven some theory with an unnecessarily complex example that's not targeted towards the theory in question and refusing to elaborate isn't going to convince many. 

You should also stop talking as if your paradoxes prove anything. At best, they present a bullet that various anthropic theories need to bite, and which some people may find counter-intuitive. I don't find it counter-intuitive, but I might not be understanding the core of your theory yet. 

>SIA is asserting more than events A, B, and C are equal prior probability.

Like what? 

I'm going to put together a simplified version of your scenario and model it out carefully with priors and posteriors to explain where you're going wrong.