It's a standard assumption in anthropic reasoning that, effectively, we simultaneously exist in every place in Tegmark IV that simulates this precise universe (see e.g. here).
How far does this reasoning go?
Suppose that the universe's state is described by low-level variables x_1, …, x_n. However, your senses are "coarse": you can only view and retain the memory of variables y_1, …, y_m, where m < n and each y_i is a deterministic function of some subset of the x_j.
Consider a high-level state Y*, corresponding to each y_i being assigned some specific value. For any Y*, there's an equivalence class of low-level states precisely consistent with it.
Given this, if you observe Y*, is it valid to consider yourself as simultaneously existing in all low-level states consistent with Y*?
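As a minimal sketch of this setup (the sensor function, the state space, and all names here are invented toys, not anything from the post's references): low-level states are tuples of bits, a coarse observation function maps each to a high-level value, and the equivalence class for an observation Y* is just the preimage of that value.

```python
from itertools import product

def coarse_obs(low_state):
    # Hypothetical coarse sensor: only the parity of each half of the
    # low-level state is visible at the high level.
    half = len(low_state) // 2
    return (sum(low_state[:half]) % 2, sum(low_state[half:]) % 2)

# Enumerate all low-level states over 4 binary variables x_1..x_4.
low_states = list(product([0, 1], repeat=4))

# Equivalence class of low-level states consistent with observing y_star.
y_star = (1, 0)
eq_class = [s for s in low_states if coarse_obs(s) == y_star]

print(len(low_states), len(eq_class))  # 16 low-level states, 4 in the class
```

An observer who only ever sees `y_star` has no way, even in principle, to tell the four members of `eq_class` apart.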
Note that, so far, this is isomorphic to the scenario from Nate's post, which considers all universes that differ only by the choice of gauge (which is undetectable from "within" the system) to be equivalent.
Now let's examine increasingly weirder situations based on the same idea.
Scenario 1:
- Consider two Everett branches, A and B. They differ only by the exact number of photons in your room: B has an extra photon.
- Suppose that we gave the entire history of your observations over your lifetime to AIXI, which simulates all universes consistent with your observations. Suppose that, in the end, it's only able to narrow it down to "A or B".
- Does that mean you currently simultaneously exist in both branches?
- Importantly, note that the crux here isn't whether you, a bounded agent, are able to consciously differentiate between A and B.
- That is: Suppose that, in B, the extra photon hits your eye and makes you see a tiny flash of red. If so, then, even though you likely won't make any conscious inferences about the photon, that'd still create a difference between the sensory streams, which AIXI (an unbounded computation) would be able to use to distinguish between A and B.
- Similarly, if the existence of the extra photon causes a tiny divergence ten years down the line, which will lead to a different photon hitting your eye and your seeing a tiny red flash, this will likewise create a difference that AIXI would be able to use.
- But if there's never even a bit of difference between your sensory streams, are A and B equivalent for the purpose of whether you exist in them?
I'm inclined to bite this bullet: yes, you exist in all universes consistent with your high-level observations, even if their low-level states differ.
Scenario 2: if you completely forget a detail, does the set of universes you're embedded in grow? Concretely:
- Similar setup as before: an extra photon, you see a tiny red flash, but then forget about it. In the intermediate time during which you perceived and remembered it, you took no actions that caused a divergence between A and B, and your neural processes erased the memory near-completely, such that the leftover divergence between A and B will likewise never register to your conscious senses.
- AIXI, if fed the contents of your mind pre-photon, would only narrow it down to A-or-B. If fed the contents of your mind while you're remembering the flash, it'd be able to distinguish between A and B. If fed the contents post-forgetting, we're back to indistinguishability between A and B.
- So: does that mean you existed in both A and B before observing the photon, then got "split" into an A-self and a B-self, and then "merged back" once you forgot?
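The "split then merge" story can be written down as a toy model (all structure here is an illustrative assumption, not a claim about how AIXI actually operates): track which branches are consistent with the observer's retained memory contents at three times.

```python
# What the observer's mind contains at three times, in each branch.
# In branch B the flash is perceived at t1 but erased by t2, so the
# memory states of A and B coincide again after forgetting.
memory_by_time = {
    "A": {"t0": ("pre",), "t1": ("pre",),         "t2": ("pre",)},
    "B": {"t0": ("pre",), "t1": ("pre", "flash"), "t2": ("pre",)},
}

def consistent_branches(mind_contents, time):
    # Branches an unbounded predictor can't rule out, given only the
    # observer's current memory contents.
    return [b for b in memory_by_time
            if memory_by_time[b][time] == mind_contents]

print(consistent_branches(("pre",), "t0"))          # ['A', 'B']: not yet split
print(consistent_branches(("pre", "flash"), "t1"))  # ['B']: split
print(consistent_branches(("pre",), "t2"))          # ['A', 'B']: merged back
```

The toy makes the crux explicit: the "equivalence class of branches you exist in" is a function of your current memory state, not of the branches' low-level histories.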
I'm inclined to bite this bullet too, though it feels somewhat strange. Weird implication: you can increase the amount of reality-fluid assigned to you by giving yourself amnesia.[1]
Scenario 3: Now imagine that you're a flawed human being, prone to confabulating/misremembering details, and also you don't hold the entire contents of your memories in your mind all at the same time. If I ask you whether you saw a small red flash 1 minute ago, and you confirm that you did, will you end up in a universe where there's an extra photon, or in a universe where you've confabulated this memory? Or in both?
Scenario 4: Suppose you observe some macro-level event, such as learning that there are 195 countries in the world. Suppose there are similar-ish Everett branches where there are only 194 internationally recognized countries. This difference isn't small enough to get lost in thermal noise. The existence vs. non-existence of an extra country doubtlessly left countless pieces of side-evidence in your conscious memories, such that AIXI would be able to reconstruct the country's (non-)existence even if you're prone to forgetting or confabulating the exact country-count.
... Or would it? Are you sure that the experiential content you're currently perceiving, and the stuff currently in your working memory, anchor you only to Everett branches that have 195 countries?
Sure, if you went looking through your memories, you'd doubtlessly uncover some details that would distinguish a branch where you confabulated an extra country from a branch where it really exists. But you hadn't been doing that before reading the preceding paragraphs. Was the split made only when you started looking? Will you merge again once you unload these memories?
This setup seems isomorphic, in the relevant sense, to the initial setup where you only perceive the high-level variables y_1, …, y_m. In this case, we just model you as a system with even more "coarse" senses.[2] Which, in turn, is isomorphic to the standard assumption of simultaneously existing in every place in Tegmark IV that simulates this precise universe.
One move you could make here is to claim that "you" only identify with systems that have some specific personality traits and formative memories. As a trivial example, you could claim that a viewpoint which is consistent with your current perceptions and working-memory content, but which, upon querying its memories for its name, experiences remembering "Cass" as the answer, is not really "you".
But then, presumably you wouldn't consider "I saw a red flash one minute ago" part of your identity, else you'd consider naturally forgetting such a detail a kind of death. Similarly, even some macro-scale details like "I believe there are 195 countries in the world" are presumably not part of your identity. A you who confabulated an extra country is still you.
Well, I don't think this is necessarily a big deal, even if true. But it's relevant to some agent-foundation work I've been doing, and I haven't seen this angle discussed before.
The way it can matter: Should we expect to exist in universes that abstract well, by the exact same argument that we use to argue that we should expect to exist in "alt-simple" universes?
That is: suppose there's a class of universes in which the information from the "lower levels" of abstraction becomes increasingly less relevant to higher levels. It's still "present" on a moment-to-moment basis, such that an AIXI which retained the full memory of an embedded agent's sensory stream would be able to narrow things down to a universe specified up to low-level details.
But the actual agents embedded in such universes don't have such perfect memories. They constantly forget the low-level details, and presumably "identify with" only high-level features of their identity. For any such agent, is there then an "equivalence class" of agents that are different at the low level (details of memories/identity), but whose high-level features match enough that we should consider them "the same" agent for the purposes of the "anthropic lottery"?
For example, suppose there are two Everett branches that differ by whether you saw a dog run across your yard yesterday. The existence of an extra dog doubtlessly left countless "microscopic" traces in your total observations over your lifetime: AIXI would be able to tell the universes apart. But suppose our universe is well-abstracting, and this specific dog didn't set off any butterfly effects. The consequences of its existence were "smoothed out", such that its existence vs. non-existence never left any major differences in your perceptions. Only various small-scale details that you've forgotten or that don't matter.
Does it then mean that both universes contain an agent that "counts as you" for the purposes of the "anthropic lottery", such that you should expect to be either of them at random?
If yes, then we should expect ourselves to be agents that exist in a universe that abstracts well, because "high-level agents" embedded in such universes are "supported" by a larger equivalence class of universes (since they draw on reality fluid from an entire pool of "low-level" agents).
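A toy illustration of that last claim, under the assumption that "reality fluid" is spread uniformly over low-level universes (all names and counts here are invented): the measure induced on a high-level agent is proportional to the size of the equivalence class of low-level universes supporting it.

```python
from collections import Counter

# Hypothetical toy multiverse: each low-level universe is tagged with the
# high-level agent it contains. A well-abstracting agent is realized by
# many low-level micro-variants; a brittle one by few, because its
# low-level details leak into its high-level features.
low_level_universes = (
    ["agent_well_abstracting"] * 8
    + ["agent_brittle"] * 2
)

# Uniform weight over low-level universes induces a measure over
# high-level agents proportional to equivalence-class size.
counts = Counter(low_level_universes)
total = sum(counts.values())
measure = {agent: n / total for agent, n in counts.items()}
print(measure)  # {'agent_well_abstracting': 0.8, 'agent_brittle': 0.2}
```

Under this (assumed) uniform prior, you'd expect to find yourself as the well-abstracting agent 80% of the time, purely because more low-level universes "pool into" it.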
So: are there any fatal flaws in this chain of reasoning? Undesirable consequences to biting all of these bullets that I'm currently overlooking?
- ^
Please don't actually do that.
- ^
As an intuition-booster, imagine that we implemented some abstract system that got only very sparse information about the wider universe. For example, a chess engine. It can't look at its code, and the only inputs it gets are the moves the players make. If we imagine that there's a conscious agent "within" the chess engine, the only observations of which are the chess moves being made, what "reason" does it have to consider itself embedded in our universe specifically, as opposed to any other universe in which chess exists? Including universes with alien physics, et cetera.
An important question here is: what is the point of being "more real"? Does having a higher measure give you a better acausal bargaining position? Do you terminally value more realness? Are you less vulnerable to catastrophes? Do you want to make sure your values are optimized harder?
I consider these, except for the terminal sense, to be rather weak as far as motivations go.
Acausal Bargaining: Imagine a bunch of nearby universes with instances of 'you'. They all have variations: some very similar, others developed in directions that seem a bit strange to the rest. Still identifiably 'you' by a human notion of identity. Some of them became researchers, others investors, a few artists, writers, and a handful of CEOs.
You can model these as variations on some shared utility function: U + α_i, where U is shared and α_i is the individual utility function. Some of them are more social, others more cynical, and so on. A believable amount of human variation that won't necessarily converge to the same utility function on reflection (but quite close).
For a human, losing memories so that you are more real is akin to each branch chopping off its α_i. They lose the memories of a wonderful party that changed their opinions, they no longer remember the horrors of a war, and so on.
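A minimal numeric sketch of the U + α_i decomposition (the outcomes, branch names, and numbers are all invented for illustration): before "chopping off" α_i, the branch-selves disagree about the best outcome; after zeroing it, their preferences coincide.

```python
# Shared component U of each branch-self's utility function.
U = {"fund_research": 10.0, "fund_art": 10.0}

# Idiosyncratic components alpha_i, one per branch-self.
alphas = {
    "researcher": {"fund_research": 3.0, "fund_art": -1.0},
    "artist":     {"fund_research": -1.0, "fund_art": 3.0},
}

def utility(branch, outcome, forgot=False):
    # Forgetting is modeled as zeroing out the idiosyncratic term.
    a = 0.0 if forgot else alphas[branch][outcome]
    return U[outcome] + a

# Each branch's preferred outcome, before and after "forgetting".
before = {b: max(U, key=lambda o: utility(b, o)) for b in alphas}
after = {b: max(U, key=lambda o: utility(b, o, forgot=True)) for b in alphas}
print(before)  # {'researcher': 'fund_research', 'artist': 'fund_art'}
print(after["researcher"] == after["artist"])  # True: preferences coincide
```

The sketch also shows what is lost: once α_i is gone, nothing in the merged utility function distinguishes the researcher's local priorities from the artist's.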
Everyone may take the simple step of losing all the minor memories that have no effect on the utility function, but if you want more bargaining power, do you continue further? The hope is that this would make your coalition easier to locate, more visible to "logical sight", and that this increased bargaining power would thus ensure that, at the least, your important shared values are optimized harder than they could be if you were a disparate group of branches.
I think this is sometimes correct, but often not.
From a simple computationalist perspective, increasing the measure of the 'overall you' is of little matter. The part that bargains, your rough algorithm and your utility function, is already shared: U is shared among all your instances already; some of you just have considerations that pull in other directions (the α_i). This is the same core idea as the FDT explanation of why people should vote: despite not being clones of you, there is a group of people who share reasoning similar to yours. Getting rid of your memories in the voting case does not help you!
For the Acausal Bargaining case, there is presumably some value in being simpler. But that more likely means you should bargain 'nearby' in order to present a computationally cheaper value function 'far away'. That is, similar to forgetting, you appear as if you have some shared utility function, but without actually forgetting, and are thus still able to optimize for α_i in your local universe. As well, the bargained utility function presented far away (where there is less logical sight into your cluster of universes) is unlikely to be the same as U.
So, overall, my argument would be that forgetting does give you more realness. If at 7:59AM a large chunk of universes decide to replace part of their algorithm with a specific coordinated one (like removing a memory), then that algorithm is instantiated across more universes. But, from a decision-theoretic perspective, I don't think that matters too much? You already share the important decision-theoretic parts, even if the whole algorithm is not shared.
From a human perspective, we may care about this as a value of wanting to 'exist more' in some sense. I think this is a reasonable enough value to have, but that it is often satisfied: sharing your decision methods and 99.99% of your personality is enough.
My main question of whether this is useful beyond a terminal value for existing more concerns quantum immortality, about which I am more uncertain.