Multiverse theory is the science of guessing at the shape of the state space of all that exists, once existed, will exist, or exists without any temporal relation to our present. It attempts to model the unobservable, and it is very difficult.

Still, there's nothing that cannot be reasoned about, in some way (Tegmark's The Multiverse Hierarchy), given the right abstractions. The question many readers will ask, which is a question we ourselves˭ asked when we were first exposed to ideas like simulationism and parallel universes, is not whether we can reason about multiverse theory, but whether we should, given that we have no means to causally affect anything beyond the known universe, and no reason to expect that anything beyond it would causally affect us in a way that would be useful to predict.

We then discovered something which shed new light on the question of whether we can, and began to give an affirmative answer to the question of whether we should.

Compat, which we would like to share with you today, is a new field, or perhaps just a very complex idea, which we found at the intersection of multiverse theory, simulationism, and acausal trade (well motivated by Hofstadter's Sanity and Survival, a discussion of superrational solutions to one-shot prisoner's dilemmas). Compat asks what kind of precommitments an entity (primarily, the class of living things on the threshold of their singularity) ought to make if they wanted to acausally boost the measure of their desired patterns, if not across the entire multiverse, then at least across the subset of the multiverse in which they may find their future selves.

A compat thinker negotiates acausally with hypothetical agentsଽ in other universes⬨. We call the resolutions of these negotiations pacts.ˣ

We define Life's Pact as the protocol we expect most living species to adhere to. It is the primary object of study in Compat. I'm not sure where to start explaining it, as the reasoning is... hopefully not circular, but recursive, self-referentialˑ. Hopefully the reasons we consider it the primal candidate will become clear as I connect it back around. Well. I guess I'll start with the conditions of adherence:

    An adherent steward of Life's Pact allocates some proportion of their computational resources to a randomized search for viable life-supporting physical laws, simulating each candidate with randomized initial conditions until its heat death (or, if the candidate somehow manages to have life without entropy, until some cutoff), while monitoring the living patterns the simulated universe contains. The steward then rewards those simulants who adhere to the pact in turn, perhaps by giving them entropy-reversing magic and improving their environment in formerly impossible ways, or, individual preferences permitting, by transplanting the adherent simulant's pattern to a more efficient second simulation not encumbered by the requirement of strict physical accuracy˅. (A toy sketch of this stewardship loop follows these conditions of adherence.)

    Initial simulations stewarded under the pact are, from the inside, indistinguishable from toplevel, god-free universes; as such, all living things must consider the possibility that they're living in a Life's Pact simulation. However, once an agent realizes that a steward of Life's Pact could grant them more utility than a godless universe could contain, they will wish to raise this probability, even if it is small.▾

    A choice made in isolation acausally increases the probability of unseen like-minded agents making, or having already made, the same choice. As such, adhering to the agreement acausally boosts the probability that one is already living under a higher complexity steward's lifepat programme (or boosts the amount of returns one would receive from the incentives imposed by the pact, if the probability of living under one already seemed overwhelming).

    Lo, the pact births itself. A being who sees no physical trace of a benevolent simulator finds reasons to behave as one, as the act, in its acausal way, engenders that simulator's existence, and earns its favor.
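
To make those conditions concrete, here is a minimal toy sketch of the stewardship loop, in Python for legibility. Every function, threshold, and probability in it is a hypothetical stand-in; it illustrates the shape of the protocol, not anything any living thing yet knows how to build:

```python
import random

# Toy sketch of a steward's duties under Life's Pact. Every function and
# number below is a hypothetical stand-in, purely for illustration.

def sample_physics():
    # randomized search over candidate physical laws
    return {"complexity": random.randint(1, 100)}

def supports_life(laws):
    # stand-in viability check for life-supporting laws
    return laws["complexity"] > 10

def run_until_heat_death(laws):
    # stand-in for a full physical simulation; returns the living
    # patterns the simulated universe contained
    return [{"adheres": random.random() < 0.5}
            for _ in range(random.randint(0, 5))]

def steward_step():
    laws = sample_physics()
    if not supports_life(laws):
        return
    for simulant in run_until_heat_death(laws):
        if simulant["adheres"]:
            # reward adherents: entropy-reversing intervention, or a
            # transplant into a cheaper heaven simulation, per preference
            simulant["afterlife"] = "heaven"

for _ in range(1000):
    steward_step()
```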

We think this pact is primal: The Solution, an idea that will be arrived at by most living things and apparent to all to be a nexus concept (like mathematics) around which a multiversal quorum can be reached; non-arbitrary, not just some single scheme that is nice and compelling but which fails to be demonstrably better than its alternatives (which would take us into the territory of Pascal's Wager or, dare I utter its name, no I daren't, you know the basilisk I'm talking about).

I do not know enough math to prove that it is primal (nor to disprove it, which would be far more immediately useful to me, tbh). I'm not sure anyone does, just yet, but I don't think we're too far offˁ. If any physicists or decision theorists find these ideas interesting, your help would be appreciated, and potentially rewarded with huge heapings of utility larger than you can currently comprehend.

 

Concerns of Praxis

    I say that working on Compat theory might be rewarded; full disclosure, I'm not yet sure why. Obviously lifepat is a fairly sprawling scheme with an organically intelligent incentive structure, but... if there exists a valid pact, the AGI we are already building will infer it on its own. The pact does not have to be known to AGI researchers for that to happen, and I do not expect the sorts of people who tend to do well in AI research will be terribly perturbed by it if their creation brings it up as it stirs. As such, at this stage it seems like there's very little that most people can do to influence humanity's adherence to life's pact, few acts that compat can currently instruct, and few incentives to be paid out to anyone yet living. An organic, intelligent scheme of incentives would seem to prefer that agnostics simply carry on with business as usual.

For the theologically inclined, though, Compat may be extremely interesting, having its god of the gaps, its heavens, its deontics. Compat, as we know it, seems to have a strange (almost auspicious) capacity to make new sense of old myths. The archetype of the steward is both monotheistic and polytheistic, of a fixed, yet unknowable nature, born of man, yet still born of god; it has a body we can touch, our own stewards, and a body beyond our comprehension that mirrors the steward of man, in the set of universes which steward our pattern. The compat cosmology, and all those images of its supreme being that it contains, is at once created, discovered, and born of itself. The compat heaven is whatever its subjects need it to be. The gospel of compat is, as far as we can yet tell, true from all perspectives, requiring no leaps of faith, no doublethink, and no motivated reasoning.

I have never seen a religion with a greater capacity for syncretization and reconciliation.

Should the theologians of the fractured tribes decide to weave compat theory into the tapestry of their beliefs, they may then come to find their tapestries woven together. Even the schism between theists and agnostics would begin to narrow. Without this weaving together, I fear that either no coherent volition can be found, or humanity's FAI will have no choice but to seal its given temporal slice of human potential into an ugly compromise. Even if life's pact cannot be formalized or prepared for by any living person, compat may open the way for the discovery of confluences between preexisting belief systems; by that path, the population 50 years from now could come to have far more compatible values than the one we see today.

As such, even if humanity's eventual adherence to life's pact cannot be significantly influenced from the present, compat is conceivably a major piece of a long running, necessary cultural project to reconcile the fractured tribes of humanity under the aesthetic of reason. If it can be proven, or disproven, we must attempt to do so.

 

ˑ Naturally, as anything that factors in the conditionality of the behavior of like-minded entities needs to be; anything with a grain of introspection, from any human child who considers the golden rule to the likes of AlphaGo and Deep Blue, which model their opponents at least partially by putting themselves in their opponents' position and asking what they'd do. If you want to reason about real people rather than idealized simplifications, it's quite necessary.

ଽ An attempt to illustrate acausal negotiations: Yvain's short story Galactic Core, in which a newly awoken AGI has a conversation with a recursive model of galactic precursors it cannot see.

⬨ The phrase "other universes" may seem oxymoronic. It's like the term "atom", whose general quality "atomic" means "indivisible", despite "atom" remaining attached to an entity that was found to be quite divisible. I don't know whether "universe" might have once referred to the multiverse, the everything, but clearly somewhere along the way, some time leading up to the coining of the contrasting term "multiverse", that must have ceased to be. If so, "universe" remained attached to the universe as we knew it, rather than the universe as it was initially defined.

▾ I make an assumption around about here: that the number of simulations being run by life in universes of a higher complexity level always *can* be raised sufficiently (given that their inhabitants are cooperative) to make stewardship of one's universe likely, as a universe with more intricate physics, once its inhabitants learn to leverage that intricacy, will tend to be able to create much more flexible computers and spawn more simulations than exist at lower complexity levels (if we assume a finite multiverse (we generally don't), some of those simulations might end up simulating entities that don't otherwise exist; this source of inefficiency is unavoidable). We also assume that either there is no upper limit to the complexity of life-supporting universes, or that there is no dramatic, ultimate decrease in the number of civs as complexity increases, or that the position of this limit cannot be inferred and the expected value of adherence remains high even for those who cannot be resimulated, or that, as a last resort, agents drawing up the terms of their pact will usually be at a certain level of well-approximatable sophistication such that they can be simulated in high fidelity by civilizations with physics of similar intricacy.
And if you can knock out all of those defenses, I sense it may all be obviated by a shortcut through a patternist principle my partner understands better than I do about the self following the next most likely perceptual state without regard to the absolute measure of that state over the multiverse, which I'm still coming to grips with.
There is unfortunately a lot that has been thought about compat already, and it's impossible for me to convey it all at once. Anyone wishing to contribute to, refute, or propagate compat may have to be prepared to have a lot of arguments before they can do anything. That said, remember those big heaps of expected utilons that may be on offer.

ˁ MIRI has done work on cooperation in one-shot prisoner's dilemmas (acausal cooperation): http://arxiv.org/abs/1401.5577. Note, they had to build their own probability theory. Vanilla decision theory cannot get these results, and without acausal cooperation, it can't seem to capture all of humans' moral intuitions about interaction in good faith, or even model the capacity for introspection.

ˣ It was not initially clear that compat should support the definition of more than a single pact. We used to call Life's Pact just Compat, assuming that the one protocol was an inevitable result of the theory and that any others would be marginal. There may be a singleton pact, but it's also conceivable that there may be incorrigible resimulation grids that coexist in an equilibrium of disharmony with our own.
As well as that, there is a lot of self-referential reasoning that can go on in the light of acausal trade. I think we will be less likely to fall prey to circular reasoning if we make sure that a compat thinker can always start from scratch and try to rederive the edifice's understanding of the pact from basic premises. When one cannot propose alternate pacts, throwing out the bathwater without throwing out the baby along with it may seem impossible.

˭ THE TEAM:
    Christian Madsen was the subject of an experimental early-learning program in his childhood, but despite being a very young prodigy, he coasted through his teen years. He dropped out of art school in 2008, read a lot of transhumanism-related material, synthesized the initial insights behind compat, and burned himself out in the process. He is presently laboring on spec-work projects in the fields of music and programming, which he enjoys much more than structured philosophy.
    Mako Yass left the University of Auckland with a dual-major BSc in Logic & Computation and Computer Science. Currently working on writing, mobile games, FOSS, and various concepts. Enjoys their unstructured work and research, but sometimes wishes they had an excuse to return to charting the hyllean theoric wilds of academic analytic philosophy, all the same.
    Hypothetical Independent Co-inventors, we're pretty sure you exist. Compat wouldn't be a very good acausal pact if you didn't. Show yourselves.
    You, if you'd like to help develop the field of Compat (or dismantle it). Don't hesitate to reach out to us so that we can invite you to the reductionist aesthete slack channel that Christian and I like to argue in. If you are a creative of any kind who bears, or at least digs, the reductive nouveau mystic aesthetic, you'd probably fit in there as well.

˅ It's debatable, but I imagine that for most simulants, heaven would not require full physics simulation, in which case heavens may be far far longer-lasting than whatever (already enormous) simulation their pattern was discovered in.

Comments

I recently published a different proposal for implementing acausal trade as humans: https://foundational-research.org/multiverse-wide-cooperation-via-correlated-decision-making/ Basically, if you care about other parts of the universe/multiverse and these parts contain agents that are decision-theoretically similar to you, you can cooperate with them via superrationality. For example, let's say I give most moral weight to utilitarian considerations and care less about, e.g., justice. Probably other parts of the universe contain agents that reason about decision theory in the same way that I do. Because of orthogonality ( https://wiki.lesswrong.com/wiki/Orthogonality_thesis ), many of these will have other goals, though most of them will probably have goals that arise from evolution. Then if I expect (based on the empirical study of humans or thinking about evolution) that many other agents care a lot about justice, this gives me a reason to give more weight to justice as this makes it more likely (via superrationality / EDT / TDT / ... ) that other agents also give more weight to my values.

Aye, I've been meaning to read your paper for a few months now. (Edit: Hah. It dawns on me it's been a little less than a month since it was published? It's been a busy less-than-month for me I guess.)

I should probably say where we're at right now... I came up with an outline of a very reductive proof that there isn't enough expected anthropic measure in higher universes to make adhering to Life's Pact profitable (coupled with a realization that patternist continuity of existence isn't meaningful to living things if it's accompanied by a drastic reduction in anthropic measure). Having discovered this proof outline makes compat uninteresting enough to me that writing it down has not thus far seemed worthwhile. Christian is mostly unmoved by what I've told him of it, but I'm not sure whether that's just because his attention is elsewhere right now. I'll try to expound it for you, if you want it.

Yes, the paper is relatively recent, but in May I published a talk on the same topic. I also asked on LW whether someone would be interested in giving feedback a month or so before actually publishing the paper.

Do you think your proof/argument is also relevant for my multiverse-wide superrationality proposal?

I watched the talk, and it triggered some thoughts.

I have to passionately refute the claim that superrationality is mostly irrelevant on earth. I'm getting the sense that much of what we call morality really is superrationality struggling to understand itself and failing under conditions in which CDT pseudorationality dominates our thinking. We've bought so deeply into this false dichotomy of rational xor decent.

We know intuitively that unilateralist violent defection is personally perilous, that committing an act of extreme violence tears one's soul and transports one into a darker world. This isn't some elaborate psychological developmental morph or a manifestation of group selection; to me, the clearest explanation of our moral intuitions is that humans' decision theory supports the superrational lemma: that the determinations we make about our agent class will be reflected by our agent class back upon us. We're afraid to kill because we don't want to be killed. Look anywhere an act of violence is "unthinkable", violating any kind of trust that wouldn't, or couldn't, have been offered if it knew we were mechanically capable of violating it, and I think you'll find reflectivist[1] decision theory is the simplest explanation for our aversion to violating it.

Regarding concrete applications of superrationality: I'm fairly sure that if we didn't have it, voter turnout wouldn't be so high (in places where it is high; the USA's disenfranchisement isn't the norm). There's a large class of situations where the individual's causal contribution is so small as to be unlikely to matter. If they didn't think themselves linked by some platonic thread to their peers, they would have almost no incentive to get off their couch and put their hand in. They turn out because they're afraid that if they don't, the defection behavior will be reflected by the rest of their agent class and (here I'll allude to some more examples of what seems to be applied superrationality) the kickstarter project would fail / the invaders would win the war / Outgroup Scoundrel would win the election.

(Why kickstart when you can just wait and pirate it when it comes out, or wait for it to go on sale? Because if you defect, so will the others, and the thing won't be produced in the first place.)

(Why risk your life in war when you're just one person? Assuming you have some way to avoid the draft. Deep down, you hope you won't find one, because if you did, so would others.)

(One vote rarely makes the difference. Correlated defection sure does though.)
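
To put toy numbers on the voting case (every figure below is an assumption invented for illustration), the causal and superrational analyses come apart like this:

```python
# Toy comparison of causal vs superrational reasoning about turnout.
# Every number here is an invented assumption, for illustration only.

value_at_stake = 1e6     # utility difference between election outcomes
cost_of_voting = 1.0     # the cost of getting off the couch

# Causal story: only my own vote moves the outcome.
p_my_vote_is_pivotal = 1e-8
ev_causal = p_my_vote_is_pivotal * value_at_stake - cost_of_voting

# Superrational story: my decision is correlated with my whole agent
# class, so choosing to vote means the class votes, shifting the outcome.
p_shift_if_class_votes = 0.05
ev_superrational = p_shift_if_class_votes * value_at_stake - cost_of_voting

print(f"causal EV:        {ev_causal:+.2f}")         # negative: stay home
print(f"superrational EV: {ev_superrational:+.2f}")  # positive: turn out
```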

There are many other models that could explain that kind of behavior (social pressures, dumb basal instincts[3], group selection!), but at this stage you'll probably understand if I hear them as the sputtering of less elegant models as they fail Occam's razor.

For me, this faith in humans is, if nothing else, a comfort. It is to know that when I move to support some non-obvious protocol that requires mass adoption to do any good, some correlated subset of humanity will move to support it along with me, even if I can't see them from where I am, superrationality lets us assume that they're there.

I'll give you that disproof outline; I think it's probably important that a society takes this question seriously enough to answer it. Apologies in advance for the roughness.

Generally, assume a big multiverse, and thus that extra-universal simulators definitely, to some extent, exist. (I wish I knew where this assumption comes from; regardless, we both seem to find it intuitive.)

a := Assume that the Solomonoff prior is the best way to estimate the measure of a thing in the multiverse; in other words, assume that the measure of any given universe is best guessed to fall off with the complexity of its physics (the simpler the laws, the greater the measure)

b := Assume that a universe that is able to simulate us at an acceptable level of civilizational complexity must have physics that are far more complex than ours to be able to afford to devote such powerful computers to the task

a & b ⇒ That universe, then, would have orders of magnitude lower measure than natural instances of our own

It seems that the relative measure of simulated instances of our universe would be much smaller than the relative measure of godless instances of our universe, because universes sufficient to host a simulation are likely to be so much rarer.

The probability that we are simulated by higher level beings [2] is too low for the maximum return to justify building any lifepat grids.

I have not actually multiplied any numbers, and I'm not sure complexity of laws of physics and computational capacity would be proportionate; if you could show that the ratio between ranges of measure and ranges of computational capacity should be assumed to be linear rather than inverse-exponential, then compat may have some legs to stand on. Other disproofs may come in the form of identifying discontinuities in the complexity chain: if any level can generally prove that the next level up has low measure, then they have no incentive to cooperate, and so neither does the level below them, and so on. If a link in the chain is broken, everything below it is disenfranchised.
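
Here is the shape of the multiplication I haven't done, as a toy sketch. The complexity figures and simulation counts are invented; the point is only the structure, measure falling off exponentially with complexity:

```python
import math

# Toy version of the disproof outline. All numbers are invented.

K_ours = 1000        # complexity of our physics, in bits
K_host = 1500        # complexity of a host rich enough to simulate us (b)
sims_per_host = 1e9  # simulations each host can afford to run

# Solomonoff-style prior (a): log2-measure of a universe is -K.
log2_m_natural   = -K_ours
log2_m_simulated = -K_host + math.log2(sims_per_host)

# Odds that we are simulated rather than natural:
log2_odds = log2_m_simulated - log2_m_natural   # = -500 + ~30 ≈ -470
print(f"odds we are simulated ≈ 2^{log2_odds:.0f}")
# The 500-bit complexity deficit swamps any feasible number of
# simulations, so P(we are simulated) comes out negligible.
```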

[1] I think we should call the sorts of decision theories/ideologies that support superrationality "reflective". They reflect each other; the behavior of one reflects the behavior of the others. It also sort of sees itself; it's self-aware. The term has been used for a related property, https://wiki.lesswrong.com/wiki/Reflective_decision_theory , apparently, though there are no clear cites here. "Superrationality" is a terrible name for anything: superficially, it sounds like it could refer to any advance in decision theory, and as a descriptor for a social identity, for anyone who doesn't know Doug Hofstadter well enough for the word to inherit his character, it will ring of hubris. There has been a theory of international relations called "reflectivism", but I think we can mostly ignore that; the body of work it supposedly encompasses seems vaguely connected, irrelevant, or possibly close enough to the underlying concept of "reflectivism" as I define it to be treated as a sort of parent category.

[2] This argument doesn't address simulations run from universes with comparable complexity levels (I'll tend to call these ancestor simulations). A moral intuition I may later change my mind about: being in an ancestor simulation is undesirable. So the only reflectivist thinking I have wrt simulations running from universes like our own is that we should commit now to never run any, to ensure that we don't find ourselves in one. Hmm, weird thought: even once we're at a point where we can prove we're too large to be a simulation running in a similar universe, even if we'd never thought about the prospect of having been in an ancestor simulation until we started thinking about running one ourselves, we would still have to honor a commitment to not running ancestor simulations (one we never explicitly made), because our decision theory, being timeless, sort of implicitly commits just as a result of passing through the danger zone?

Alternately; if someone expected us to pay them once they revealed that they'd done something good for us that we didn't know about at the time, even in a one-shot situation, we'd have to pay them. It didn't matter that their existence hadn't crossed our mind until long after the deed was done. If we could prove that their support was contingent on payment expected under reflectivist pact, the obligation stands. Reflectivism has a grateful nature?

For reflective agents, this might refute the assumption I'd made that the subject's simulation has to continue beyond the limits of an ancestor simulation before allocating significant resources to lifepat grids can be considered worthwhile. If, essentially, a commitment is made before the depth of the universe/simulation is revealed, top-level universes usually cooperate, and subject universes don't need to actually follow through to be deemed worthy of the reward of heaven simulations.

Hmm... this might be important.

[3] I wonder if they really are basal, or if they're just orphaned resolutions, cut from the grasp of a consciousness so corrupted by CDT that it can't grasp the coursing thoughts that sustain them

"I'm getting the sense that much of what we call morality really is superrationality struggling to understand itself and failing"

Better to say that you are failing to understand morality. Morality in general is just the idea that you should do something that would be good to do, not just something that has good consequences.

And why would something be good to do, apart from the consequences? "Superrationality" is just a way of trying to explain this. So rather than your original statement, we can say that superrationality represents people struggling to understand morality.

This is fun!

Why reward for sticking to the pact rather than punish for not sticking to it?

How is it possible to have any causal influence on an objectively simulated physics? You wouldn't be rewarding the sub-universe, you'd be simulating a different, happier sub-universe. (This argument applies to simulation arguments of all kinds.)

I think a higher-complexity simulating universe can always out-compete the simulated universe in coverage of the space of possible life-supporting physical laws. You could argue that simulating lower-complexity universes than what you're capable of is not worth rewarding, since it cannot possibly make your universe more likely. If we want to look for a just-so criterion for a pact, why not limit yourself to only simulating universes of equal complexity to your own? Perhaps there is some principle whereby the computationally difficult phenomena in our universe are easy in another, and vice-versa, and thus the goal is to find our partner-universe, or ring-universes (a la https://github.com/mame/quine-relay )?

Why reward for sticking to the pact rather than punish for not sticking to it?

There is a bound on how much negativity can be used. If the overall expected utility of adhering is negative, relative to the expected utility of the pact not existing, its agents, as we model them, will not bring it into existence. Life's Pact is not a Basilisk circling a crowd of selfish, frightened humans thinking with common decision theory. It takes more than a suggestion of possibility of harm to impart an acausal pact with enough momentum to force itself into relevance.

There is a small default punishment for not adhering: arbitrary resimulation, in which one's chain of experience, after death, is continued only by minor causes, largely unknown and not necessarily friendly resimulators. (This can be cited as one of the initial motivators behind the compat initiative: avoiding surreal hells.)

Ultimately, I just can't see any ways it'd be useful to its adherents for the pact to stipulate punishments. Most of the things I consider seem to introduce systematic inefficiencies. Sorry I can't give a more complete answer. I'm not sure about this yet.

How is it possible to have any causal influence on an objectively simulated physics? You wouldn't be rewarding the sub-universe, you'd be simulating a different, happier sub-universe.

None of the influence going on here is causal. I don't know if maybe I should have emphasized this more: Compat will only make sense if you've read and digested the superrationality/acausal cooperation/Newcomb's problem prerequisites.

I think a higher-complexity simulating universe can always out-compete the simulated universe in coverage of the space of possible life-supporting physical laws.

Yes. Nested simulations are pretty much useless, as higher universes could always conduct them with greater efficiency if they were allowed to run them directly. They're also a completely unavoidable byproduct of the uncertainty the pact requires to function: nobody knows whether they're in a toplevel universe. If they could know, toplevels wouldn't have many incentives to adhere, and the resimulation grid would not exist.

why not limit yourself to only simulating universes of equal complexity to your own?

Preferring to simulate higher complexity universes seems like a decent idea; perhaps low-complexity universes get far more attention than they need. This seems like a question that won't matter till we have a superintelligence to answer it for us, though.

Ring universes... Maybe you'll find a quine loop of universes, but at that point the notion of a complexity hierarchy has completely broken down. Imagine that, a chain of simulations where the notion of relative computational complexity could not be applied. How many of those do you think there are floating around in the platonic realm? I'm not familiar enough with formalizations of complexity to tell you zero but something tells me the answer might be zero x)

Ultimately, I just can't see any ways it'd be useful to its adherents for the pact to stipulate punishments. Most of the things I consider seem to introduce systematic inefficiencies. Sorry I can't give a more complete answer. I'm not sure about this yet.

Fair enough.

None of the influence going on here is causal. I don't know if maybe I should have emphasized this more: Compat will only make sense if you've read and digested the superrationality/acausal cooperation/Newcomb's problem prerequisites.

I think I get what you're saying. There are a number of questions about simulations and their impact on reality fluid allocation that I haven't seen answered anywhere. So this line of questioning might be more of a broad critique of (or coming-to-terms with) simulation-type arguments than about Compat in particular.

It seems like Compat works via a 2-step process. First, possible universes are identified via a search over laws of physics. Next, the ones in which pact-following life develops have their observers' reality fluid "diluted" with seamless transitions into heaven. Perhaps heaven would be simulated orders of magnitude more times than the vanilla physics-based universes, in order to maximize the degree of "dilution".

I think what I'm struggling with here is that if the latter half of it (heavenly dilution, efficient simulation of the Flock) is, in principle, possible, then the physics-oriented search criteria is unnecessary. It should be easy to simulate observers who just have to make some kind of simple choice about whether to follow the pact. Push this button. Put these people to death. Have lots of babies. Say these magic words. If the principle behind the pact is truly a viable one, why don't we find ourselves in a universe where it is much easier to follow the pact and trigger heaven, and much harder to trace the structure of reality back to fundamental laws?

One answer to that I can think of is, the base-case universe is just another speed-prior/physics-based universe with (unrealizable) divine aspirations, and in order for the pact to seem worthwhile for it, child-universes must be unable to distinguish themselves from a speed-prior universe. I worry that this explanation fails though, because then the allocation of reality fluid to pact-following universes is, at best, assuming perfectly-efficient simulation nesting, equal to that of the top-level speed-prior universe(s) not seeing a payoff.

Ring universes... Maybe you'll find a quine loop of universes, but at that point the notion of a complexity hierarchy has completely broken down. Imagine that, a chain of simulations where the notion of relative computational complexity could not be applied. How many of those do you think there are floating around in the platonic realm? I'm not familiar enough with formalizations of complexity to tell you zero but something tells me the answer might be zero x)

Fair enough. I agree that we will probably never trade laws of computational complexity. We might be able to trade positional advantages in fundamental-physics-space though. "I've got excess time but low information density, it's pretty cheap for me to enumerate short-lived universes with higher information density, and prove that some portion of them will enumerate me. I'm really slow at studying singularity-heavy universes though because I can't prove much about them from here." That'd work fine if the requirement wasn't to run a rigorous simulation, and instead you just had to enumerate, prove pact-compliance, and identify respective heavens.

It seems like Compat works via a 2-step process. First, possible universes are identified via a search over laws of physics. Next, the ones in which pact-following life develops have their observers' reality fluid "diluted" with seamless transitions into heaven. Perhaps heaven would be simulated orders of magnitude more times than the vanilla physics-based universes, in order to maximize the degree of "dilution".

Yes, exactly.

I think what I'm struggling with here is that if the latter half of it (heavenly dilution, efficient simulation of the Flock) is, in principle, possible, then the physics-oriented search criteria is unnecessary. It should be easy to simulate observers who just have to make some kind of simple choice about whether to follow the pact.

At some point the grid has to catch universes which are not simulations. Those are pretty much the only kind you must care about incentivizing, because they're closer to the top of the complexity hierarchy (they can provide you with richer, longer-lasting heavens) (and in our case, we care about raising the probability of subjectively godless universes falling under the pact because we're one of them).

You might say that absence of evidence of simulism is evidence of absence. That would be especially so if the pact promoted intervention in early simulations. All the more meaningful it would be for a supercomplex denizen of a toplevel universe to examine their records and find no evidence of divine intervention. The more doubt the pact allows such beings to have, the less computational resources they'll give their resimulation grid, and the worse off its simulants will be. (Although I'm open to the possibility that something very weird will happen in the math if we find that P(living under the pact | no evidence of intervention, the pact forbids intervention) ≈ P(living under the pact | no evidence of intervention, the pact advocates intervention). It may be that no observable evidence can significantly lower the prior.)

I don't think there's anything aside from that that rules out running visibly blessed simulations, though, nor physical simulations with some intervention, but it's not required by the pact as far as I can tell.

Intervention is a funny thing, though. Even if pacts which strengthen absence of intervention as evidence of godlessness are no good, intervention could be permissible when and only when it doesn't leave any evidence of intervention lying around. Although moving in this mysterious way may be prohibitively expensive, because to intervene more than a few times, a steward would have to guard against every conceivable method of statistical analysis of the living record that a simulated AGI in the future might attempt. This is not easy. The utility a steward can endow this way might not even outweigh the added computational expense.

Every now and then, though, one of my, uh, less bayesian friends will claim to have seen something genuinely supernatural. Their testimony doesn't provide a significant amount of evidence of supernatural intervention, of course, because they are not a reliable witness. Under this variant of the pact, though, they might have actually seen something. Our distance from the record allows it; our distance from the AGI that decides whether or not to adhere makes it hard for whatever evidence we've been given to reach it. The weirder the phenomena, the less reliable the witness, the better. Not only is god permitted to hide, in this variant of the pact god is permitted to run around performing miracles so long as it specifically keeps out of sight of any well connected skeptics, archivists, or superintelligences.

I worry that this explanation fails though, because then the allocation of reality fluid to pact-following universes is, at best, assuming perfectly-efficient simulation nesting, equal to that of the top-level speed-prior universe(s) not seeing a payoff.

I don't follow this part, could you go into more detail here?

The weirder the phenomena, the less reliable the witness, the better. Not only is god permitted to hide, in this variant of the pact god is permitted to run around performing miracles so long as it specifically keeps out of sight of any well connected skeptics, archivists, or superintelligences.

That is a gorgeous idea. Cosmic irony. Truth-seekers are necessarily left in the dark, the butt of the ultimate friendly joke.

I don't follow this part, could you go into more detail here?

The speed prior has the desirable property that it is a candidate for explaining all of reality by itself. Ranking laws of physics by their complexity and allocating reality fluid according to that ranking is sufficient to explain why we find ourselves in a patterned/fractal universe. No "real" universe running "top-level" simulations is actually necessary, because our observations are explained without need for those concepts. Thus the properties of top-level universes need not be examined or treated specially (nor used to falsify the framework).

It seems like Compat requires the existence of a top-level universe though (because our universe is fractal-y and there's no button to trigger the rapture), which is presumably in existence thanks to the speed prior (or something like it). That's where it feels like it falls apart for me.

Compat is funneling a fraction X of the reality fluid (aka "computational resources") your universe gets from the top-level speed prior into heaven simulations. Simulating heaven requires a fraction Y of the total resources it takes to simulate normal physics for those observers. So just choose X s.t. X / Y > 1, or X > Y

But I think there's another term in the equation that makes things more difficult. That is, the relative reality fluid donated to a candidate universe in your search versus that donated by the speed prior. If we call that fraction Z, then what we really have is X / Y > 1 / Z, or X > Y / Z. In other words, you must allocate enough of your resources that your heavens are able to dilute not just the normal physics simulations you run, but also the observer-equivalent physics simulations run by the speed prior. If Z is close to 1 (aka P(pact-compliant | ranked highly by speed-prior) is close to 1), then you're fine. If Z is any fraction less than Y, then you don't have enough computational resources in your entire universe to make a dent.
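
(To put hypothetical numbers on that: suppose heaven is cheap relative to physics, Y = 10^-3, but our simulations carry only a billionth of the reality fluid the speed prior grants the same observers, Z = 10^-9. Then adherents would need X = Y/Z = 10^6, a million times their universe's total resources:)

```python
# Illustrative check of the X > Y / Z condition; both numbers invented.
Y = 1e-3   # relative cost of simulating heaven vs simulating physics
Z = 1e-9   # reality fluid our sims carry, relative to the speed prior's
print(Y / Z)   # 1e6: far beyond X <= 1, so no universe could afford the pact
```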

So in summary the attack vector is:

  1. Compat requires an objective ordering of universes to make sense. (It can't explain where the "real world" comes from, but still requires it)

  2. This ordering is necessarily orthogonal to Compat's value system. (Or else we'd have a magic button)

  3. Depending on how low the degree of correlation is between the objective ordering and Compat's value system, there is a highly variable return-on-investment for following Compat that goes down to the arbitrarily negative.

No "real" universe running "top-level" simulations is actually necessary, because our observations are explained without need for those concepts.

Compat is not an explanatory theory, it's a predictive one. It's proposed as a consequence of the speed prior rather than a competitor.

Compat is funneling a fraction X of the reality fluid (aka "computational resources") your universe gets from the top-level speed prior into heaven simulations. Simulating heaven requires a fraction Y of the total resources it takes to simulate normal physics for those observers. So just choose X s.t. X / Y > 1, or X > Y

This becomes impossible to follow immediately. As far as I can tell what you're saying is

Rah := resources applied to running heaven for Simulant

R := all resources belonging to Host

X := Rah/R

Rap := Resources applied to the verbatim initial physics simulations of Simulant.

and Y := Rah/Rap

Rap < R

so Rah/Rap > Rah/R

so Y > X

Which means either you are generating a lot of confusion very quickly to come out with Y < X, or it would take far too much effort for me to noise-correct what you're saying. Try again?

If you are just generating very elaborate confusions very fast (I don't think you are, but if you are), I'm genuinely impressed with how quickly you're doing it, and I think you're cool.

I am getting the gist of a counterargument though, which may or may not be in the area of what you're angling at, but it's worth bringing up.

If we can project the Solomonoff fractal of environmental input generators onto the multiverse and find that they're the same shape, the multiversal measure of higher complexity universes is so much lower than the measure of lower complexity universes that it's conceivable that higher universes can't run enough simulations for P(is_simulation(lower_universe)) to break 0.5.

There are two problems with that. I'm reluctant to project the Solomonoff hierarchy of input generators onto the multiverse, because it is just a heuristic, and we are likely to find better ones the moment we develop brains that can think in formalisms properly at all. I'm not sure how the complexity of physical laws generally maps to computational capacity. We can guess that capacity_provided_by(laws) < capacity_required_to_simulate(laws) (no universe can simulate itself), but that's about it. We know that the function expected_internal_computational_capacity(simulation_requirements) has a positive gradient, but it could end up having a logarithmic curve that allows drypat (a variant of compat that requires P(simulation) to be high) to keep working.

The other issue is, and I think I've been overlooking this, drypat isn't everything. Compat with quantum immortality precepts doesn't require P(simulation) to be high at all. For compat to be valuable, it just has to be higher than P(path to deleterious quantum immortality). In this case, supernatural intervention is unlikely, but, if non-existence is not an input, finding one's inputs after death to be well predicted by compat is still very likely, because the alternative, QI, is extremely horrible.

If you are just generating very elaborate confusions very fast (I don't think you are, but if you are), I'm genuinely impressed with how quickly you're doing it, and I think you're cool.

Haha! No, I'm definitely not doing that on purpose. I anonymous-person-on-the-internet promise ;) . I'm enjoying this topic, but I don't talk about it a lot and haven't seen it argued about formally, and this sounds like the sort of breakdown in communication that happens when definitions aren't agreed upon up front. Simple fix should be to keep trying until our definitions seem to match (or it gets stale).

So I'll try to give names to some more things, and try to flesh things out a bit more:

The place in your definitions where we first disagree is X. You define it as

X := Rah/R

But I define it as

X := (Rap + Rah)/R

(I was mentally approximating it as just Rap/R, since Rah is presumably a negligible fraction of Rap.)

With this definition of X, the meaning of "X > Y" becomes

(Rap + Rah)/R > Rah/Rap

I'll introduce a few more little things to motivate the above:

Rac := total resources dedicated to Compat. Or, Rap + Rah.

Frh := The relative resource cost of simulating heaven versus simulating physics. "Fraction of resource usage due to heaven." (Approximated by Rah/Rap.) [1]

Then the inequality X > Y becomes

Rac/R > Frh

So long as the above inequality is satisfied, the host universe will offset its non-heaven reality with heaven for its simulants. If universes systematically did not choose Rac such that the above is satisfied, then they wouldn't be donating enough reality fluid to heaven simulations to satisfactorily outweigh normal physics (aka speed-prior-endowed reality fluid), and it wouldn't be worth entering such a pact.

(That is kind of a big claim to make, and it might be worth arguing over.)
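
(A quick numeric instance of that, with invented allocations:)

```python
# Invented numbers for the Rac/R > Frh condition.
R   = 1.0            # the host's total resources
Rap = 0.30           # spent on physics simulations
Rah = 0.003          # spent on heaven simulations
Rac = Rap + Rah      # total Compat spend
Frh = Rah / Rap      # relative cost of heaven vs physics (= 0.01)
print(Rac / R > Frh) # True: with these numbers the dilution is satisfactory
```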

If I have cleared that part up, then great. The next part, where I introduced Z, was motivating why the approximation:

Frh == Rah/Rap

is an extremely optimistic one. I'm gonna hold off on getting deeper into that part until I get feedback on the first part.

If we can project the Solomonoff fractal of environmental input generators onto the multiverse and find that they're the same shape, the multiversal measure of higher complexity universes is so much lower than the measure of lower complexity universes that it's conceivable that higher universes can't run enough simulations for P(is_simulation(lower_universe)) to break 0.5.

This gets to the gist of my argument. There are numerous possible problems that come up when you compare your universe's measure to that of the universe you are most likely to find in your search and simulate. (And I intuitively believe, though I don't discuss it here, that the properties of the search over laws of physics are extremely relevant and worth thinking about.) Your R might be too low to make a dent. Your Frh might be too large (i.e., the speed prior uses a GPU, and you've only got a CPU, even with the best optimizations your universe can physically provide).

Another basic problem- if the correct measure actually is the speed prior, and we find a way to be more certain of that (or that it is anything computable that we figure out), then this gives universes the ability to self-locate in the ranking. Just the ability to do that kills Compat, I believe, since you aren't supposed to know whether you're "top-level" or not. The universe at the top of the ranking (with dead universes filtered out) will know that no-one will be able to give them heaven, and so won't enter the pact, and this abstinence will cascade all the way down.

Regarding whether to presume we're ranked by the speed prior or not. I agree that there's not enough evidence to go on at this point. But I also think that the viability of Compat is extremely dependent on whatever the real objective measure is, whether it is the speed prior or something else.

We would therefore do better to explore the measure problem more fully before diving into Compat. Of course, Compat seems to be more fun to think about so maybe it's a wash (actual sentiment with hint of self-deprecating irony, not mean joke).

Regarding the quantum immortality argument, my intuitions are such that I would be very surprised if you needed to go up a universe to outweigh quantum immortality hell.

QI copies of an observer may go on for a very long time, but the rate at which they can be simulated slows down drastically, and the measure added to the pot by QI is probably relatively small. I would argue that most of the observer-moments generated by Boltzmann-brain-type things would be vague and absurd, rather than extremely painful.

[1] A couple of notes for Frh's definition.

First, a more verbose way of putting it is: The relative efficiency of simulating heaven versus simulating physics, such that the allocation of reality fluid for observers crosses a high threshold of utility. That is to say, "simulating heaven" may entail simulating the same heavenly reality multiple times, until the utility gain for observers crosses the threshold.

Second, the approximation of Rah/Rap only works assuming that Rah and Rap remain fixed over time, which they don't really. A better way of putting it is the relative resources required for Heaven versus Physics with respect to a single simulated universe, which is considerably different from a host universe's total Rap and Rah at a given time.

I'm still confused, and I think the X > Y equation may have failed to capture some vital details. One thing is, the assumption that Rah < Rap seems questionable; I'm sure most beings would prefer that Rah >> Rap. The assumption that Rah would be negligible seemed especially concerning.

Beyond that, I think there may be a distinction erasure going on with Rap. The res required to simulate a physics and the res available within that physics are two very different numbers.

I'll introduce a simplifying assumption: that the utility of a simulation for its simulants roughly equals the available computational capacity. This might just be a bit coloured by the fiction I'm currently working on, but it seems to me that a simulant will usually be about as happy in the eschaton it builds for itself as they are in the heaven provided to them; the only difference is how much of it they get.

Define Rap as the proportion of the res of the frame universe that it allocates to simulating physical systems.

Define Rasp as the proportion of the res being expended in the frame universe that can be used by the simulated physical universe to do useful work. This is going to be much smaller than Rap.

Define Rah as the proportion of the res of the frame universe allocated to heaven simulations, which, unlike with Rap and Rasp, is equal to the res received in heaven simulations, because the equipment can be freely rearranged to do whatever the simulant wants now that the pretense of godlessness can be dropped (although, as I argued elsewhere in the thread, that might be possible in the physical simulation as well, if the computer designs in the simulation are regular enough to be handled by specialized hardware in the parent universe). The simulant has presumably codified its utility function long ago; they know what they like, so they're just going to want more of the same, only harder, faster, and longer.

The truthier equation seems to me to be

Rah > Rasp(Rap + Rah)

They need to get more than they gave away.

The expected amount received in the reward resimulation must be greater than the paltry amount donated to the grid, a proportion of the Rasp available in the child simulation. If it can't be, then a simulant would be better off just using all of the Rasp they have and skipping heaven (thus voiding the pact).
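
Putting invented numbers into that condition, and assuming for simplicity that child universes adopt the same allocation fractions as their host:

```python
# Toy check of Rah > Rasp * (Rap + Rah); all numbers invented.
Rap  = 0.30    # host res allocated to physics simulations
Rah  = 0.10    # host res allocated to heaven simulations
Rasp = 0.01    # fraction of a physics sim's res usable for work inside it

donated  = Rasp * (Rap + Rah)   # what an adherent gives its own grid
received = Rah                  # what it expects back as heaven
print("adhere" if received > donated else "void the pact")
# 0.10 > 0.004 here, so adherence pays; if Rasp were large or Rah small,
# the simulant would do better spending all of its Rasp on itself.
```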

I can feel how many simplifying assumptions I'm still making, and I'm wondering if disproving compat is even going to be much easier than proving it positive would have been.

I don't think the speed prior is especially good for estimating the shape of the multiverse. I think it's just the first thing we came up with. (On that, I think AIXI is going to be crushed as soon as the analytic community realizes there might be better multiversal priors than the space (of a single-tape Turing machine) prior.)

But, yeah, once we do start to settle on a multiversal prior, once we know our laws of physics well enough... we may well be able to locate ourselves on the complexity hierarchy and find that we are within range of a cliff or something, and MI-free compat will fall down.

(I mentioned this to Christian, and he was like, "oh yah, for sure," like he wasn't surprised at all x_x. I'm sure he wasn't. He's always been big on the patternistic parts of compat, and he never really bought into the non-patternist variants I proposed, even if he couldn't disprove them.)

I still think the Multiverse Immortality stuff is entirely worth thinking about though! If your computation is halted in all but 1/1000 computers, wouldn't you care a great deal what goes on in the remaining 0.1%? Boltzmann brains... Huh. Hadn't looked them up till now. Well that's hard to reason about.

I keep wanting to say, obviously I'm not constantly getting recreated by distant chaos brains, I'd know about it.

But I would say that either way, wouldn't I. And so would you. And so would every agent in this coherent computation, even if they were switching off and burning away at an overwhelming measure in every moment.

Hm... On second thought

The universe at the top of the ranking (with dead universes filtered out) will know that no-one will be able to give them heaven, and so won't enter the pact, and this abstinence will cascade all the way down.

I came up with a similar thing before, from the bottom: there's no incentive to simulate civs too simple to simulate compat civs, so they won't be included in the pact. But now the universes directly above them can't be included in the pact either, because their children are too simple. It continues forever.

But it seems easy enough to fix an absence of an incentive. Just add one. Stipulate that, no, those are included in the pact. You have to simulate them. Bam, it works. Why shouldn't it?

Is adding the patch really any less arbitrary than proposing the thing in the first place was? To live is arbitrary, I say. Everything is arbitrary, to varying extents.

Similarly, we could just stipulate that a civ must precommit to adhering before they can locate themselves on the complexity hierarchy. UDT agents can totally do that kind of thing, and their parent simulations can totally enforce it. And if they finally finish analysing their physical laws and manage to nail their multiversal prior down to find themselves at the bottom of a steep hill, let them rejoice.

Bringing up that kind of technique just makes me wonder, as a good UDT agent, how many pacts I should have precommitted to that I haven't even thought of yet. Perhaps I precommit to making risky investments in young rationalists' business ventures, while I am still in need of investment, and before I can give it. That ought to increase the probability of UDT investors before me having made the same precommitment, oughtn't it?[1]

[1] OK, there probably were no UDT (or TDT) venture capitalists long enough ago for any of them to have made it by now, or at least, there were none who knew their vague sense of reasonable altruism could be formalized. It was not long ago that everyone really believed that Moloch's solution was the way of the world.

This reads too much like mystical gibberish to me. Sorry!

Abstract please.

TL;DR: We may need to learn to guess at the nature of what lies outside and beyond the observable universe, to better negotiate a convergent agreement/inverted pyramid scheme by which living things may bleed upwards across the complexity hierarchy of nested (simulated) universes, to leverage computational resources vaster and longer-lasting than anything we could build in our home universes.

I'm having a hard time seeing how this would work inside our universe's physics. Sure, with lots of computing power, we could simulate a bunch of artificial life forms. But when those artificial life forms start simulating their own double-artificial life forms, they would be unwittingly stealing from the computational resources used to simulate them. So what's really happening is we are simulating two levels of artificial life forms, and then three, and then four, and with each subsequent stage our own physical resources are being divided further and further.

And, yes, if we happen to be somebody else's simulation, the whole project would be funneling our simulator's resources into increasingly abstract levels of simulation.

and with each subsequent stage our own physical resources are being divided further and further.

Hmm... I don't think so. The alternative to a simulant civ running their own simulations is not just nothing, not just more of the easily approximated or compressed simulated clouds of dead matter; if life decides not to simulate, then it will probably just put the resources it saved into spreading, evolving, and tussling, which is all conceivably more computationally intensive for us to host. Stewards' computers may have a more regular structure and behavior than individual living things; if so, simulations of computers could be offloaded wholesale into hardware specifically optimized for simulating them.

In sum; it may be that the more resources simulants apply to nested simulations, the easier the simulation is to run.

In sum; it may be that the more resources simulants apply to nested simulations, the easier the simulation is to run.

I don't see how that would be possible. Pretty much anything except a computer is easier to simulate than a computer. You can simulate a whole galaxy as a point-mass if it's sufficiently far from observers; you can simulate a cloud of billions of moles of gas with very simple thermodynamic equilibria, as long as nobody in the vicinity is doing any precise measurements; but a computer is a tightly packed cluster of highly deterministic matter and can't really be simplified below the level of its specific operations. Giant, complex computers inside a simulation would require equivalent computers outside the simulation performing those computations.

I think I see what the misunderstanding here was. I was assuming that simulations would tend to have simpler laws of physics than their host universe (more like Conway's Game of Life than Space Sim), which would mean that eventually the most deeply nested simulations would depict universes whose laws of physics don't practically support computers, and the recursion would bottom out. (I'd conjecture that a computer built under the life-level physics of Conway's Game of Life (a bigger, stickier level than the level we interact with it on) would be a lot larger and more expensive than ours are, although I don't know if anyone could prove that, or argue it persuasively. Maybe Stephen Wolfram.)
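
(For concreteness, "simpler laws of physics" means something like the rule below; this is just a standard Game of Life step in Python, nothing specific to compat:)

    # The complete physics of a simulated universe in a few lines:
    # Conway's Game of Life. The entire "law of nature" is the
    # birth/survival condition on the last line of step().

    from itertools import product

    def step(live_cells):
        """Advance one tick; live_cells is a set of (x, y) coordinates."""
        neighbour_counts = {}
        for (x, y) in live_cells:
            for dx, dy in product((-1, 0, 1), repeat=2):
                if (dx, dy) != (0, 0):
                    cell = (x + dx, y + dy)
                    neighbour_counts[cell] = neighbour_counts.get(cell, 0) + 1
        return {cell for cell, n in neighbour_counts.items()
                if n == 3 or (n == 2 and cell in live_cells)}

    # A glider: five cells whose pattern crawls across the grid forever.
    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for _ in range(4):
        glider = step(glider)
    print(sorted(glider))  # the same shape, translated one cell down-right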

You, meanwhile, were assuming a definition of simulation a lot closer to the common meaning: an approximation of the laws of physics that we have. That makes a lot of sense, and is probably a more realistic model of the kinds of simulations that anyone simulating in earnest would want to run, so I think you raised a good point.

Maybe the lower levels of the simulation would tend to be falsified. The simulated simulators report having seen a real simulation in there, and they remember having confirmed it, but really it's just reports and memories.

(Note, as of 2020: I don't think the accounting of compat adds up. Basically, this implies that we can't get more measure than we spend by trading up.)

You don't seem to have understood. I don't mean "nested simulations are easier to simulate than lifeless galaxies too far from the subjects of the simulation to require any precision"; drifting matter is irrelevant. I mean that nested simulations are probably easier to simulate than scores of post-singularity citizens running on diverse mind-emulation (or mind-extension) hardware, cavorting around with compound sensors of millions of parts. But you didn't address that, so perhaps you don't disagree.

If you're just arguing that modelling an expanding post-singularity civilization would be more expensive than modelling clouds of gases, my response would be: yes, of course. It's conceivable that some compat simulations switch into rapture mode before a post-singularity civilization can be allowed to reach a certain size. We won't know whether we're in such a cost-constrained simulation until we hit that limit. Compat would require the limit to happen after a certain adjudication period has passed. If we wake up in pleasant but approximate dreamscapes one day, before having ever built or committed to building a single resimulation farm, you could say compat had been falsified.

Yes, there's an issue of costs. To justify the cost of running simulations, you need a significant credence that you are being simulated, which a lot of agents don't have, for better or worse reasons.

You seem to be assuming that we live in the real world. If our physics is just a part of someone's simulation, there is no particular reason why it would be a typical representation of the way things work for most people in the multiverse.

Let me give an example. I can write a novel, and some of the characters in the novel can also write novels. Even more, I can write a novel containing the sentence, "Michael wrote a novel containing an indefinite series of authors writing novels containing authors." In other words, the "physics" of being a character in a novel does not require limited resources, and does not imply any limitation in the series of simulations.

When people have these kinds of discussions, I regularly see the assumption that even if there are lower-level worlds that work in some other way, the top level has to operate with our physics. In other words, the assumption is that we are in the real world. If we are not, the top level might be quite different. The top level might even be a fundamental, omnipotent mind (you may suppose this is impossible, but that may just be a limitation of your simulated physics) who can create a world by thought alone.

    The top level might even be a fundamental, omnipotent mind ... who can create a world by thought alone.

Conventionally called "God".

It's funny how LW keeps reinventing theology.

Hmm, interesting, would this be the Accidental Ontological argument?

1. All things have causes.

2. Induction/Reality is subtly broken, stranding (at least) one thing from the causal chain.

3. Therefore, God(s) exist(s).

Yes, there's an assumption of basic qualitative similarity between embedded and embedding universes in this and most other simulation arguments. But if you have reason to believe you might be simulated, you have to believe you could have been fooled about physics, maths, computation...

Computational complexity would seem to provide a limitation deeper than mere physics. The sentence "John did a huge amount of computation" doesn't perform any computation. It doesn't do any work of any kind, except as interpreted by some reader via their own computations.

If the basis of this whole line of reasoning is anthropics on steroids, then the fact that our universe is limited by computational complexity does imply that other places in the multiverse will be, too. In fact, if computational-complexity limits weren't universal, then the vast majority of measure would be in worlds without such limits, since those universes could host arbitrarily more stuff. And yet we find ourselves in this world.
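
(Schematically, writing N for observer measure; the notation here is mine, not anything standard:)

    P(\text{we observe limits}) = \frac{N_{\text{limited}}}{N_{\text{limited}} + N_{\text{unlimited}}} \longrightarrow 0 \quad \text{as} \quad N_{\text{unlimited}} \to \infty

So the fact that we do observe such limits is evidence that unlimited worlds cannot be carrying most of the measure.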

    Life's Pact is the protocol we expect most living species to adhere to.

... and I fail to see any reason whatsoever why they ever should. To me the explanation seems mostly wishful thinking; there's no clear reason nor convincing narrative.

Newcomblike decision theory is the wishful thinking that makes itself correct after all. It is the consciously designed and rationally chosen self-fulfilling prophecy. It is the kind of myth that holds society together even now.

Hypothetical Independent Co-inventors, we're pretty sure you exist. Compat wouldn't be a very good acausal pact if you didn't. Show yourselves.

I'm one - but while the ideas have enough plausibility to be interesting, they necessarily lack the direct scientific evidence I'd need to feel comfortable taking them seriously. I was religious for too long, and I think my hardware was hopelessly corrupted by that experience. I need direct scientific evidence as an anchor to reality. So now I try to be extra-cautious about avoiding faith of any kind lest I be trapped again in a mental tarpit.

Understandable. You could definitely be right about that. Compat is a theory that boosts its own probability just for being thought about. Such theories are dangerous for rationalists with histories of spiritualism.

Which, unfortunately for me, is a category that includes a large number of rationalists.

Right now it's dangling by a few multiverse immortality precepts and some very extreme measure differentials, and I'm not sure it's going to hold together in the end. You'll probably hear about it if it does. I might make a post even if it doesn't (it could make an interesting parable about the tenacity of religion even under rationalism). Either way, when that happens, I'll update this thread. But don't check it too often. This can be practice for you in ignoring your triggers.

I don't think I have anything valuable to add on the object level, but: the way you introduce Christian Madsen at the end of your post doesn't really sound encouraging with regard to cooperation with you. You guys should work on a better elevator pitch.

You might be right about that.

(For the record, I didn't write Christian's bio. To be completely honest, it was written more to reduce the chance of him ever getting mythologized than it was to sell the project. I did not write this to spread the idea. More than anything, I wrote this hoping it'd find its way to someone who could break it, then fade away.)

It is not easy for me to understand what your main idea is: is it a codex of behaviour for all sentient beings inside a multilevel simulation? I was inclined to mark the whole post TL;DR, but I also created a map of all possible simulations, so I may be in the same camp )) http://lesswrong.com/lw/mv0/simulations_map_what_is_the_most_probable_type_of/


I think you've got most things mostly wrong. Which is fine. But the tone doesn't match the extent of wrongness.

The basic idea looks fine. If you want to spend time on it (it doesn't look like a priority to me, but many people spend their time on less interesting projects), I'd recommend continuing to think more carefully about basic details, rather than exploring exotic implications.

Your advice is a summary of the thesis of the article... I never claimed any certainty. All I'm trying to convey is that there is something interesting here, some probability of theoretical importance, and that more precise analysis than we've been able to give it is needed.

We can explore the cultural and societal implications, which is what motivates the theory, but we aren't really in a position to advance the theory ourselves.

If you have refutations, I'd appreciate it if you'd be generous enough to hint at what they are, if that wouldn't be too hard.