In his essay Epistemic Learned Helplessness, LessWrong contributor Scott Alexander wrote:

Even the smartest people I know have a commendable tendency not to take certain ideas seriously. Bostrom’s simulation argument, the anthropic doomsday argument, Pascal’s Mugging – I’ve never heard anyone give a coherent argument against any of these, but I’ve also never met anyone who fully accepts them and lives life according to their implications.

I can't help but agree with Scott Alexander about the simulation argument. No one has refuted it, ever, in my books. However, this argument carries a dramatic, and in my eyes, frightening implication for our existential situation. 

Joe Carlsmith's essay, Simulation Arguments, clarified some nuances, but ultimately the argument's conclusion remains the same.

When I looked on Reddit for the answer, the attempted counterarguments were weak and disappointing.

It's just that the claims below feel so obvious to me:

  • It is physically possible to simulate a conscious mind.
  • The universe is very big, and there are many, many other aliens.
  • Some aliens will run various simulations.
  • The number of simulations that are "subjectively indistinguishable" from our own experience far outnumbers authentic evolved humans. (By "subjectively indistinguishable," I mean the simulated beings can't tell they're in a simulation.)

When someone challenges any of those claims, I'm immediately skeptical. I hope you can appreciate why those claims feel evident.

Thank you for reading all this. Now, I'll ask for your help. 

Can anyone here provide a strong counter to Bostrom's simulation argument? If possible, I'd like to hear specifically from those who've engaged deeply and thoughtfully with this argument already.

Thank you again.


13 Answers

It is physically possible to simulate a conscious mind.

... but it's expensive, especially if you have to simulate its environment as well. You have to use a lot of physical resources to run a high-fidelity simulation. It probably takes irreducibly more mass and energy to simulate any given system with close to "full" fidelity than the system itself uses. You can probably get away with less fidelity than that, but nobody has provided any explanation of how much less or why that works.
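For calibration, the Landauer limit (which comes up again later in this thread) gives the thermodynamic floor for such costs; how far above that floor a faithful simulation must sit is exactly the open question raised here. A minimal sketch, with the bit-erasure rate as a pure assumption:

```python
# Illustrative only: the Landauer limit says erasing one bit at temperature T
# costs at least k_B * T * ln(2) joules. The bit-erasure rate below is an
# assumed figure, not a claim from this thread.
import math

k_B = 1.380649e-23            # Boltzmann constant, J/K
T = 300.0                     # assumed operating temperature, kelvin
landauer_j_per_bit = k_B * T * math.log(2)

assumed_bit_erasures_per_sec = 1e18   # assumption: scale of a mind-level simulation
power_floor_watts = assumed_bit_erasures_per_sec * landauer_j_per_bit

print(f"Landauer floor: {landauer_j_per_bit:.2e} J per bit erased")
print(f"Power floor at 1e18 erasures/s: {power_floor_watts:.2e} W")
```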

There are other, more interesting and important ways to use that compute capacity. Nobody sane, human or alien, is going to waste it on running a crapton of simulations.

Also, nobody knows that all the simulated minds wouldn't be p-zombies, because, regardless of innumerable pompous overconfident claims, nobody understands qualia. Nobody can prove that they're not a p-zombie, but do you think you're a p-zombie? And do we care about p-zombies?

The universe is very big, and there are many, many other aliens.

If that's true, and you haven't provided any evidence for it, then those aliens have many, many other things to simulate. The measure of humans among random aliens' simulations is going to be tiny if it's not zero.

Some aliens will run various simulations.

Again, that doesn't imply that they're going to run enough of them for them to dominate the number of subjective experiences out there, or that any of them will be of humans.

Future humans, or human AI successors, if there are any of either, will probably also run "various simulations", but that doesn't mean they're going to dump the kind of vast resources you're demanding into them.

The number of simulations that are "subjectively indistinguishable" from our own experience far outnumbers authentic evolved humans.

Um, no? Because all of the premises you're using to get there are wrong.

(By "subjectively indistinguishable," I mean the simulates can't tell they're in a simulation. )

By that definition, a simulation that bounces frictionless billiard balls around and labels them as humans is "subjectively indistinguishable" from our own, since the billiard balls have no cognition and can't tell anything about anything at all. You need to do more than that to define the kind of simulation you really mean.

There are other, more interesting and important ways to use that compute capacity. Nobody sane, human or alien, is going to waste it on running a crapton of simulations.

Counterpoint: speedrunning and things like 'Twitch plays', which are some of the most popular streaming genres in existence, and exist largely because they are unimportant. A TAS speedrunner may well run millions or billions of simulations simply to try to shave off 1s from the record. (An example I like to cite uses 6 CPU-years to bruteforce NES Arkanoid to achieve nearly optimal play. Unfortunately, he doesn't provide the wallclock equivalent, but I strongly suspect that this project alone simulates more minutes of NES Arkanoid than it was ever played by humans. If not, then I'm quite sure at this point that NES Mario has been played in silico OOMs more than by humans. Plenty of projects like 'My First NEAT project' will do a few years or centuries of NES Mario.)
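A hedged back-of-envelope for that suspicion; apart from the 6 CPU-years figure quoted above, every number here (emulator speedup, player counts) is an assumption for illustration:

```python
# Back-of-envelope: does 6 CPU-years of headless NES emulation plausibly exceed
# all human play time of NES Arkanoid? All figures other than "6 CPU-years"
# are assumptions made up for illustration.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

cpu_years = 6
assumed_emulator_speedup = 2000          # headless emulation vs. real time (assumed)
simulated_seconds = cpu_years * SECONDS_PER_YEAR * assumed_emulator_speedup

assumed_players = 1_000_000              # assumed lifetime player count
assumed_hours_each = 2                   # assumed average hours per player
human_seconds = assumed_players * assumed_hours_each * 3600

print(f"simulated play: {simulated_seconds / 3600:.2e} hours")
print(f"assumed human play: {human_seconds / 3600:.2e} hours")
print(f"ratio: {simulated_seconds / human_seconds:.0f}x")
```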

There are other, more interesting and important ways to use that compute capacity. Nobody sane, human or alien, is going to waste it on running a crapton of simulations.

This is a very silly argument, given the sorts of things we use compute capacity for, in the real world, today.

Pick the most nonsensical, absurd, pointless, “shitpost”-quality webcomic/game/video/whatever you can think of. Now find a dozen more like it. (This will be very easy.) Now total up how much compute capacity it takes to make those things happen, and imagine going back to 1950 or whenever, and telling them that, for one teenager to watch one cat video (or whatever else) on their phone takes several orders of magnitude more compute capacity than exists in their entire world, and that not only do we casually spend said compute on said cat video routinely, as a matter of course, without having to pay any discernible amount of money for it, but that in fact we regularly waste similar amounts of compute on nothing at all because some engineer forgot to put a return statement in the right place and so some web page or other process uses up CPU cycles needlessly, and nobody really cares enough to fix it.

People will absolutely waste compute capacity on running a crapton of simulations.

(And that’s without even getting into the “sane” caveat. Insane people use computers all the time! If you doubt this, by all means browse any social media site for a day…)

I haven't heard the p-zombie argument before, but I agree that it is at least some Bayesian evidence that we're not in a sim.

 

  1. We don't know if simulated people will be p-zombies
  2. I am not a p-zombie [citation needed]
  3. It would be very surprising if sims were not p-zombies but everyone in the physical universe is
  4. Therefore the likelihood ratio of being conscious is higher for the real universe than a simulation

Probably 3 needs to be developed further, but this is the first new piece of evidence I've seen since I first encountered the simulation argument in like 2005.

I've never understood why people make this argument:

but it's expensive, especially if you have to simulate its environment as well. You have to use a lot of physical resources to run a high-fidelity simulation. It probably takes irreducibly more mass and energy to simulate any given system with close to "full" fidelity than the system itself uses.

Let's imagine that we crack the minimum requirements for sentience.  I think we already may have accidentally done so, but table that for a moment.  Will it really require that we simulate the entire hum... (read more)

The general problem with Bostrom's argument is that it tries to apply an incorrect probabilistic model. It implicitly assumes independence where there is a causal connection, therefore arriving at a wrong conclusion. Similar to the conventional reasoning in the Doomsday Argument or Sleeping Beauty problems.

For future humans, say in the year 3000, to create simulations of the year 2025, the actual year 2025 first has to happen in base reality. And then all the following years, up to 3000. We know this very well. Not a single simulation can happen unless an actual reality happens first.

And yet Bostrom models our knowledge about this setting as if we were participating in a probability experiment with a random sample drawn from many "simulation" outcomes and one "reality" outcome. The inadequacy of such modelling should be obvious. Consider:

There is a bag with a thousand balls. One red and 999 blue. First a red ball is picked from the bag. Then all the blue balls are picked one by one.

and compare it to

There is a bag with a thousand balls. One red and 999 blue. For a thousand iterations a random ball is picked from the bag.

Clearly, the second procedure is very different from the first. The mathematical model that describes it doesn't describe the first at all, for exactly the same reasons that Bostrom's model doesn't describe our knowledge state.
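A minimal Monte Carlo sketch of the contrast (my own illustration of the point, not from the comment): ask each procedure the same question, "how likely is the ball drawn at step one to be red?"

```python
# Contrast the two drawing procedures above. In the first, the red ball is
# drawn first by construction; in the second, the first draw is a uniform
# random sample from the bag. Same bag, very different models.
import random

N_BALLS, N_TRIALS = 1000, 100_000        # 1 red + 999 blue

def first_draw_sequential():
    return "red"                         # red is always drawn first, by stipulation

def first_draw_random_sample():
    balls = ["red"] + ["blue"] * (N_BALLS - 1)
    return random.choice(balls)          # uniform draw, as in Bostrom-style modelling

for label, draw in [("sequential procedure", first_draw_sequential),
                    ("random-sample procedure", first_draw_random_sample)]:
    reds = sum(draw() == "red" for _ in range(N_TRIALS))
    print(f"{label}: P(first draw is red) ≈ {reds / N_TRIALS:.4f}")
```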

However, this argument carries a dramatic, and in my eyes, frightening implication for our existential situation.

There is not much practical advice following from the simulation argument. One piece I've heard is that we should try to live the most interesting lives possible, so the simulators will not turn our simulation off.

Richard Carrier has written an entire rebuttal of the argument on his blog. He always answers comments under his posts, so if you disagree with some part of the rebuttal, you can just leave a comment there explaining which claim makes no sense to you. Then, he will usually defend the claim in question or provide the necessary clarification.

It looks like he argues, on ethical grounds, against the idea that friendly future AIs will simulate the past, and treats imagining unfriendly AIs torturing past simulations as conspiracy theorizing. I commented the following:

There are a couple of situations in which a future advanced civilization would want to run many past simulations:
1. Resurrection simulation by a Friendly AI. It simulates the whole history of the Earth, incorporating all known data, in order to return to life all the people who ever lived. It can also run a lot of simulations to win a "measure war" against unfrie... (read more)

7jbash
Some of those people may be a bit cheesed off about that, speaking of ethics.

Assuming it believes "measure war" is a sane thing to be worrying about. In which case it disagrees with me.

There seems to be a lot of suffering in the "simulation" we're experiencing here. Where's the cure?

That sounds like a remarkably costly and inefficient way to get not that much information about the Fermi paradox.
2avturchin
I think a more meta-argument is valid: it is almost impossible to prove that all possible civilizations will not run simulations despite having all data about us (or being able to generate it from scratch). Such proof would require listing many assumptions about goal systems and ethics, and proving that under any plausible combination of ethics and goals, it is either unlikely or immoral. This is a monumental task that can be disproven by just one example.

I also polled people in my social network, and 70 percent said they would want to create a simulation with sentient beings. The creation of simulations is a powerful human value.

More generally, I think human life is good overall, so having one more century of human existence is good, and negative utilitarianism is false. However, I am against repeating intense suffering in simulations, and I think this can be addressed by blinding people's feelings during extreme suffering (temporarily turning them into p-zombies). Since I am not in intense suffering now, I could still be in a simulation.

Now to your counterarguments:
1. Here again, people who would prefer never to be simulated can be predicted in advance and turned into p-zombies.
2. While a measure war is unlikely, it by definition generates so much measure that we could be in it. It also solves s-risks, so it's not a bad idea.
3. Curing past suffering is based on complex reassortment of observer-moments, details of which I would not discuss here. Consider that every moment in pain will be compensated by 100 years in bliss, which is good from a utilitarian view.
4. It is actually very cost-effective to run a simulation of a problem you want to solve if you have a lot of computing power.
4Dakara
I am sorry to butt into your conversation, but I do have some points of disagreement.

I think that's a very high bar to set. It's almost impossible to definitively prove that we are not in a Cartesian demon or brain-in-a-vat scenario. But this doesn't mean that those scenarios are likely. I think it is fair to say that more than a mere possibility is required to establish that we are living in a simulation.

I think that some clarifications are needed here. How was the question phrased? I expect that some people would be fine with creating simulations of worlds where people experience pure bliss, but not necessarily our world. I would especially expect this if the possibility of a "pure bliss" world was explicitly mentioned. Something like "would you want to spend resources to create a simulation of a world like ours (with all of its "ugliness") when you could use them to instead create a world of pure bliss?"

Would you say that someone who experiences intense suffering should drastically decrease their credence in being in a simulation? Would someone else reporting to have experienced intense suffering decrease your credence in being in a simulation?

Why would only moments of intense suffering be replaced by p-zombies? Why not replace all moments of non-trivial suffering (like breaking a leg/an arm, dental procedures without anesthesia, etc.) with p-zombies? Some might consider these to be examples of pretty unbearable suffering (especially as they are experiencing it).

From a utilitarian view, why would simulators opt for a Resurrection Simulation? Why not just simulate a world that's maximally efficient at converting computational resources into utility? Our world has quite a bit of suffering (both intense and non-intense), as well as a lot of wasted resources (lots of empty space in our universe, complicated quantum mechanics, etc.). It seems very suboptimal from a utilitarian view.

Why would an Unfriendly AI go through the trouble of actually making us conscious? Surel
2avturchin
We have to create a map of possible scenarios of simulations first; I attempted this in 2015.

I have now created a new poll on Twitter: "If you will be able to create and completely own simulation, you would prefer that it will be occupied by conscious beings, conscious without sufferings (they are blocked after some level), or NPC". For now, the results are:
* Conscious: 18.2%
* Conscious, no suffering: 72.7%
* NPC: 0%
* Will not create simulation: 9.1%
The poll had 11 votes, with 6 days left.

Yes. But I have never in my long life experienced such intense suffering.

No. Memories of intense suffering are not intense.

Yes, only moments. The badness of non-intense suffering is overestimated, in my personal view, but this may depend on the person.

More generally speaking, what you are presenting as global showstoppers are technical problems that can be solved. In my view, individuality is valuable.

As we don't know the nature of consciousness, it may be just a side effect of computation, not a trouble. Also, it may want to have maximal fidelity, or even run biological simulations: something akin to the Zoo solution to the Fermi paradox.

We are living in one of the most interesting periods of history, which surely will be studied and simulated.
1Dakara
If the preliminary results of the poll hold, then that would be pretty in line with my hypothesis of most people preferring to create simulations with no suffering over a world like ours. However, it is pretty important to note that this might not be representative of human values in general, because, looking at your Twitter account, your audience comes mostly from a very specific circle of people (those interested in futurism and AI).

I was mostly trying to approach the problem from a slightly different angle. I didn't mean to suggest that memories of intense suffering are themselves intense. As far as I understand it, your hypothesis was that a Friendly AI temporarily turns people into p-zombies during moments of intense suffering. So, it seems that someone experiencing intense suffering while conscious (p-zombies aren't conscious) would count as evidence against it.

Reports of conscious intense suffering are abundant. Pain from endometriosis (a condition that affects 10% of women in the world) has been so brutal that it made completely unrelated women tell the internet that their pain was so bad they wanted to die (here and here). If moments of intense suffering were replaced by p-zombies, then these women would've just suddenly lost consciousness and wouldn't have told the internet about their experience. From their perspective, it would've looked like this: as the condition progresses, the pain gets worse, and at some point, they lose consciousness, only to regain it when everything is already over. They wouldn't have experienced the intense pain that they reported to have experienced. Ditto for all PoWs who have experienced torture.

That's a totally valid view as far as axiological views go, but for us to be in your proposed simulation, the Friendly AI must also share it. After all, we are imagining a situation where it goes on to perform a complicated scheme that depends on a lot of controversial assumptions. To me, that suggests that AI has so many resource
2avturchin
She will be unconscious, but still send messages about pain. Current LLMs can do this. Also, as it is a simulation, there are recordings of her previous messages, or of a similar woman, so they can be copy-pasted. Her memories can be computed without actually putting her in pain.

Resurrection of the dead is part of the human value system. We would need a completely non-human bliss, like hedonium, to escape this. Hedonium is not part of my reference class and thus not part of the simulation argument.

Moreover, even creating a new human is affected by this argument. What if my children will suffer? So it is basically an anti-natalist argument.
1Dakara
So if I am understanding your proposal correctly, a Friendly AI will make a woman unconscious during moments of intense suffering and then implant her memories of pain. Why would it do that, though? Why not just remove the experience of pain entirely? In fact, why does the Friendly AI seem so insistent on keeping billions of people in a state of false belief by planting false memories? That seems to me like manipulation. The Friendly AI could just reveal the truth to the people in the simulation and let them decide if they want to stay in the simulation or move to the "real" world. I expect that at least some people (including me) would choose to move to a higher plane of reality if that were the case.

Furthermore, why not just resurrect all these people into worlds with no suffering? Such worlds would also take up less computing power than our world, so the Friendly AI doing the simulation would have another reason to pursue this option.

Creation of new happy people also seems to be similarly valuable. After all, most arguments against creating new happy people would apply to resurrecting the dead. I would expect most people who oppose the creation of new happy people to oppose the Resurrection Simulation. But leaving that aside, I don't think we need to invoke hedonium here. Simulations full of happy, blissful people would be enough. For example, it is not obvious to me that resurrecting one person into our world is better than creating two happy people in a blissful world. I don't think that my value system is extremely weird, either. A person following regular classical utilitarianism would probably arrive at the same conclusion.

There is an even deeper issue. It might be the case that, somehow, the proposed theory of personal identity fails, and all the "resurrections" would just be creating new people. This would be really unpleasant, considering that now it turns out that the Friendly AI spent more resources to create fewer people who experience more suffering and less ha
2avturchin
  My point is that it is impossible to resurrect anyone (in this model) without him reliving his life again first; after that, he obviously gets an eternal blissful life in the real (not simulated) world. This may not be factually true, by the way: current LLMs can create good models of past people without explicitly running a simulation of their previous lives.

It is a variant of the Doomsday Argument. This idea is even more controversial than the simulation argument. There is no future with many people in it. A Friendly AI can fight the DA curse via simulations, by creating many people who do not know their real position in time, which can be one more argument for simulation, but it requires a rather weird decision theory.
1Dakara
Yup, I agree. This makes my case even stronger! Basically, if a Friendly AI has no issues with simulating conscious beings in general, then we have good reasons to expect it to simulate more observers in blissful worlds than in worlds like ours. If the Doomsday Argument tells us that Friendly AI didn't simulate more observers in blissful worlds than in worlds like ours, then that gives us even more reasons to think that we are not being simulated by a Friendly AI in the way that you have described.
1Satron
I suggest sending this as a comment under his article if you haven't already. I am similarly interested in his response.
3Satron
Here is Richard Carrier's response to avturchin's comment (for the courtesy of those reading this thread later):
2avturchin
I did. It is under moderation now. 

I would put more emphasis on this part:

Even the smartest people I know have a commendable tendency not to take certain ideas seriously.

Indeed, I think this tendency commendable and I do not take these ideas seriously. Like Puddleglum, I ignore and am untroubled by the whispers of evil spirits, even though I may not (yet) have any argument against them. I do not need one. Nor do I need to have an argument for ignoring them. Nor an argument for not looking for arguments. Gold Hat's line comes to mind:

“Arguments? We ain’t got no arguments! We don’t need no arguments! I don’t have to show you any stinking arguments!”

I can show you a vibe instead, if that helps, but it probably doesn't. Somehow, the Simulation Argument seems to me to not be doing enough work to get where it goes.

[ Note: I strongly agree with some parts of jbash's answer, and strongly disagree with other parts. ]

As I understand it, Bostrom's original argument, the one that got traction for being an actually-clever and thought-provoking discursive fork, goes as follows:

  1. Future humans in specific, will do at least one of: [ die off early, run lots of high-fidelity simulations of our universe's history ["ancestor-simulations"], decide not to run such simulations ].

  2. If future humans run lots of high-fidelity ancestor-simulations, then most people who subjectively experience themselves as humans living early in a veridical human history, will in fact be living in non-base-reality simulations of such realities, run by posthumans.

  3. If one grants that our descendants are likely to a] survive, and b] not elect against running vast numbers of ancestor-simulations [ both of which assumptions felt fairly reasonable back in the '00s, before AI doom and the breakdown of societal coordination became such nearly felt prospects ], then we are forced to conclude that we are more likely than not living in one such ancestor-simulation, run by future humans.

It's a valid and neat argument which breaks reality down into a few mutually-exclusive possibilities - all of which feel narratively strange - and forces you to pick your poison.

Since then, Bostrom and others have overextended, confused, and twisted this argument, in unwise attempts to turn it into some kind of all-encompassing anthropic theory. [ I Tweeted about this over the summer. ]

The valid, original version of the Simulation Hypothesis argument relies on the [plausible-seeming!] assumption that posthumans, in particular, will share our human interest in our species' history, in particular, and our penchant for mad science. As soon as your domain of discourse extends outside the class boundaries of "future humans", the Simulation Argument no longer says anything in particular about your anthropic situation. We have no corresponding idea what alien simulators would want, or why they would be interested in us.

Also, despite what Aynonymousprsn123 [and Bostrom!] have implied, the Simulation Hypothesis argument was never actually rooted in any assumptions about local physics. Changing our assumptions about such factors as [e.g.] the spatial infinity of our local universe, quantum decoherence, or a physical Landauer limit, doesn't have any implications for it. [ Unless you want to argue for a physical Landauer limit so restrictive it'd be infeasible for posthumans to run any ancestor-simulations at all. ]

So, while the Simulation Hypothesis argument can imply you're being simulated by posthumans, if and only if you strongly believe posthumans will both of [ a] not die, b] not elect against running lots of ancestor-simulations ], it can't prove you're being simulated in general. It's just not that powerful.

Future humans in specific, will do at least one of: [ die off early, run lots of high-fidelity simulations of our universe's history ["ancestor-simulations"], decide not to run such simulations ]

Is there any specific reason the first option is "die off early" and not "be unable to run lots of high-fidelity simulations"? The latter encompasses the former as well as scenarios where future humans survive but for one reason or the other can't run these simulations.

A more general argument, in my opinion, would look like this:

"Future humans will at leas... (read more)

1Lorec
Yes, I think that's a validly equivalent and more general classification. Although I'd reflect that "survive but lack the power or will to run lots of ancestor-simulations" didn't seem like a plausible-enough future to promote it to consideration, back in the '00s.

My own take on the simulation argument is that it's true, but has no implications, because you can narrow nothing down other than pure logic, so it would be a problem.

(Here, I'm focused on the most general version of the arguments I've seen).

Maybe. But what do you mean by, "you can narrow nothing down other than pure logic"?

I interpret the first part—"you can narrow nothing down"—to mean that the simulation argument doesn't help us make sense of reality. But I don't understand the second part: "other than pure logic." Can you please clarify this statement?

2Noosphere89
Basically, I'm stating that the only thing the simulation hypothesis gives you is tautologies, that is, statements that are true in every world/model. More below: https://en.wikipedia.org/wiki/Tautology_(logic)

I don't see the phrase "self-indication assumption" in here anywhere.

Nothing is obviously wrong with it. I'm not sure what probability to assign it. It's sort of "out of sample". But it seems very plausible to me we are in a simulation. It is really hard to justify probabilities or even settle on them. But when I look inside myself, the numbers that come to mind are 25-30%.

No one has refuted it, ever, in my books


Nor can you refute that my qualia experience of green is what you call red, but because every time I see (and subsequently refer to) my red is the same time you see your red, there is no incongruity to suggest any different. However, I think entertaining such a theory would be a waste of time.

I see the simulation hypothesis as suffering from the same flaws as the Young Earth Theory: both are incompatible with Occam's Razor, or to put it another way, both add unnecessary complexity to a theory of metaphysics without offering additional accuracy or better predictive power. The Young Earth Hypothesis says that fossils and geological phenomena only appear to be older than 6,000 years, but they were intentionally created that way (by the great Simulator in the sky?). This means it also fails to meet an important criterion of modern science: it can't be falsified.

For a theory to be falsifiable makes it valuable, because if it fails, then you've identified a gap between your map of something and the territory, which you can correct. A theory becomes even more valuable if it predicts some counter-intuitive result which hitherto none of our models or theories predicted, yet repeated tests do not falsify it.

The Simulation Hypothesis intrinsically means you cannot identify the gap between your map and the territory, since the territory is just another representation. Nor does it explicitly and specifically identify things which we would expect to be true but aren't: again, because everything would continue to appear as it always has been. So it offers no value there.

The Simulation Hypothesis isn't taken seriously not because it can't be true - so when you see green, I see red - but because you can predict no difference in my or your behavior from knowing this. So what?
 

Nor can you refute that my qualia experience of green is what you call red

But we can. This sort of “epiphenomenal spectrum inversion” is not possible in humans[1], because human color perception is functionally asymmetric (e.g. the “just noticeable difference” between shades of a hue is not invariant under hue rotation, nor is the shape of identified same-color regions or the size of “prototypical color” sub-regions).


  1. We can hypothesize aliens whose color perception works in such a way that allows for epiphenomenal spectrum inversion, but humans are

... (read more)
2CstineSublime
But that surely just describes the retina and the way light passes through the lens (which we can measure, or at least make informed guesses about based on the substances and reflectance/absorption involved)? How do you KNOW that my hue isn't rotated completely differently, since you can't measure it - my experience of it? The wavelengths don't mean a thing.
2Said Achmiz
Absolutely not. What I am talking about has very little to do with "wavelengths". Example:

Consider an orange (that is, the actual fruit), which you have in your hand; and consider a photograph of that same orange, taken from the vantage point of your eye and then displayed on a screen which you hold in your other hand. The orange and the picture of the orange will both look orange (i.e. the color which we perceive as a hybrid of red and yellow), and furthermore they will appear to be the same orange hue.

However, if you compare the spectral power distribution (i.e., which wavelengths are present, and at what total intensity) of the light incident upon your retina that was reflected from the orange, with the spectral power distribution of the light incident upon your retina that was emitted from the displayed picture of that same orange, you will find them to be almost entirely non-overlapping. (Specifically, the former SPD will be dominated by light in the ~590nm band, whereas the latter SPD will have almost no light of that wavelength.) And yet, the perceived color will be the same.

Perceptual colors do not map directly to wavelengths of light.
3CstineSublime
I'm not sure what I'm meant to be convinced by in that Wikipedia article - can you quote the specific passage? I don't understand how that confirms you and I are experiencing the same thing we call orange.

To put it another way, imagine a common device in a comedy of errors: we are in a three-way conversation, and our mutual interlocutor mentions "Bob" and we both nod knowingly. However, this doesn't mean that we are imagining "Bob" to refer to the same person: I could be thinking of animator Bob Clampett, you could be thinking of animator Bob McKimson. Our mutual interlocutor could say "Bob has a distinctive style" - now, assume there is nothing wrong with our hearing. We are getting the same sentence with the same syntax. Yet my mental representation of Bob and his visual style will be different from yours.

In the same way, we could be shown the same calibrated computer screen displaying the same image of an orange, or of a banana, and we may appear to say "yep, that orange is orange", "yep, that banana is a pale yellow" - but how do you know that my mental representation of orange isn't your purple? Whenever I say "purple" I could be mentally experiencing your orange, in the same way that when I heard "Bob" I was making reference to Clampett, not McKimson.

I'll certainly change the analogy if you can explain to me what I'm missing... but I just don't understand.

For an argument against the sim hypothesis see https://lorenzopieri.com/sim_hypothesis/  or the full article https://philpapers.org/rec/PIETSA-6  (The Simplicity Assumption and Some Implications of the Simulation Argument for our Civilization). 

In a nutshell:

0- Suppose, for the sake of contradiction, that we are in a simulation.
1- We are equally likely to be in one of the many simulations.
2- The vast majority of simulations are simple [see paper to understand why this is reasonable].
3- Therefore, we are very likely to be in a simple simulation.
4- Therefore, we should not expect to observe much complexity.
5- But we do observe complexity; therefore, we are very unlikely to be in a simulation. (See the toy calculation after this list.)
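A toy Bayesian rendering of steps 1-5, with invented numbers purely for illustration (the linked paper is where the actual case for premise 2 is made):

```python
# Toy Bayes update corresponding to the argument above. The probabilities are
# invented for illustration only; see the linked paper for the real argument.
prior_sim = 0.5                  # assumed prior credence that we are simulated
p_complex_given_sim = 0.01       # premises 2-4: a simulation this complex is unlikely
p_complex_given_base = 1.0       # base reality is, by observation, this complex

posterior_sim = (p_complex_given_sim * prior_sim) / (
    p_complex_given_sim * prior_sim + p_complex_given_base * (1 - prior_sim)
)
print(f"P(simulation | observed complexity) ≈ {posterior_sim:.3f}")   # ≈ 0.010
```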

Humanity gets to choose whether or not we're in a simulation. If we collectively decide to be the kind of species that ever creates or allows the creation of ancestor simulations, we will presumably turn out to be simulations ourselves. If we want to not be simulations, the course is clear. (This is likely a very near-term decision. Population simulations are already happening, and our civilization hasn't really sorted out how to relate to simulated people.)

Alternatively, maybe reality is just large enough that the simulation/non-simulation distinction isn't really meaningful. Yudkowsky's "realityfluid" concept is an interesting take on simulation-identities. He goes into it in some depth both in the Ultimate Mega-Crossover and in Planecrash.

Bostrom's argument may be underappreciated. You might like Roman Yampolskiy's work if you're deeply interested in exploring the Simulation argument.

10 comments
[-][anonymous]*1510

This isn't an argument against the idea that we have many instantiations[1] in simulations, which I believe we do. My view is that, still, the most impact to be had is (modulo this) through the copies of me which are not in a simulation (where I can improve a very long future by reducing s-/x-risks), so those are the contexts which my decisions should be about effecting from within.

IIUC, this might be a common belief, but I'm not sure. I know at least a few other x-risk focused people believe this.

It's also more relevant for the question of "what choice helps the most beings"; if you feel existential dread over having many simulated instantiations, this may not help with it.

  1. ^

    If there are many copies of one, the question is not "which one am I really?"; you basically become an abstract function choosing how to act for all of them at once.

I have to say, quila, I'm pleasantly surprised that your response above is both plausible and logically coherent—qualities I couldn't find in any of the Reddit responses. Thank you.

However, I have concerns and questions for you.

Most importantly, I worry that if we're currently in a simulation, physics and even logic could be entirely different from what they appear to be. If all our senses are illusory, why should our false map align with the territory outside the simulation? A story like your "Mutual Anthropic Capture" offers hope: a logically sound hypothesis in which our understanding of physics is true. But why should it be? Believing that a simulation exactly matches reality sounds to me like the privileging the hypothesis fallacy.

By the way, I'm also somewhat skeptical of a couple of your assumptions in Mutual Anthropic Capture. Still, I think it's a good idea overall, and some subtle modifications to the idea would probably make it logically sound. I won't bother you about those small issues here, though; I'm more interested in your response to my concern above.

If we have no grasp on anything outside our virtualized reality, all is lost. Therefore I discard my attempts to control those possible worlds.

However, the simulation argument relies on reasoning. For it to go through requires that a number of assumptions hold. Those in turn rely on the question: why would we be simulated? It seems to me the main reason is because we're near a point of high influence in original reality and they want to know what happened - the simulations then are effectively extremely high resolution memories. Therefore, thank those simulating us for the additional units of "existence", and focus on original reality, where there's influence to be had; that's why alien or our own future superintelligences would care what happened.

https://arxiv.org/pdf/1110.6437

Basically, don't freak out about simulations. It's not that different from the older concept "history is watching you". Intense, but not world shatteringly intense.

I think I understand your point. I agree with you: the simulation argument relies on the assumption that physics and logic are the same inside and outside the simulation. In my eyes, that means we may either accept the argument's conclusion or discard that assumption. I'm open to either. You seem to be, too—at least at first. Yet, you immediately avoid discarding the assumption for practical reasons:

If we have no grasp on anything outside our virtualized reality, all is lost.

I agree with this statement, and that's my fear. However, you don't seem to be bothered by the fact. Why not? The strangest thing is that I think you agree with my claim: "The simulation argument should increase our credence that our entire understanding of everything is flawed." Yet somehow, that doesn't frighten you. What do you see that I don't see? Practical concerns don't change the territory outside our false world.

Second:

It seems to me the main reason is because we're near a point of high influence in original reality and they want to know what happened - the simulations then are effectively extremely high resolution memories.

That's surely possible, but I can imagine hundreds of other stories. In most of those stories, altruism from within the simulation has no effect on those outside it. Even worse, there are some stories in which inflicting pain within a simulation is rewarded outside of it. Here's a possible hypothetical:

Imagine humans in base reality create friendly AI. To respect their past, the humans ask the AI to create tons of sims living in different eras. Since some historical info was lost to history, the sims are slightly different from base reality. Therefore, in each sim, there's a chance AI never becomes aligned. Accounting for this possibility, base reality humans decide to end sims in which AI becomes misaligned and replace those sims with paradise sims where everyone is happy.

In the above scenario, both total and average utilitarianism would recommend intentionally creating misaligned AI so that paradise ensues.

I'm sure you can craft even more plausible stories. 

My point is, even if our understanding of physics and logic is correct, I don't see why we ought to privilege the hypothesis that simulations are memories. I also don't see why we ought to privilege the idea that it's in our interest to increase utility within the simulation. Can you please clarify why you're so confident about these notions?

Thank you

We have to infer how reality works somehow.

I've been poking at the philosophy of math recently. It really seems like there's no way to conceive of a universe that is beyond the reach of logic except one that also can't support life. Classic posts include unreasonable effectiveness of mathematics, what numbers could not be, a few others. So then we need epistemology.

We can make all sorts of wacky nested simulations, and any interesting ones - ones that can support organisms (that is, ones that are Turing complete) - can also support processes for predicting outcomes in that universe, and those processes appear to necessarily need to do reasoning about what is "simple" in some sense in order to work. So that seems to hint that algorithmic information theory isn't crazy (unless I just hand waved over a dependency loop, which I totally might have done, it's midnight), which means that we can use the equivalence of Turing complete structures to assume we can infer things about the universe. Maybe not Solomonoff induction, but some form of empirical induction. And then we've justified ordinary reasoning about what's simple.
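A cartoon of the "reasoning about what is simple" being gestured at here: weight candidate hypotheses by 2^(-description length), as a crude stand-in for a Solomonoff-style prior. The hypotheses and their lengths are made up; this is my illustration, not the commenter's.

```python
# Crude simplicity prior: each hypothesis gets weight 2^(-description_length),
# then the weights are normalized. Only a cartoon of Solomonoff-style induction.
hypothesis_lengths_bits = {      # assumed description lengths, in bits
    "simple_rule": 10,
    "medium_rule": 25,
    "baroque_rule": 60,
}

weights = {h: 2.0 ** -bits for h, bits in hypothesis_lengths_bits.items()}
total = sum(weights.values())
for h, w in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"{h}: prior ≈ {w / total:.8f}")
```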

Okay, so we can reason normally about simplicity. What universes produce observers like us and arise from mathematically simple rules? Lots of them, but it seems to me the main ones produce us via base physics, and then because there was an instance in base physics, we also get produced in neighboring civilizations' simulations of what other things base physics might have done in nearby galaxies so as to predict what kind of superintelligent aliens they might be negotiating with before they meet each other. Or, they produce us by base physics, and then we get instantiated again later to figure out what we did. Ancestor sims require very good outcomes which seem rare, so those branches are lower measure anyway, but also ancestor sims don't get to produce super ai separate from the original causal influence.

Point is, no, what's going on in the simulations is nearly entirely irrelevant. We're in base physics somewhere. Get your head out of the simulation clouds and choose what you do in base physics, not based on how it affects your simulators' opinion of the simulation's moral valence. Leave that sort of crazy stuff to friendly ai, you can't understand superintelligent simulators which we can't even get evidence exist besides plausible but very galaxy brain abstract arguments.

(Oh, might be relevant that I'm a halfer when making predictions, thirder when choosing actions - see anthropic decision theory for an intuition on that.)

Thank you, I feel inclined to accept that for now.

But I'm still not sure, and I'll have to think more about this response at some point.

Edit: I'm still on board with what you're generally saying, but I feel skeptical of one claim:

It seems to me the main ones produce us via base physics, and then because there was an instance in base physics, we also get produced in neighboring civilizations' simulations of what other things base physics might have done in nearby galaxies so as to predict what kind of superintelligent aliens they might be negotiating with before they meet each other.

My intuition tells me there will probably be superior methods of gathering information about superintelligent aliens. To me, it seems like the most obvious reason to create sims would be to respect the past for some bizarre ethical reason, or for some weird kind of entertainment, or even to allow future aliens to temporarily live in a more primitive body. Or perhaps for a reason we have yet to understand.

I don't think any of these scenarios would really change the crux of your argument, but still, can you please justify your claim for my curiosity?

Sims are very cheap compared to space travel, and you need to know what you're dealing with in quite a lot of detail before you fly because you want to have mapped the entire space of possible negotiations in an absolutely ridiculous level of detail.

Sims built for this purpose would still be a lot lower detail than reality, but of course that would be indistinguishable from inside if the sim is designed properly. Maybe most kinds of things despawn in the sim when you look away, for example. Only objects which produce an ongoing computation that has influence on the resulting civ would need modeling in detail. Which I suspect would include every human on earth, due to small world effects, the internet, sensitive dependence on initial conditions, etc. Imagine how time travel movies imply the tiniest change can amplify - one needs enough detail to have a good map of that level of thing. Compare weather simulation.

Someone poor in Ghana might die and change the mood of someone working for ai training in Ghana, which subtly affects how the unfriendly AI that goes to space and affects alien civs is produced, or something. Or perhaps there's an uprising when they try to replace all human workers with robots. Modeling what you thought about now helps predict how good you'll be at the danceoff in your local town which affects the posts produced as training data on the public internet. Oh, come to think of it, where are we posting, and on what topic? Perhaps they needed to model your life in enough detail to have tight estimates of your posts, because those posts affect what goes on online.

But most of the argument for continuing to model humans seems to me to be the sensitive dependence on initial conditions, because it means you need an unintuitively high level of modeling detail in order to estimate what von Neumann probe wave is produced.

Still cheap - even in base reality, earth right now is only taking up a little more energy than its tiny silhouette against the sun's energy output in all directions. A Kardashev 2 civ would have no problem fuelling an optimized sim with a trillion trillion samples of possible aliens' origin processes. Probably a superintelligent Kardashev 1 even finds it quite cheap; it could be less than earth's resources to do the entire sim, including all parallel outcomes.
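The "tiny silhouette" point, made numeric with standard astronomical figures (the framing as a simulation budget is my own):

```python
# Fraction of the Sun's output that Earth intercepts: the ratio of Earth's
# cross-sectional disc to the full sphere of radius 1 AU.
import math

R_EARTH = 6.371e6                 # m
AU = 1.496e11                     # m
SOLAR_LUMINOSITY = 3.828e26       # W

fraction = (math.pi * R_EARTH**2) / (4 * math.pi * AU**2)
earth_share_watts = fraction * SOLAR_LUMINOSITY

print(f"fraction of solar output hitting Earth ≈ {fraction:.1e}")   # ~4.5e-10
print(f"≈ {earth_share_watts:.2e} W, vs. ~3.8e26 W for a Kardashev-2 civilization")
```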

I should also add:

I'm pretty worried that we can't understand the universe "properly" even if we're in base physics! It's not yet clearly forbidden that the foundations of philosophy contain unanswerable questions, things where there's a true answer that affects our universe in ways that are not exposed in any way physically, and can only be referred to by theoretical reasoning; which then relies on how well our philosophy and logic foundations actually have the real universe as a possible referent. Even if they do, things could be annoying. In particular, one possible annoying hypothesis would be if the universe is in Turing machines, but is quantum - then in my opinion that's very weird but hey at least we have a set in which the universe is realizable. Real analysis and some related stuff gives us some idea of things that can be reasoned about from within a computation-based understanding of structure, but which are philosophically-possibly-extant structures beyond computation, and whether true reality can contain "actual infinities" is a classic debate.

So sims are small potatoes, IMO. Annoying simulators that want to actively mess up our understandings are clearly possible but seem not particularly likely by models I believe right now; seems to me they'd rather just make minds within their own universe; sims are for pretending to be another timeline or universe to a mind you want to instantiate, whatever your reason for that pretense. If we can grab onto possible worlds well enough, and they aren't messing up our understanding on purpose, then we can reason about plausible base realities and find out we're primarily in a sim by making universe sims ourselves and discovering the easiest way to find ourselves is if we first simulate some alien civ or other.

But if we can't even in principle have a hypothesis space which relates meaningfully to what structures a universe could express, then phew, that's pretty much game over for trying to guess at tegmark 4 and who might simulate us in it or what other base physics was possible or exists physically in some sense.

My giving up on incomprehensible worlds is not a reassuring move, just an unavoidable one. Similar to accepting that if you die in 3 seconds, you can't do much about it. Hope you don't, btw.

But yeah currently seems to me that the majority of sim juice comes from civs who want to get to know the neighbors before they meet, so they can prepare the appropriate welcome mat (tone: cynical). Let's send an actualized preference for strong egalitarianism, yeah? (doesn't currently look likely that we will, would be a lot of changes from here before that became likely.)

(Also, hopefully everything I said works for either structural realism or mathematical universe. Structural realism without mathematical universe would be an example of the way things could be wacky in ways permanently beyond the reach of logic, while still living in a universe where logic mostly works.)

[-][anonymous]10

if we're currently in a simulation, physics and even logic could be entirely different from what they appear to be.

I have another obscure shortform about this! Physical vs metaphysical contingency, about what it would mean for metaphysics (e.g. logic) itself to have been different. (In the case of simulations, it could only be different in a way still capable of containing our metaphysics as a special case, like how in math a more expressive formal system can contain a less expressive one, but not the reverse)

I agree a metaphysically different base world is possible, but I'm not sure how to reason about it. (I think apparent metaphysical paradoxes are some evidence for it, though we might also just be temporarily confused about metaphysics)

Just physics being different is easier to imagine. For example, it could be that the base world is small, and it contains exactly one alien civilization running a simulation in which we appear to observe a large world. But if the base world is small, arguments for simulations which rely on the vastness of the world, like Bostrom's, would no longer hold. And at that point there doesn't seem much reason to expect it, at least for any individual small world.[1] Though it could also be that the base world is large and physically different, and we're in a simulation where we appear to observe a different large world.

Ultimately, while it could be true that there are 0 unsimulated copies of us, still we can have the best impact in the possibilities where there is at least one.[2]

By the way, I'm also somewhat skeptical of a couple of your assumptions in Mutual Anthropic Capture. Still, I think it's a good idea overall, and some subtle modifications to the idea would probably make it logically sound. I won't bother you about those small issues here, though

I'm interested in what they are, I wouldn't be bothered (if you meant that literally). If you want you can reply about it here or on the original thread.

  1. ^

    If we're instead reasoning over the space of all possible mathematical worlds which are 'small' compared to what our observations look like they suggest, then we'd be reasoning about very many individual small worlds (which basically reintroduces the 'there are very many contexts which could choose to simulate us' premise). Some of those small math-worlds will probably run simulations (for example, if some have beings which want to manipulate "the most probable environment" of an AI in a larger mathematical world, to influence that larger math-world)

    In other words: "Conditional on {some singular 'real world' that is somehow special compared to merely mathematical worlds} being small, it probably doesn't contain simulations. But there are certainly many math-worlds that do, because the space of math-worlds is so vast (to the point that some small math-worlds would randomly contain a simulation as part of their starting condition)"

  2. ^

    And there's probably not anything we can do to change our situation in case of possibilities where we don't exist in base reality. (Although I do think 'look for bugs' is something an aligned ASI would want to try, especially when considering that our physics apparently has some simple governing laws, i.e. may have a pretty short program length[3], and it's plausible for a process we'd describe with a short program length to naturally / randomly occur as a process of physical interaction in a much larger base world -- that is to say, there are plausible origins of a simulation which don't involve a superintelligent programmer ensuring there are no edge cases)

  3. ^

    (but no longer short when considering its very complex starting state? I guess it could turn out that that itself is predicted by some simple rule)

Nothing is obviously wrong with it. I'm not sure what probability to assign it. It's sort of "out of sample". But it seems very plausible to me we are in a simulation. It is really hard to justify probabilities or even settle on them. But when I look inside myself, the numbers that come to mind are 25-30%.

This is also obvious but Quantum Wave Function Collapse SURE DOES look like this universe is only being simulated at a certain fidelity. 
