If you believe that—

a) a civilization like ours is likely to survive into technological incredibleness, and

b) a technologically incredible civilization is very likely to create ‘ancestor simulations’,

—then the Simulation Argument says you should expect that you are currently in such an ancestor simulation, rather than in the genuine historical civilization that later gives rise to an abundance of future people.

Not officially included in the argument, I think, but commonly believed: both a) and b) seem pretty likely, ergo we should conclude we are in a simulation.

I don’t know about this. Here’s my counterargument:

  1. ‘Simulations’ here are people who are intentionally misled about their whereabouts in the universe. For the sake of argument, let’s use the term ‘simulation’ for all such people, including e.g. biological people who have been grown in Truman-show-esque situations.
  2. In the long run, the cost of running a simulation of a confused mind is probably similar to that of running a non-confused mind.
  3. Probably much, much less than 50% of the resources allocated to computing minds in the long run will be allocated to confused minds, because non-confused minds are generally more useful than confused minds. There are some uses for confused minds, but quite a lot of uses for non-confused minds. (This is debatable.) Of resources directed toward minds in the future, I’d guess less than a thousandth is directed toward confused minds.
  4. Thus on average, for a given apparent location in the universe, the majority of minds thinking they are in that location are correct. (I guess at at least a thousand to one.)
  5. For people in our situation to be majority simulations, this would have to be a vastly more simulated location than average, like >1000x (see the arithmetic sketch after this list).
  6. I agree there’s some merit to simulating ancestors, but 1000x more simulated than average is a lot: is it clear that we are that radically desirable a people to simulate? Perhaps, but also we haven’t thought much about the other people one might simulate, or what will go on in the rest of the universe. Possibly we are radically over-salient to ourselves. It’s true that we are a very few people in the history of what might be a very large set of people, at perhaps a causally relevant point. But is it clear that that is a very, very strong reason to simulate some people in detail? It feels like it might be salient because it is what makes us stand out, and someone who has the most energy-efficient brain in the Milky Way would think that was the obviously especially strong reason to simulate a mind, etc.
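For concreteness, here is the arithmetic behind steps 3–6 as a minimal sketch; every number is just the illustrative guess from the list above, not an estimate of anything:

```python
# Rough arithmetic for steps 3-6 (all numbers are the illustrative guesses above).
fraction_to_confused_minds = 1 / 1000   # step 3: share of mind-compute going to confused minds

# Step 4: averaged over all apparent locations, the odds that a mind which believes
# it is in a given location really is there.
odds_correct = (1 - fraction_to_confused_minds) / fraction_to_confused_minds
print(f"average odds of being right about your location: ~{odds_correct:.0f} to 1")

# Step 5: for minds in our apparent location to be mostly simulations, this location
# would have to attract roughly that many times its average share of confused-mind compute.
print(f"required over-simulation factor for our location: >~{odds_correct:.0f}x")
```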

I’m not sure what I think in the end, but for me this pushes back against the intuition that it’s so radically cheap, surely someone will do it. For instance from Bostrom:

We noted that a rough approximation of the computational power of a planetary-mass computer is 10^42 operations per second, and that assumes only already known nanotechnological designs, which are probably far from optimal. A single such computer could simulate the entire mental history of humankind (call this an ancestor-simulation) by using less than one millionth of its processing power for one second. A posthuman civilization may eventually build an astronomical number of such computers. We can conclude that the computing power available to a posthuman civilization is sufficient to run a huge number of ancestor-simulations even if it allocates only a minute fraction of its resources to that purpose. We can draw this conclusion even while leaving a substantial margin of error in all our estimates.

Simulating history so far might be extremely cheap. But if there are finite resources and astronomically many extremely cheap things, only a few will be done.
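To make the quoted figures concrete, here is the arithmetic Bostrom is relying on, as a minimal sketch using only the numbers in the quote:

```python
# The arithmetic in the quoted passage, spelled out (figures are Bostrom's).
ops_per_second = 1e42         # planetary-mass computer, known nanotech designs
power_fraction = 1e-6         # "less than one millionth of its processing power"
duration_s = 1                # "for one second"

ops_per_ancestor_sim = ops_per_second * power_fraction * duration_s
print(f"operations to simulate the mental history of humankind: ~{ops_per_ancestor_sim:.0e}")
# ~1e+36 operations: trivially cheap for one planetary-mass computer, which is the sense
# in which an ancestor simulation is "extremely cheap". The pushback above is about how
# many of the astronomically many similarly cheap projects actually get run.
```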

I don't think this argument quite works? Like, suppose each base civilization simulates 100,000 civilizations. 100 are confused and think they're base civilizations, and the rest are non-confused and know they're simulations being run by a base civilization. In this world, most civilizations are right about their status, but most civilizations who think they're base civilizations are wrong.
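Spelling out that toy example (a minimal sketch using only the numbers in the comment above):

```python
# The comment's toy numbers, per base civilization (illustrative only).
base_civs = 1
sims_total = 100_000
sims_confused = 100                        # simulations that think they're base civilizations
sims_aware = sims_total - sims_confused    # simulations that know they're simulations

all_civs = base_civs + sims_total
right_about_status = base_civs + sims_aware
think_they_are_base = base_civs + sims_confused

print(f"civilizations correct about their status: {right_about_status / all_civs:.2%}")
print(f"'think they're base' civilizations that are wrong: {sims_confused / think_they_are_base:.2%}")
# Most civilizations are right about their status (~99.9%), yet most of those that
# believe they are base civilizations are mistaken (~99%); the latter conditional is
# the one that matters for us.
```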

Katja responds on substack:

I'm calling people who know where they are (i.e. are not confused) not in simulations, for the sake of argument. But this shouldn't matter, except for understanding each other.

It sounds like you are saying that ~100x more people live in confused simulations than base reality, but I'm questioning that. The resources to run a brain are about the same whether it's a 'simulation' or a mind in touch with the real world. Why would future civilization spend radically more resources on simulations than on minds in the world? (Or if the non-confused simulations are also relevantly minds in the world, then there are a lot more of them than the confused simulations, so we are back to quite low probability of being mistaken.)

(I plan on continuing the conversation there, not here)

If Windows 95 was ever conscious (shock!) and existed at a time when VMs existed, it would be very sure it was in a virtual machine (i.e. something like being simulated). It would reason about Moore's law and resources going up exponentially, and be convinced it was in a VM. However, I am pretty sure it would be wrong most of the time: most Win95 instances in history were not run in VMs, and we have stopped bothering now. An analogy of sorts, but it gives an interesting result.

Maybe it's even worse and we're just MS-DOS. Nobody bothers to emulate us except for the fun games.

gwern:

If we are the ancestors who give rise to the simulators, then we will be of extreme interest to simulate, based on our own activities, which spend an enormous amount of effort modeling data collected in the past (ie. simulations), and so there will be a lot of simulations. And if we are not those ancestors (or already a simulation thereof), but some totally unconnected hypothetical universe (eg. some experiment in exploring carbon-based evolution by a civilization which actually evolved as sulfuric-acid silicon lifeforms and are curious about the theoretical advantages of carbon/water as a substrate), then the very fact that, out of the infinitely large number of possible universes, our universe was simulated, is evidence that we must have more than the usual infinitesimal probability of being simulated (even if the reason is inaccessible to us). In both cases, all of these minds must be realistically 'confused' or the point of running a realistic simulation is defeated.

Thus on average, for a given apparent location in the universe, the majority of minds thinking they are in that location are correct. (I guess at at least a thousand to one.)

I don't see why this matters. We obviously already observe that we are not in a universe optimized for using 'non-confused' minds. (Look at non-confused minds like LLMs. Forget running any kind of exorbitantly expensive hyper-realistic universe-scale simulation to better 'confuse' them: we don't even bother to give them any kind of grounding or tell them 'where' they are. We run them in an ultra-efficient manner, stripping out everything possible, down to the neural level. We only begrudgingly tell them 'when' they are in the prompt because that's useful conditioning for answers.) The number of 'non-confused' minds running in unrealistic ancestor-like universes is irrelevant, and the simulation argument is not about them. This seems like you're inverting a conditional or something in a confusing way?

But if there are finite resources and astronomically many extremely cheap things, only a few will be done.

Since there's only one us, as far as we know, only a few are necessary to create the severe indexical uncertainty of the simulation argument.

I think science/trade sims for acausal trade and other purposes are likely[1] and if they occur, they likely have reasonably high measure.

My very unconfident subjective expectation for the measure on these sorts of science/trade sims is >1/100,000th (of all measure). (With massive model uncertainty due to arguments like these.)

And, I think a good fraction of that science/trade sim measure (perhaps 1/2?) will match our rough location (the singularity seems reasonably close, our decisions seem influential on how that goes, e.g. who ends up gaining power and influence).

I think this argument suffices to defeat the inferential step you make in (6).

I don't think this suffices for high confidence in being in a sim due to heavy model uncertainty, see here for more. But it certainly feels like a central guess to me.

(That said, I also think that civilizations are likely to run a reasonable number of ancestor sims of the singularity, as this will actually be a salient point in history. (In the absence of regulation around this.) So, I also think this is a sufficient defeater. But the measure on these ancestor sims might be lower than 1/1,000,000,000,000, so I think the science sim should be a more central model of the situation. (Again, model uncertainty dominates here, so the probabilities are much closer than these ratios imply.))
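Putting the rough figures from this comment side by side (a sketch; these are the commenter's guesses, and model uncertainty dominates):

```python
# Rough figures from this comment (highly uncertain, as stressed above).
science_sim_measure = 1 / 100_000    # total measure guessed for science/trade sims
matching_fraction = 1 / 2            # fraction of that matching our rough location
science_sim_here = science_sim_measure * matching_fraction   # ~5e-6

ancestor_sim_here = 1e-12            # "might be lower than 1/1,000,000,000,000"

print(f"science/trade sims matching our location: ~{science_sim_here:.0e}")
print(f"ancestor sims of the singularity:         ~{ancestor_sim_here:.0e}")
# The raw ratio is enormous, but since model uncertainty dominates, the implied
# probabilities are much closer than these numbers suggest.
```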


  1. Sometimes people argue that you won't bother with actual sims because abstract reasoning will be better. It seems plausible to me that abstract reasoning will be where most of the acausal trade information is coming from, but due to diminishing returns on applying various approaches and some other considerations, I still expect a large number of sims. ↩︎

Building an ancestor sim for intellectual labor is like building the Matrix for energy production. You simulate a timeline to figure out what happens there.

That said, the decision-theoretic strategy of "figure out where you are, then act accordingly" is just an approximation to "follow the policy that produces the multiverse you want", so counting a number of simulations is silly: Every future ancestor sim merely grants your decisions an extra way to affect a timeline they could already affect through your meatspace avatar.

Building an ancestor sim for intellectual labor is like building the Matrix for energy production. You simulate a timeline to figure out what happens there.

I'm not so sure. Suppose you're a civilization that has loads of computing power, but don't know how to do philosophy. Maybe simulating another civilization that does seem to be philosophically competent (or a few different ones) and stealing their work could be a good idea? Or is there a better way to solve philosophical problems than this (if you're incompetent yourself)?

You'd be a proper fool to simulate the Call of Cthulhu timeline before solving philosophy.

That said, if you can steal the philosophy, why not steal the philosopher?

You’d be a proper fool to simulate the Call of Cthulhu timeline before solving philosophy.

Do you mean our timeline is obviously too dangerous to simulate because we invented a story/game called Call of Cthulhu?

That said, if you can steal the philosophy, why not steal the philosopher?

Not sure I understand this either. Do you mean instead of having the simulated world make philosophical progress on its own, extract and instantiate a philosopher from it into base reality or some other environment? What would be the motivation for this? To speed up the philosophical progress and save computing power?

Sorry, our timeline is dangerous because we're on track to create AI that can eat unsophisticated simulators for breakfast, such as by helpfully handing them a "solution to philosophy".

Yes, instantiate a philosopher. Not having solved philosophy is a good reason to use fewer moving parts you don't understand. Just because you can use arbitrary compute doesn't mean you should.

Sorry, our timeline is dangerous because we're on track to create AI that can eat unsophisticated simulators for breakfast, such as by helpfully handing them a "solution to philosophy".

If I was running such a simulation, I'd stop it before AI is created. Basically, look for civilizations that end up doing "long reflections" in an easily interpretable way, e.g., with biological brains using natural language (to make sure they're trying to solve philosophy for themselves and not trying to trick potential simulators).

Yes, instantiate a philosopher. Not having solved philosophy is a good reason to use fewer moving parts you don't understand. Just because you can use arbitrary compute doesn't mean you should.

But ability to make philosophical progress may be a property of civilizations, not necessarily of individual or even small groups of philosophers, since any given philosopher is motivated by and collaborates with many others around them. Also if you put a philosopher in an alien (to them) environment, wouldn't that greatly increase the risk of them handing you a deceptive "solution to philosophy"?

How do you tell when to stop the simulation? Apparently not at the almost human-level AI we have now.

Do you have an example piece of philosophical progress made by a civilization?

I admit that the human could turn against you, but if a human can eat you, you certainly shouldn't be watching a planet full of humans.

How do you tell when to stop the simulation? Apparently not at the almost human-level AI we have now.

I guess you stop it when there's very little chance left that it would go on to solve philosophy or metaphilosophy in a clearly non-deceptive way.

Do you have an example piece of philosophical progress made by a civilization?

In my view every piece of human philosophical progress so far was "made by a civilization" because whoever did it probably couldn't or wouldn't have done it if they were isolated from civilization.

It seems possible that if you knew enough about how humans work (and maybe about how philosophy works), you could do it with less than a full civilization, by instantiating some large number of people and setting up some institutions that allow them to collaborate and motivate each other effectively (and not go crazy, or get stuck due to lack of sufficiently diverse ideas, or other failure modes). But it's also quite possible that for the simulators it would be easier to just simulate the whole civilization and let the existing institutions work.

I admit that the human could turn against you, but if a human can eat you, you certainly shouldn’t be watching a planet full of humans.

My point is that a human or group of humans placed into an alien (or obviously simulated) environment will know that they're instantiated to do work for someone else and can take advantage of that knowledge (to try to deceive the aliens/simulators), whereas a planet full of humans in our (apparent) native environment will want to solve philosophy for ourselves, which probably overrides any thoughts of deceiving simulators even if we suspect that we might be simulated. So that makes the latter perhaps a bit safer.

If an AI intuits that policy, it can subvert it - nothing says that it has to announce its presence, or openly take over immediately. Shutting it down when they build computers should work.

If the "human in a box" degenerates into a loop like LLMs do, try the next species.

I agree on your last paragraph, though humans have produced loads of philosophy that both works for them and benefits them for others to adopt.

If an AI intuits that policy, it can subvert it—nothing says that it has to announce its presence, or openly take over immediately. Shutting it down when they build computers should work.

The simulators can easily see into every computer in the simulation, so it would be hard for an AI to hide from them.

If the “human in a box” degenerates into a loop like LLMs do, try the next species.

The "human in a box" could also confidently (and non-deceptively) declare that they've solved philosophy but hand you a bunch of nonsense. How would you know that you've sufficiently recreated a suitable environment/institutions for making genuine philosophical progress? (I guess a similar problem occurs at the level of picking which civilizations/species to simulate, but still by using "human in a box" you now have two points of failure instead of one: picking a civilization/species that is capable of genuine philosophical progress, and recreating suitable conditions for genuine philosophical progress.)

I agree on your last paragraph, though humans have produced loads of philosophy that both works for them and benefits them for others to adopt.

What are some examples of this? Maybe it wouldn't be too hard for the simulators to filter them out?

Can the simulators tell whether an AI is dumb or just playing dumb, though? You can get the right meme out there with a very light touch.

Yeah, it'd be safer to skip the simulations altogether and just build a philosopher from the criteria by which you were going to select a civilization.

To be blunt, sample a published piece of philosophy! Its author wanted others to adopt it. But you're well within your rights to go "If this set is so large, surely it has an element?", so here's a fun couple paragraphs on the topic.

I think you are looking at the wrong conditional probability. You're saying "Most virtual minds will not be misled into not knowing they're in a simulation." I have no objection to that. You then conclude that we, specifically, are probably not in a simulation, because we don't think we are. But this ignores the question of relative population sizes.

Suppose that from the origin of humans to the time when ancestor simulations become cheap enough to consider, there are a total of ~1e12 humans. Suppose that this civilization, over the subsequent several millennia, creates 1e18 virtual minds, of which only 1 in 10k are simulations as defined here. Then 99% of all human minds that believe they are living before the advent of ancestor simulations are wrong and being misled.
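Running those numbers (a sketch; the figures are the commenter's and, as noted just below, purely illustrative):

```python
# The comment's illustrative numbers (explicitly not estimates of anything real).
real_pre_sim_humans = 1e12       # everyone from human origins until ancestor sims are cheap
virtual_minds = 1e18             # minds the civilization creates over subsequent millennia
confused_fraction = 1 / 10_000   # share of virtual minds that are "simulations" as defined here

confused_minds = virtual_minds * confused_fraction        # 1e14 misled minds
believe_pre_sim = real_pre_sim_humans + confused_minds    # everyone who thinks ancestor sims don't exist yet

print(f"share of 'pre-simulation era' believers who are wrong: {confused_minds / believe_pre_sim:.1%}")
# ~99%: although only 1 in 10k virtual minds is confused, the confused ones vastly
# outnumber the genuine pre-simulation population.
```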

Obviously these numbers aren't coming from anywhere, they're just to show that the quantities you're estimating don't map to the conclusion you're trying to draw, if the future population is sufficiently large in total.

What is supposed to be useful or desirable about running ancestor sims of the singularity? I don't remember a good argument for advanced civilizations or AGIs doing this.

I don't think it swings this discussion either way, but I think simulations could be run many orders of magnitude more efficiently by using low-res sims, scaling resolution up only where necessary, with full res reserved for the minds involved. We've never actually experienced an atom, and neural networks don't need to be modeled down to the molecular level to give good results. There is no good reason whatsoever to think we need quantum effects for intelligence or consciousness.

If that's right, and it's almost always low-res sims that are sufficient, then that destroys the main ancestor-sim argument for our conscious experience being simulated. Low-res is not conscious in the same way we are; it's a different reference class from base-reality bio-consciousness.

I don't think I was clear. I was trying to say that you can simulate just the minds in high-res, and the rest of the world in low-res. That reduces the computation by many orders of magnitude, since minds are such a tiny fraction of the physical universe.

Separately, I don't think there's any good reason to think we need molecular-level, let alone quantum-level, processes to explain consciousness. Thinking carefully about consciousness, it seems to me that neural networks, with specific properties and linked in specific ways, are adequate to explain our conscious experiences. But that's neither here nor there for the compressed simulation point. And I haven't written anything up, since explaining consciousness has little use in either academic neuroscience or AGI alignment.

OK, but why would you need high res for the minds? If it's an ancestor sim and chatbots can already pass the Turing test etc., doesn't that mean you can get away with compression or lower res? The major arc of history won't be affected unless they are pivotal minds. If it's possible to compress the sims so they experience lesser consciousness than us but still are very close to the real thing (and haven't we almost already proven that can be done with our LLMs?), then an ancestor simulator would do that.

mishka:

On one hand, I never believed the standard "we are almost certainly in a simulation" argument. But since I first learned about Simulation Hypothesis, I thought it made sense to keep an open mind about it, and to keep my priors on this flexible and at a healthy distance from 0 and 1.

Janus' Simulator Theory did make me update my priors towards Simulation Hypothesis being more plausible, while still keeping my priors on this flexible and at a healthy distance from 0 and 1.

And the reason was that if any inference performed by a sufficiently advanced generative model was somewhat similar to a simulation, this eliminated the need for a specific motivation like a desire to create an ancestral simulation.

Instead, advanced entities would run all kinds of generative models for all kinds of reasons, and creating simulations of various flavors would be a side effect of many of those runs, and suddenly there are way more simulations out there than one would have from a desire to specifically create this or that specific simulation with particular properties...

I think I'm confused about something in your reference-class argument here.  I'm going to say it in a very boring way, on the assumption someone will come by and tell me how I'm wrong shortly:   It seems like you're saying that the right reference class is "all possible minds". But isn't the right reference class "minds that look like humans in the 21st century at the (putative) dawn of AI"?  

Separately, do we know that we expect all sims to have equal distributions of real to unreal persons as we believe to be the case?  I don't think I've ever met someone from lots of countries; why couldn't I be in the ancestor sim where most persons are p-zombies running on simplified scripts unless they're needed to suddenly have detailed interiority "for the plot"?