1) Would we really understand a glitch if we saw one? At the most basic level, our best models of reality are strongly counter-intuitive. It's possible that internal observers will incorporate such findings into their own laws of physics. Engineering itself can be said to be applied munchkinry, such as enabling heavier-than-air flight. Never underestimate the ability of humans to get acclimatized to anything!
2) Uncertainty about the actual laws of physics in the parent universe: computation there might be so cheap that they don't have to cut corners in simulations.
3) Retroactive editing of errors, with regular snapshots of the simulation being saved and then manually adjusted when deviations occur. Or simply deleting memories of inaccuracies from the minds of observers.
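To make #3 concrete, here's a minimal Python sketch of that checkpoint-and-rewind loop. Everything in it (the step, check_invariants, and patch callables) is hypothetical scaffolding for illustration, not anything claimed about an actual simulator:

```python
import copy

def run_with_rollback(state, step, check_invariants, patch,
                      n_steps, snapshot_every=100):
    """Advance the simulation, saving regular snapshots and rolling
    back to the last good one (then manually adjusting it) whenever
    a deviation is detected."""
    snapshots = []  # (tick, saved state) pairs
    tick = 0
    while tick < n_steps:
        if tick % snapshot_every == 0:
            snapshots.append((tick, copy.deepcopy(state)))
        state = step(state)
        tick += 1
        if not check_invariants(state):
            # Deviation detected: rewind to the last snapshot and
            # apply a manual fix -- the "retroactive edit" of #3.
            tick, saved = snapshots[-1]
            state = patch(copy.deepcopy(saved))
    return state
```

Observers inside `state` would never see the bad timeline; from their perspective, the patched history is the only one that ever happened.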
That reminds me of the extreme version of #3: a Boltzmann Brain simulation. There's no reason to believe that the simulation has a time dimension, or is somehow calculating one instant to the next. Perhaps it's JUST a single instantaneous experience being simulated, and all of your memory and anticipation are baked into the setup of yourself, and you're terminated just after this experience.
Suppose you were running a simulation, and it had some problems around object permanence, or colors not being quite constant (colors are surprisingly complicated to calculate since some of them depend on quantum effects), or other weird problems. What might you do to help that?
One answer might be to make the intelligences you are simulating ignore the types of errors that your system makes. And it turns out that we are blind to many changes around us!
Or conversely, if you are simulating an intelligence that happens to have change blindness, then you worry a lot less about fidelity in the areas that people mostly miss or ignore anyway.
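A crude sketch of what that fidelity-budgeting might look like, with every name invented for the example rather than taken from anything above:

```python
def full_physics(region):
    return f"{region}: exact simulation"      # the expensive path

def cheap_approximation(region):
    return f"{region}: rough placeholder"     # change blindness hides this

def render(region, attended):
    """Spend full fidelity only where an observer is attending."""
    return full_physics(region) if attended else cheap_approximation(region)

print(render("center of gaze", attended=True))    # gets the real thing
print(render("periphery", attended=False))        # gets the shortcut
```

The design mirrors foveated rendering in VR headsets: you only pay for detail where attention actually lands.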
The point is this: reality seems flawless because your brain assumes it is and ignores cases where it isn't, even when the changes are large, like a completely different person taking over halfway through a conversation, or the numerous continuity errors in movies that almost all bounce right off of us. So I don't think you can take amazing glitch-free continuity as evidence that we're not in a simulation, since we may not see the bugs.
Doesn't that mean you have to do an awful lot of work to design everything in tremendous detail, and also fabricate the back story?
If the simulation were running on a substrate very different from the "reality" being simulated, then it might not have the same resource limitations we're used to, and it might not have any resource-conserving hacks in it.
If you have infinite computing power, and are in a position to simulate all of the physics we can access starting from the most ontologically fundamental rules, with no approximations, quantizations, or whatever, it's relatively easy to write something that won't glitch. To get the physics we seem to have, you might actually have to have "uncountably infinite computing power", but what's special about ℵ₀ anyhow?
Admittedly, I don't know if the entities that existed in such a universe would count as "biological". And if you keep going down that road you start to run into serious questions about what counts as a simulation and what counts as reality, and the next thing you know you're arguing with a bunch of dragonflies and losing.
On the other hand, such entities would be more plausibly able to run whatever random simulations struck their fancies than entities stuck in a world like ours. Anybody operating in our own physics would frankly have to be pretty crazy to waste resources on running this universe.
Or maybe it's actually a really crappy, complicated, buggy simulation, but the people running it detect glitches and stop/rewind every time one happens, and if they can't do that they just edit you so you don't notice it.
It would be evidence after all. Simple explanation: if we did observe a glitch, that would pretty clearly be evidence that we were in a simulation. So by conservation of expected evidence, the absence of glitches is evidence against.
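Spelled out, with H = "we are in a simulation" and E = "we observe a glitch", conservation of expected evidence is just the identity

P(H) = P(H|E)·P(E) + P(H|¬E)·P(¬E)

so if P(H|E) > P(H), then necessarily P(H|¬E) < P(H): every glitch-free observation is a (possibly tiny) update against the simulation hypothesis.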
For all we know, many of the counter-intuitive aspects of modern physics could be bugs. I mean, no one noticed them for several hundred thousand years, until about a century ago. Maybe the speed of causality and the maximum energy density are finite and constant because of limitations on whatever system the Universe runs on.
I think that there are many possible worlds where there are simulation bugs and we just call them physics, just as there are many possible worlds where those same effects exist for completely different reasons. That sounds to me like a case where the net evidence sums to zero.
Epistemic status: speculative fiction.
The problem of consciousness is such a glitch.
Its solution is that our consciousnesses are largely outside the simulation. The world we see is a virtual world that our consciousnesses (whatever those really are) have been placed in. The remarkable correspondences in structure between the physical brain and the whatever-it-is consciousness, as evidenced by the effects of brain lesions on consciousness, are usually taken as showing that consciousness is literally a physical process of the brain. But in [reminder: fictional] fact, it is because the brain has to closely mimic a lot of the structure of consciousness to be the effective VR interface that it is.
It'd be a pretty rookie mistake for the simulator-creators to leave in flaws AND allow agents inside the simulation to notice those flaws. We don't even allow videogame or scripted media agents to notice VERY glaring flaws in those very weak simulations.
It's hard to argue that anyone inside a simulation should be surprised by anything inside that simulation.
Taking the premise at face value for the sake of argument:
You should be surprised by just how many fields of study bottom out in something intractable to simulate or re-derive from first principles.
The substrate that all agents seem to run on is conveniently obfuscated and difficult to understand or simulate ourselves - perhaps intentionally obfuscated to make it unclear what shortcuts are being taken, or whether the minds are running inside the simulation at all.
Likewise, chemistry bottoms out in near-intractable quantum soup, the end result being that almost all related knowledge has to be experimentally determined and compiled into large tables of physical properties. Quantum mechanics does relatively little to constrain this in practice; I think large molecules' and heavy elements' properties could diverge significantly from what we would predict, if we could run large enough QM simulations, without it being detectable.
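A quick back-of-the-envelope in Python shows the wall you hit; the byte count assumes a brute-force complex128 state vector, which is the naive approach rather than the state of the art:

```python
# A full quantum state over n two-level systems needs 2**n complex
# amplitudes; at 16 bytes each, memory alone becomes absurd fast.
for n in (10, 50, 100):
    amplitudes = 2 ** n
    gigabytes = amplitudes * 16 / 1e9
    print(f"{n} particles: {amplitudes:.2e} amplitudes, {gigabytes:.2e} GB")
```

Even 50 interacting particles already demand roughly 18 petabytes; a single large molecule is hopeless.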
It's awfully convenient that most of us spend all our time running on autopilot and then come up with post-hoc justifications of our behavior. We're scarcely more convincing than GPT explaining the actions of a game NPC. I wonder why we're like that... (see point 1).
I'm sure folks could come up with other examples. It's kind of an odd change of pace how science keeps running into bizarre smokescreens everywhere we look after the progress seen in the last few centuries. How many oddities are hiding just a little deeper?
I don't personally find the above persuasive on net, but it's the first tree I'd go barking up if I were giving that hypothesis further consideration.
I'm sure I've missed at least a few.
For those interested in further reading on the subject, we have a list of posts, sorted from oldest to newest, tagged as Simulation Hypothesis on LessWrong.
I have also added the tag to this question.
People do seem to report phenomena which look like they might be simulation-induced glitches.
For example, when people report phenomena which look like "Jungian synchronicity", then on the one hand, we might want to keep an open mind and not necessarily jump to the conclusion that we are seeing "more synchronicity than would be natural under a non-simulation assumption".
It's not that easy to distinguish between effects induced by human psychology and observational biases and effects which exist on their own.
But on the other hand, if one wants to implement a simulation, one wants to save computational resources and compute certain shared things "just once", and if one is willing to allow a higher level of coincidence than normal, one can save tons of computation.
It might be that "glaring" and "obvious" bugs are mostly being fixed, but "subtle bugs" (like "too much synchronicity" due to shared computations) might remain...
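As a toy illustration of how "compute shared things just once" produces synchronicity from the inside, here's a hypothetical memoizing event generator (all names are made up for the example):

```python
import functools, random

@functools.lru_cache(maxsize=None)
def world_event(category: str) -> float:
    # Expensive detail is computed once per category, then reused
    # for every observer who queries it.
    return random.random()

alice = world_event("stranger_on_train")   # computed fresh
bob = world_event("stranger_on_train")     # served from the cache
assert alice == bob  # a guaranteed "coincidence" for Alice and Bob
```

Two observers who believe they sampled independent events got the same cached result; from inside the simulation, that reads as an improbable coincidence.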
Assume we're in a simulation and know it. Should we be surprised by how flawless it seems? We (almost) never encounter situations where we feel like something's off (like "oh, what just happened is the kind of thing we should expect to happen in a simulation rather than in an original biological universe").[1] Or is there any good reason to assume that, in a simulation like the one we might be in, it is normal for us not to observe any obvious bug?
Of course, this is only one of the many considerations we should have in mind while assessing the likelihood that we are in a simulation. I just happen to wonder about this one, right now.
Obviously, if we're in a simulation, we don't know what original biological worlds look like, but we can probably make some guesses about what generally differs between them and simulations. For example, say I enter an empty room, and objects "magically" appear in it as I walk through it. That has strong simulation vibes.
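That vibe maps neatly onto lazy evaluation: nothing in the room exists until it's first observed. A toy sketch, with every name invented for illustration:

```python
import random

class LazyRoom:
    """Contents are generated on first observation, not in advance."""
    def __init__(self, seed):
        self._rng = random.Random(seed)
        self._contents = {}  # empty until someone looks

    def look_at(self, spot):
        if spot not in self._contents:
            # The object springs into being the moment it's observed...
            self._contents[spot] = self._rng.choice(
                ["chair", "lamp", "dust", "nothing"])
        return self._contents[spot]

room = LazyRoom(seed=42)
print(room.look_at("corner"))  # generated on demand
print(room.look_at("corner"))  # ...but consistent ever after
```

A well-built simulation would presumably hide the generation step, which is exactly why catching it mid-instantiation would feel like a glitch.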