Nick Bostrom's Simulation Argument hit the news recently when a physicist published a blog post about it:
No, we probably don’t live in a computer simulation
Some of the ensuing responses discussed the fidelity at which such a simulation would need to run in order to keep the population living within it guessing as to whether they were in a digital simulation, a topic that has been discussed before on LessWrong:
If a simulation can not only be run, but also loaded from previously saved states and then edited, it should be possible for the simulation's Architect to start it running at low granularity, wait for some inhabitant to notice an anomaly, rewind a little, re-run the relevant parts of the inhabitant's light cone with a more accurate but more computationally intensive algorithm, and edit the saved state to include that additional detail, before setting the simulation running again and waiting for the next anomaly.
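In rough Python, that rewind-and-refine loop might look something like the sketch below. Every function and parameter named here (coarse_step, fine_step, find_anomaly, and so on) is a hypothetical placeholder for whatever machinery the Architect actually has, not a real API.

```python
# A minimal sketch of the rewind-and-refine loop described above.
# Every function passed in is a hypothetical placeholder, not a real API.

def run_with_adaptive_fidelity(initial_state, coarse_step, fine_step,
                               find_anomaly, rewind_depth=100):
    """Run the simulation cheaply, splicing in detail only when an
    inhabitant is about to notice that something is off."""
    history = [initial_state]            # saved states we can rewind to
    state = initial_state

    while True:
        state = coarse_step(state)       # cheap, low-granularity physics
        history.append(state)

        anomaly = find_anomaly(state)    # e.g. an inhabitant's suspicious measurement
        if anomaly is None:
            continue

        # Rewind a little...
        rewind_to = max(0, len(history) - 1 - rewind_depth)
        state = history[rewind_to]
        del history[rewind_to + 1:]

        # ...then re-run only the region that feeds into the observation
        # (the relevant part of the inhabitant's light cone) with the more
        # accurate algorithm, splicing the extra detail into the saved states.
        for _ in range(rewind_depth):
            state = fine_step(state, region=anomaly.light_cone)
            history.append(state)
```

The interesting question, taken up below, is whether the inhabitants could force fine_step to be so expensive that this trick stops being affordable.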
nigerweiss suggested:
construct a system with easy-to-verify but arbitrarily-hard-to-compute behavior ("Project: Piss Off God"), and then scrupulously observe its behavior. Then we could keep making it more expensive until we got to a system that really shouldn't be practically computable in our universe.
but I'm wondering how easy that would be.
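To make the asymmetry nigerweiss is pointing at concrete: for a problem like 3-SAT, checking a proposed answer takes one linear pass over the instance, while every known general method of finding an answer takes exponential time in the worst case. A toy illustration in Python (the clause encoding and the tiny instance are invented purely for the example):

```python
from itertools import product

# A 3-SAT instance: each clause is three literals; a positive number means the
# variable itself, a negative number means its negation.
clauses = [(1, -2, 3), (-1, 2, -3), (2, 3, -1)]
num_vars = 3

def verify(assignment, clauses):
    """Checking a claimed solution is cheap: one pass over the clauses."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

def solve(clauses, num_vars):
    """Finding a solution by brute force costs 2^n in the worst case."""
    for bits in product([False, True], repeat=num_vars):
        assignment = dict(enumerate(bits, start=1))
        if verify(assignment, clauses):
            return assignment
    return None

print(solve(clauses, num_vars))                       # exponential work to find...
print(verify({1: True, 2: True, 3: True}, clauses))   # ...linear work to check
```

The criteria below are about finding a physical system that does the equivalent of solve() for us, for free, so that humanity only ever has to pay the cost of verify().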
The problem would need to be physical (for example, make a net with labelled strands of differing lengths joining the nodes, then hang it from one corner), otherwise humanity would be doing as much computational work as the simulation itself.
The solution should be discrete (for example: which labelled strands make up the limiting path that prevents the lowest point from hanging any further down? A toy version of this net problem is sketched below, after these criteria).
The solution should be hard to obtain not just analytically, but also via numerical approximation.
The problem should be scalable to very large sizes (so, for example, the net problem wouldn't work, because with large nets, making the strands sufficiently different in length that two close solutions could be told apart would become the limiting factor).
And, ideally, the problem would be one that occurs (and is solved) naturally, such that humanity could just record data in multiple locations over a period of years, then later decide which examples of the problem to verify. (See this paper by Scott Aaronson: "NP-complete Problems and Physical Reality")
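For what it's worth, here is a toy model of the hanging-net example, under the idealisation that every strand is inextensible and each node hangs exactly as deep as its shortest strand-path from the suspended corner allows. The Dijkstra call below stands in for what the physical net does for free by hanging; checking a claimed limiting path only requires summing its labelled strand lengths. The net itself (node names and lengths) is made up for illustration:

```python
import heapq

# Strands of the net: (node, node, length). Made-up toy data.
strands = [
    ("corner", "a", 3.0), ("corner", "b", 5.0),
    ("a", "b", 1.0), ("a", "lowest", 9.0),
    ("b", "lowest", 4.0),
]

def hang_depths(strands, top):
    """How far below the suspended corner each node can hang: the shortest-path
    distance through the strands. Dijkstra here stands in for the work the
    physical net does simply by hanging."""
    graph = {}
    for u, v, length in strands:
        graph.setdefault(u, []).append((v, length))
        graph.setdefault(v, []).append((u, length))
    depth = {top: 0.0}
    queue = [(0.0, top)]
    while queue:
        d, node = heapq.heappop(queue)
        if d > depth.get(node, float("inf")):
            continue
        for nxt, length in graph[node]:
            if d + length < depth.get(nxt, float("inf")):
                depth[nxt] = d + length
                heapq.heappush(queue, (d + length, nxt))
    return depth

def verify_limiting_path(path, strands, claimed_depth):
    """Checking an answer is easy: sum the labelled strand lengths."""
    lengths = {frozenset((u, v)): length for u, v, length in strands}
    total = sum(lengths[frozenset(pair)] for pair in zip(path, path[1:]))
    return abs(total - claimed_depth) < 1e-9

depths = hang_depths(strands, "corner")
print(depths["lowest"])                                    # 8.0
print(verify_limiting_path(["corner", "a", "b", "lowest"],
                           strands, depths["lowest"]))     # True
```

As noted above, the net fails the scalability criterion (and shortest paths are easy to compute digitally anyway); it is only meant to show the shape of the proposal: nature does the solving, and we do the cheap verifying.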
Any thoughts?
An idea I keep coming back to, which would push us toward rejecting the idea of being in a simulation, is the fact that the laws of physics remain the same regardless of your reference frame or your place in the universe.
You give the example of a conscious observer recognizing an anomaly and the simulation's runner rewinding time to fix the problem. If only the region within that observer's light cone is re-run, the simulation may show strange new behavior at the edge of that light cone, propagating an error. I don't think the error can be removed so much as moved when dealing with lower-resolution simulations (a toy demonstration of this is sketched below).
It makes the most sense to me that, if we are in a simulation, it is a "perfect" simulation, in the sense that the most fundamental forces and quantum effects are simulated all the time, because they are all, in a way, interacting with each other all the time.
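Here is a toy version of that boundary worry, with one-dimensional cellular automata standing in for physics (rule 110 playing the part of the exact laws and rule 90 the cheap approximation; both choices are arbitrary). Even if the Architect splices perfectly correct state into the re-simulated window, wrong values from the un-patched surroundings leak back into it at one cell per step, the toy universe's "speed of light":

```python
import random

def step(cells, rule):
    """One update of an elementary cellular automaton with wrap-around."""
    n = len(cells)
    return [(rule >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
            for i in range(n)]

random.seed(0)
N, T = 200, 60
WINDOW = range(60, 140)                  # the region the Architect re-simulates
start = [random.randint(0, 1) for _ in range(N)]

exact = coarse = start
for _ in range(T):
    exact = step(exact, 110)             # ground truth: exact physics everywhere
    coarse = step(coarse, 90)            # what the cheap simulation actually produced

# Patch the window with perfectly correct state; everything outside it keeps
# the cheap simulation's (wrong) values. Then run exact physics everywhere.
patched = [exact[i] if i in WINDOW else coarse[i] for i in range(N)]
truth = exact
for t in range(1, 31):
    patched, truth = step(patched, 110), step(truth, 110)
    wrong = sum(patched[i] != truth[i] for i in WINDOW)
    print(f"{t:2d} steps after the patch: {wrong} cells inside the window are wrong")
```

In a typical run the count of wrong cells inside the window starts at zero immediately after the patch and then grows as the mismatch at the edges works its way inward, which is the sense in which the error gets moved rather than removed.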
Companies writing programs to model and display large 3D environments in real time face a similar problem, in that they only have limited resources. One workaround they commonly use is "impostors": cheap stand-in renderings, such as flat billboards, that replace the full models of distant objects.
A solar-system-sized simulation of a civilisation that has not made observable changes to anything outside its own solar system could take a lot of shortcuts when generating the photons that arrive from outside. In particular, until a telescope or camera of a particular resolution has been invented, would the simulators need to bother generating thousands of years' worth of such photons in more detail than could be captured by the devices yet present?
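In rendering terms this is level-of-detail selection plus lazy evaluation: never generate the outside sky at finer detail than any instrument yet invented can resolve, and cache whatever has already been generated. A sketch of that pattern (the sky model, units, and resolution numbers are all invented for illustration):

```python
import hashlib
from functools import lru_cache

# Coordinates and resolutions are in the same (arbitrary) angular units.
finest_invented_resolution = 1.0   # no in-simulation device resolves finer than this yet

@lru_cache(maxsize=None)
def sky_patch(ra_bin, dec_bin, resolution):
    """Deterministically fake the incoming flux for one patch of sky at a given
    angular resolution; cached, so each patch is only ever generated once."""
    seed = f"{ra_bin}:{dec_bin}:{resolution}".encode()
    return int.from_bytes(hashlib.sha256(seed).digest()[:4], "big") / 2**32

def observe(ra, dec, instrument_resolution):
    """Serve an observation, but never at more detail than the best existing
    instrument could capture anyway."""
    resolution = max(instrument_resolution, finest_invented_resolution)
    return sky_patch(int(ra / resolution), int(dec / resolution), resolution)

print(observe(1234.5, 678.9, instrument_resolution=2.0))
```

In a scheme like this, when a better telescope is invented the simulators only have to lower finest_invented_resolution and fill in finer patches on demand, rather than having pre-computed thousands of years of photons at full detail.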