Nick Bostrom's Simulation Argument hit the news recently when a physicist published a blog post about it:
No, we probably don’t live in a computer simulation
Some of the ensuing responses discussed the fidelity at which such a simulation would need to run in order to keep the population living within it guessing about whether they were in a digital simulation, a topic that has been discussed before on LessWrong:
If a simulation can be not just run but also loaded from previously saved states and then edited, it should be possible for the simulation's Architect to start it running at low granularity, wait for some inhabitant to notice an anomaly, rewind a little, apply a more accurate but more compute-intensive algorithm to the relevant parts of that inhabitant's light cone, and edit the saved state to include the additional detail, before setting the simulation running again and waiting for the next anomaly.
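To make the control flow concrete, here is a minimal sketch of that rewind-and-patch loop. Every name in it (run_step, detect_anomaly, refine_region, the ten-tick rewind) is a hypothetical stand-in of mine, not anything from Bostrom's paper or the linked post:

```python
# Toy sketch of the rewind-and-patch loop described above: run cheaply,
# rewind on anomaly, splice in expensive detail, resume.

def run_step(state, fidelity):
    """Advance the simulation one tick at the given fidelity (stub)."""
    return {"tick": state["tick"] + 1, "fidelity": fidelity}

def detect_anomaly(state):
    """Return the region where an inhabitant noticed something odd, or None (stub)."""
    return None

def refine_region(checkpoint, region):
    """Recompute the region with the expensive algorithm and splice it in (stub)."""
    return {**checkpoint, "fidelity": "high"}

state = {"tick": 0, "fidelity": "low"}
checkpoints = [state]

for _ in range(1000):
    state = run_step(state, state["fidelity"])
    checkpoints.append(state)
    region = detect_anomaly(state)
    if region is not None:
        # Rewind a little, add the extra detail, then let it run on.
        rewind_to = max(0, len(checkpoints) - 10)
        state = refine_region(checkpoints[rewind_to], region)
        checkpoints = checkpoints[:rewind_to + 1] + [state]
```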
nigerweiss suggested:
construct a system with easy-to-verify but arbitrarily-hard-to-compute behavior ("Project: Piss Off God"), and then scrupulously observe its behavior. Then we could keep making it more expensive until we got to a system that really shouldn't be practically computable in our universe.
but I'm wondering how easy that would be.
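For what it's worth, the standard toy illustration of that asymmetry is something like factoring (my example, not nigerweiss's): checking a claimed answer takes one multiplication, while finding it is, as far as we know, enormously harder. The catch, which the requirements below try to address, is that with a purely mathematical puzzle like this humanity has to do the hard part itself.

```python
# Easy to verify, hard to compute: a claimed factorisation is checked with one
# multiplication, but finding the factors of a large semiprime is believed to
# be far harder. The numbers here are small Mersenne primes used as placeholders.
p = 2**61 - 1
q = 2**89 - 1
N = p * q

def verify_factorisation(n, a, b):
    return 1 < a < n and 1 < b < n and a * b == n

print(verify_factorisation(N, p, q))  # True, and checked essentially instantly
```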
The problem would need to be physical (for example, make a net with labelled strands of differing lengths joining the nodes, then hang it from one corner; a verification sketch follows these requirements), or else humanity would have to do as much work as the simulation.
The solution should be discrete (for example: which labelled strands make up the limiting path that stops the lowest point from hanging any further down?).
The solution should be hard to obtain not just analytically but also via numerical approximation.
The problem should scale to very large sizes (so the net problem wouldn't actually work: with a large net, making the strands sufficiently different in length that two near-equal candidate paths could be told apart would become the limiting factor).
And, ideally, the problem would be one that occurs (and is solved) naturally, such that humanity could just record data in multiple locations over a period of years, then later decide which examples of the problem to verify. (See this paper by Scott Aaronson: "NP-complete Problems and Physical Reality")
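On the net example, my reading is that hanging the net from one corner lets gravity find, for each node, the tightest chain of strands back to that corner, i.e. a shortest path, so digital verification could be as simple as running Dijkstra over the recorded strand lengths and comparing the taut strands against the computed limiting path. A minimal sketch with made-up strand data:

```python
# Verify the physically "computed" limiting path with Dijkstra's algorithm.
# strands: {node: [(neighbour, length, label), ...]}; all data below is invented.
import heapq

def limiting_path(strands, corner, target):
    dist, back = {corner: 0.0}, {}
    heap = [(0.0, corner)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, length, label in strands.get(u, []):
            if d + length < dist.get(v, float("inf")):
                dist[v] = d + length
                back[v] = (u, label)
                heapq.heappush(heap, (dist[v], v))
    labels = []
    while target != corner:
        target, label = back[target]
        labels.append(label)
    return dist, list(reversed(labels))

strands = {
    "corner": [("a", 1.0, "s1"), ("b", 2.5, "s2")],
    "a": [("corner", 1.0, "s1"), ("b", 1.0, "s3"), ("low", 3.0, "s4")],
    "b": [("corner", 2.5, "s2"), ("a", 1.0, "s3"), ("low", 1.2, "s5")],
    "low": [("a", 3.0, "s4"), ("b", 1.2, "s5")],
}
print(limiting_path(strands, "corner", "low")[1])  # labels on the taut (limiting) path
```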
Any thoughts?
If it is the case that we are in a "perfect" simulation, I would consider that no different than being in a non-simulation. The concept of being "in a simulation" is useful only insofar as it predicts some future observation. Given the various multiverses that are likely to exist, any perfect simulation an agent might run is probably just duplicating a naturally-occurring mathematical object which, depending on your definitions, already "exists" in baseline reality.
The key question, then, is not whether some simulation of us exists (nearly guaranteed) but how likely we are to encounter an imperfection or interference that would differentiate the simulation from the stand-alone "perfect" universe. Once that happens, we are tied in to the world one level up and should be able to interact with it.
There's not much evidence about the likelihood of a simulation being imperfect. Maybe imperfect simulations are more common than perfect ones because they're more computationally tractable, but that's not a lot to go on.
I 100% agree that a "perfect simulation" and a non-simulation are essentially the same, noting Lumifer's comment that, in the simulation case, our programmer(s) are gods by another name.
My comment is really about your second paragraph: how likely are we to see an imperfection? My reasoning about error propagation in an imperfect simulation would imply a fairly high probability of us seeing an error eventually. This is assuming that we are a near-perfect simulation of the universe "above" ours, with "perfect" simulation being ...
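To put a rough number on "fairly high probability eventually" (my own back-of-the-envelope, with made-up figures): if each observation window independently carries some tiny chance p of exposing an error, the probability of catching at least one over N windows is 1 - (1 - p)^N, which climbs toward 1 surprisingly quickly.

```python
import math

# Back-of-the-envelope with assumed numbers: p is the per-window probability
# of a detectable glitch; the chance of seeing at least one over N windows
# is 1 - (1 - p)^N.
p = 1e-12  # assumed per-window glitch probability
for N in (1e6, 1e9, 1e12, 1e15):
    prob = -math.expm1(N * math.log1p(-p))  # numerically stable 1 - (1 - p)^N
    print(f"N = {N:.0e}: P(at least one glitch) ~ {prob:.3g}")
```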