nitrat665 comments on Astronomy, space exploration and the Great Filter - Less Wrong

Post author: JoshuaZ 19 April 2015 07:26PM (23 points)

Comment author: JoshuaZ 19 April 2015 10:54:49PM 3 points

First, everything in any practical simulation is always and everywhere an approximation. An exact method is an enormously stupid idea - a huge waste of resources.

We haven't seen anything like evidence that our laws of physics are only approximations at all. If we're in a simulation, this implies that with high probability either a) the laws of physics in the parent universe are not our own laws of physics (in which case the entire idea of ancestor simulations fails) or b) they are engaging in an extremely detailed simulation.

The optimal techniques only simulate down to the quantum level when a simulated scientist/observer actually does a quantum experiment. In an optimal simulated world, stuff literally only exists to the extent observers are observing or thinking about it.

And our simulating entities would be able to tell that someone was doing a deliberate experiment how?

The limits of optimal approximation appear to be linear in observer complexity - using output-sensitive algorithms.

I'm not sure what you mean by this. Can you expand?

The upshot of these results is that one cannot make a detailed simulation of an object without using at least as many resources as the object itself.

Ultra-detailed accurate simulations are only high value for quantum-level phenomena. Once you have a good model of the quantum scale, you can percolate those results up to improve your nano-scale models, and then your micro-scale models, and then your millimeter-scale models, and so on.

Only up to a point. It is going to be, for example, very difficult to percolate simulations up from the micro to the millimeter scale for many issues, and the less detail in a simulation, the more likely it is that someone will notice a statistical artifact in weakly simulated data.
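To make the artifact worry concrete, here is a toy sketch in Python (the Poisson-versus-Gaussian substitution and all the numbers are stand-ins I made up, not anything from real physics): a cheap simulation hands the experimenter counts drawn from a rounded Gaussian with the right mean and variance instead of the exact counting law, and an ordinary goodness-of-fit test flags the difference once enough data has been collected.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mean = 4.0
n_samples = 200_000

# What the cheap simulation hands the experimenter: a rounded Gaussian with
# the right mean and variance, instead of the exact Poisson counting law.
cheap = np.rint(rng.normal(mean, np.sqrt(mean), n_samples)).astype(int)
cheap = cheap[cheap >= 0]

# The simulated experimenter bins the counts and compares them with the
# distribution the real physics would predict.
ks = np.arange(0, 13)
observed = np.array([(cheap == k).sum() for k in ks])
expected = stats.poisson.pmf(ks, mean)
expected = expected / expected.sum() * observed.sum()

chi2, p = stats.chisquare(observed, expected)
print(f"chi-square = {chi2:.1f}, p = {p:.3g}")  # a tiny p-value flags the artifact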

We can already simulate entire planets using the tiny resources of today's machines. I myself have created several SOTA real-time planetary renderers back in the day.

Again, the statistical artifact problem comes up, especially when there are extremely subtle issues going on, such as potential differences in the behavior of neutrinos.

Your basic point that I may be overestimating the difficulty of simulations may be valid. But since simulations fail to explain the Great Filter for the other reasons I discussed, this causes an update in the direction of our being in a simulation without really helping to explain the Great Filter at all.

Comment author: nitrat665 20 April 2015 05:41:34PM * 3 points

We haven't seen anything like evidence that our laws of physics are only approximations at all. If we're in a simulation, this implies that with high probability either a) the laws of physics in the parent universe are not our own laws of physics (in which case the entire idea of ancestor simulations fails) or b) they are engaging in an extremely detailed simulation.

It depends on what you consider a simulation. Game of Life-style cellular automaton simulations are interesting in that they have a small number of initial rules and are mathematically consistent. However, using them for a large-scale project (for example, a whole planet populated with intelligent beings) would be really expensive in terms of the computing power required. If the hypothetical simulators' resources are in any way limited, then for purely economic reasons the majority of simulations would be of the other kind - the ones where stuff is approximated and all kinds of shortcuts are taken.
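For concreteness, the Life rules themselves fit in a few lines (a toy Python sketch; the grid size and probabilities are arbitrary), and the cost problem is visible immediately: an exact run scales with cells times steps, and that is for two states per cell with purely local rules.

```python
import numpy as np

def life_step(grid: np.ndarray) -> np.ndarray:
    """One update of Conway's Game of Life on a toroidal (wrap-around) grid."""
    # Count the eight neighbours of every cell by summing shifted copies of the grid.
    neighbours = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1)
        for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(grid.dtype)

# Even a toy world hints at the expense: a 1000x1000 grid stepped a million
# times is on the order of 10^12 cell updates - and that is for cells with
# two states, not atoms, let alone intelligent beings.
rng = np.random.default_rng(0)
world = (rng.random((1000, 1000)) < 0.2).astype(np.uint8)
world = life_step(world)  # one tick of the automaton
```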

And our simulating entities would be able to tell that someone was doing a deliberate experiment how?

Very easily - because a scientist doing an experiment talks about doing it. If the simulated beings are trying to run the LHC, one can emulate the beams, the detectors, the whole accelerator down to the atoms - or one can generate a collision event profile for a given detector, stick a tracing program on the scientist that waits for the moment when the scientist says "Ah... here is our data coming up", and then display the distribution on the screen in front of the scientist. The second method is quite a few orders of magnitude cheaper in terms of the computing power required, and the scientist in question sees the same picture in both cases.
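Schematically, the cheap version is something like this toy Python sketch (every class, function and number here is made up purely for illustration): the accelerator never exists anywhere in memory, and the event profile is only materialized at the moment the traced scientist looks at the screen.

```python
import numpy as np

rng = np.random.default_rng(42)

def collision_event_profile(n_events: int) -> np.ndarray:
    """Generate a plausible-looking invariant-mass spectrum directly, rather
    than simulating beams, magnets and detector atoms."""
    background = rng.exponential(scale=40.0, size=n_events)          # smooth falling background
    signal = rng.normal(loc=125.0, scale=2.0, size=n_events // 200)  # a small bump on top of it
    return np.concatenate([background, signal])

class TracedScientist:
    """Stand-in for the 'tracing program' attached to the scientist."""
    def __init__(self) -> None:
        self._cached_data = None

    def looks_at_screen(self) -> np.ndarray:
        # The data is materialized only at the moment of observation.
        if self._cached_data is None:
            self._cached_data = collision_event_profile(1_000_000)
        return self._cached_data

scientist = TracedScientist()
data = scientist.looks_at_screen()                      # "Ah... here is our data coming up"
counts, edges = np.histogram(data, bins=200, range=(0.0, 200.0))
```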