Implications of the Theory of Universal Intelligence
If you hold the AIXI theory of universal intelligence to be correct - that is, a useful model of general intelligence at the quantitative limits - then you should take the Simulation Argument seriously.
AIXI shows us the structure of universal intelligence as computation approaches infinity. Imagine that we had an infinite or near-infinite Turing Machine. There would then exist a relatively simple 'brute force' optimal algorithm for universal intelligence.
Armed with such massive computation, we could take all of our current observational data and run a weighted search through the subspace of all possible programs that correctly predict this sequence (in this case, all the data we have accumulated to date about our small observable slice of the universe). AIXI in its raw form is not computable (because of the halting problem), but the slightly modified time-limited version (AIXItl) is, and it is still universal and optimal.
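A toy, runnable sketch of that weighted search, in the spirit of Solomonoff induction. Everything here is an illustrative assumption: real AIXI runs bitstring programs on a time-bounded universal Turing machine, whereas this stand-in interpreter just treats each program as a repeating bit pattern so the example stays self-contained:

```python
from itertools import product

def run_program(p, n, step_budget=1000):
    # Stand-in for a time-limited universal machine: interpret the
    # bitstring p as a repeating pattern and emit its first n bits.
    if not p:
        return None
    out, steps = [], 0
    while len(out) < n:
        for bit in p:
            out.append(bit)
            steps += 1
            if steps > step_budget:
                return None
    return out[:n]

def predict_next(observed, max_len=8):
    # Weight every short program by 2^-l(p), keep those whose output
    # reproduces the observed bits, and mix their next-bit predictions.
    weights = {0: 0.0, 1: 0.0}
    for length in range(1, max_len + 1):
        for p in product([0, 1], repeat=length):
            out = run_program(list(p), len(observed) + 1)
            if out is not None and out[:len(observed)] == observed:
                weights[out[-1]] += 2.0 ** -length
    total = sum(weights.values())
    return {bit: w / total for bit, w in weights.items()} if total else None

print(predict_next([1, 0, 1, 0, 1]))  # the short program '10' dominates: predicts 0
```

The shortest consistent program carries exponentially more weight than its longer rivals, which is exactly the mechanism described next.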
The philosophical implication is that running such an algorithm on an infinite Turing Machine would have the interesting side effect of actually creating all such universes.
AIXI’s mechanics, based on Solomonoff induction, bias against complex programs with an exponential falloff of 2^-l(p), a mechanism similar in effect to Occam’s Razor. This bias against longer (and thus more complex) programs lends strong support to the goal of string theorists, who are attempting to find a simple, short program that unifies all current physical theories into a single compact description of our universe. We must note that to date, efforts toward this admirable (and well-justified) goal have not borne fruit; we may find that the simplest algorithm that explains our universe is more ad hoc and complex than we would like it to be. But leaving that aside, imagine that there is some relatively simple program that concisely explains our universe.
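For reference, the textbook form of the Solomonoff prior behind that 2^-l(p) falloff (standard notation, nothing specific to this post): the prior probability of an observation sequence x sums the weights of every program p whose output on a universal machine U begins with x,

```latex
M(x) \;=\; \sum_{p \,:\, U(p)\,=\,x*} 2^{-\ell(p)}
```

where l(p) is the length of program p in bits, so a program one bit longer contributes half as much weight.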
If we look at the history of the universe to date, from the Big Bang to the present, there appears to be a clear local telic evolutionary arrow toward greater X, where X is variously described as or associated with extropy, complexity, life, intelligence, computation, and so on. It is also fairly clear that X (however quantified) is an exponential function of time; Moore’s Law is a specific example of this greater pattern.
This leads to a reasonable inductive assumption - call it the reasonable assumption of progress: local extropy will continue to increase exponentially for the foreseeable future, and thus so will intelligence and computation (both physical computational resources and algorithmic efficiency). This trend appears to be universal, a fundamental emergent property of our physics.
Simulations
If you accept that the reasonable assumption of progress holds, then AIXI implies that we almost certainly live in a simulation now.
As our future descendants expand in computational resources and intelligence, they will approach the limits of universal intelligence. AIXI says that any such powerful universal intelligence, no matter what its goals or motivations, will create many simulations that are, in effect, pocket universes.
The AIXI model proposes that simulation is the core of intelligence (human-like thought being simply one approximate algorithm), and as you approach the universal limits, the simulations that universal intelligences necessarily employ approach the fidelity of real universes - complete with all the entailed trappings, such as conscious simulated entities.
The reasonable assumption of progress modifies our big-picture view of cosmology and the predicted history and future of the universe. A compact physical theory of our universe (or multiverse), when run forward on a sufficient Universal Turing Machine, will lead not to one single universe/multiverse, but to an entire ensemble of such multiverses embedded within each other, something like a hierarchy of Matryoshka dolls.
The number of possible levels of embedding and the branching factor at each step can be derived from physics itself. Such derivations are preliminary and necessarily involve significant unknowns (mainly related to the final physical limits of computation), but suffice it to say that we have enough evidence to believe the branching factor is absolutely massive, and that many levels of simulation embedding are possible.
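To make the shape of that claim concrete, a toy calculation; the branching factor and depth below are purely illustrative assumptions, not derived values:

```python
def ensemble_size(branching, depth):
    # Universes in a hierarchy where each universe runs `branching`
    # child simulations, nested `depth` levels deep: b + b^2 + ... + b^d.
    return sum(branching ** k for k in range(1, depth + 1))

total = ensemble_size(branching=1000, depth=3)
print(total)            # 1001001000 simulated universes under one base reality
print(1 / (total + 1))  # ~1e-9: chance of being the single unsimulated apex
```

Even modest illustrative numbers leave almost every universe in the ensemble a simulated one.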
Some seem to have an intrinsic bias against the idea based solely on its strangeness.
Another common mistake stems from anthropomorphic bias: people tend to imagine the simulators as future versions of themselves.
The space of potential future minds is vast, and it is a failure of imagination on our part to assume that our descendants will be similar to us in details, especially when we have specific reasons to conclude that they will be vastly more complex.
Asking whether future intelligences will run simulations for entertainment or other purposes is not the right question - not even the right mode of thought. They may, they may not; it is difficult to predict future goal systems. But those aren't the important questions anyway, because all universal intelligences will 'run' simulations, simply because that is precisely the core nature of intelligence itself. As intelligence expands exponentially into the future, the simulations expand in quantity and fidelity.
The Ensemble of Multiverses
Some critics of the SA rationalize their way out by advancing a position of ignorance concerning the set of possible external universes our simulation may be embedded within. The reasoning concludes that since this set is essentially unknown, infinite, and uniformly distributed, the SA tells us nothing. These assumptions do not hold water.
Imagine our physical universe, and its minimal program encoding, as a point in a higher-dimensional space. The entire aim of physics is in a sense related to AIXI itself: through physics we are searching for the simplest program that can consistently explain our observable universe. As noted earlier, the SA then falls out naturally, because it appears that any universe of our type, when run forward, necessarily leads to a vast fractal hierarchy of embedded simulated universes.
At the apex is the base level of reality; all the simulated universes below it correspond to slightly different points in the space of all potential universes, as they are all slight approximations of the original. But would other points in the space of universe-generating programs also generate observed universes like our own?
We know that the fundamental constants of current physics are apparently fine-tuned for life, so our physics is a lone point in the topological space supporting complex life: even tiny displacements in any direction result in lifeless universes. The region around our physics is thus sparse for life/complexity/extropy. There may be other topological hotspots, and if you go far enough in some direction you will necessarily find other universes in Tegmark’s Ultimate Ensemble that support life. However, AIXI tells us that intelligences in those universes will simulate universes similar to their own - and thus nothing like ours.
On the other hand, we can expect our universe to be slightly different from its parent due to the constraints of simulation, and we may even eventually discover evidence of the approximation itself. There are some tentative hints in the long-standing failure to find a grand unified theory (GUT) of physics; perhaps in the future we will find that our universe is an ad hoc approximation of a simpler (but more computationally expensive) GUT that holds in the parent universe.
Alien Dreams
Our Milky Way galaxy is vast and old, consisting of hundreds of billions of stars, some of which are more than 13 billion years old - nearly three times the age of our sun. We have direct evidence of one technological civilization developing from simple single-celled life in about 4 billion years, but it is difficult to generalize from this single example. However, we now have mounting evidence that planets are common and that the biological precursors to life are probably common; simple life may even have had a historical presence on Mars. The signs increasingly support the principle of mediocrity: our solar system is not a precious gem, but a typical random sample.
If the evidence for the mediocrity principle continues to mount, it lends further strong support to the Simulation Argument. If we are not the first technological civilization to have arisen, then technological civilization arose and achieved Singularity long ago, and we are astronomically more likely to be in an alien rather than a posthuman simulation.
What does this change?
The set of simulation possibilities can be subdivided into PHS (posthuman historical), AHS (alien historical), and AFS (alien future) simulations (posthuman future simulation being inconsistent). If we discover that we are unlikely to be the first technological Singularity, we should assume AHS and AFS dominate. For reasons beyond the scope of this piece, I expect the AFS set to outnumber the AHS set.
Historical simulations would aim for historical fidelity, but future simulations would aim for fidelity to a 'what-if' scenario, considering some hypothetical action the alien simulating civilization could take. In this scenario, the first civilization to reach technological Singularity in the galaxy would spread out, gather knowledge about the entire galaxy, and create a massive number of simulations. It would use these in the same way that all universal intelligences do: to consider the future implications of potential actions.
What kinds of actions?
The first-born civilization would presumably encounter many planets that already harbor life in various stages, along with planets that could potentially harbor life. It would use forward simulations to predict the final outcome of future civilizations developing on these worlds. It would then rate them according to some ethical/utilitarian theory (we don't even need to speculate on the criteria), and it would consider and evaluate potential interventions to change the future historical trajectory of that world: removing undesirable future civilizations, pushing other worlds towards desirable future outcomes, and so on.
At the moment it is hard to assign a priori weights to future versus historical simulation possibilities, but the apparent age of the galaxy compared to the relative youth of our sun is a tentative hint that we live in a future simulation - and thus that our history has potentially been altered.
So from searching around, it looks like Roko was cosmically censored or something on this site. I don't know if that's supposed to be a warning (keep up this train of thought and you too will be censored) or just an observation - but again, I wasn't here, so I don't know much of anything about Roko or his posts.
We have sent robot probes to only a handful of locations in our solar system - a far cry from "most of the planets", unless you think the rest of the galaxy is a facade (and yes, I realize you probably meant the solar system, but still). And the jury is still out on Mars - it may have had simple life in the past; we don't have enough observational data yet. There may also be life on Europa or Titan. I'm not holding my breath, but it's worth mentioning.
Beware the hindsight bias. When we had limited observational data, it was very reasonable, given what we knew then, to suppose that other worlds were similar to our own. If you seriously want to weigh the principle of anthropomorphic uniqueness (that Earth is a rare, unique gem by every statistical measure) against the principle of mediocrity, the evidence for the latter is quite strong.
Without more observational data, we simply do not know the prior probability for life. But lacking detailed data, we should assume we are a random sample from some unknown distribution.
We used to think we were at the center of the galaxy; in fact we sit within the middle 95% interval. We used to think our system was unique in having planets; we now know that planets are typical. Our system is not especially old or young. By every measure we can currently take with the data we have, our system is average.
So you can say that life arises to civilization in only one system in a trillion on average, but at the moment it is extremely difficult to make any serious case for that, and the limited evidence strongly suggests otherwise. Based on our knowledge of our own solar system, we see life arising on 1 body out of a few dozen, with the possibility of 2 or 3 out of a few dozen (Mars, Europa, and Titan still carry some small probability).
Actually no, I do not find the cosmic-scale computer scenarios of Stross, Moravec, et al. to be realistic. I find them about as realistic as our descendants dismantling the universe to build Babbage's Difference Engines or giant steam clocks. But that analogy isn't very telling on its own.
If you look at what physics tells you about the fundamentals of computation, you can derive surprisingly powerful invariant predictions about future evolution from just a few simple principles - chiefly the light-speed limit on signal propagation and the thermodynamics of computation.
So armed with this knowledge, you can determine a priori that future computational hyperintelligences are highly unlikely to ever reach planetary size. They will be small, possibly even collapsing into singularities or exotic matter in their final form. They will necessarily have to get smaller to become more efficient and more intelligent. This isn't something one has a choice about: big is slow and dumb; small is fast and smart.
Very roughly, I expect that a full-blown runaway Singularity on Earth may end up capturing a large chunk of the available solar energy (although perhaps less than the biosphere captures, as fusion and more exotic potentials exist), but it would only ever need a small fraction of Earth's mass - probably less than humans currently use. And from thermodynamics we know that maximum efficiency is reached operating in the range of Earth's ambient temperature, and that would be something of a speed constraint.
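Both constraints can be put into rough numbers. The device sizes below are arbitrary assumptions chosen for contrast; the formulas (light-speed delay, the Landauer bound kT ln 2) are standard physics:

```python
import math

C = 3.0e8        # speed of light, m/s
K_B = 1.38e-23   # Boltzmann constant, J/K

def round_trip_delay(radius_m):
    # Minimum time for a signal to cross a computer of this radius and
    # return - the 'big is slow' constraint.
    return 2 * radius_m / C

print(round_trip_delay(6.4e6))  # Earth-sized mind: ~0.043 s, ~23 coherent 'clocks'/s
print(round_trip_delay(0.01))   # 1 cm mind: ~6.7e-11 s, ~15 GHz-scale coherence

def landauer_joules_per_bit(temp_kelvin):
    # Landauer bound: minimum energy to erase one bit at temperature T.
    return K_B * temp_kelvin * math.log(2)

print(landauer_joules_per_bit(300))  # ~2.9e-21 J per bit at ~300 K ambient
```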
Make no mistake, it certainly does, and this is just a matter of fact - unless one wants to argue definitions.
The computer you are using right now was created first as an approximate simulation in a mammalian cortex, which was later promoted to approximate simulations in computer models, until eventually it was simulated in very detailed, near molecular/quantum-level models, and then emulated (perfectly simulated) through numerous physical prototypes.
Literally everything around you was created through simulation in some form. You can't create anything without simulation - thought itself is a form of simulation.
If you are hard set against computationalism, it's probably not worth my energy to get into it (I assumed it as a given), but just to show my perspective a little:
Simulations of consciousness will create consciousness once we succeed in creating AGIs that are as intelligent as humans and objectively indistinguishable from them. At the moment we don't understand our own brain and its mechanisms of intelligence in enough detail to simulate them, and we don't yet have enough computational power to discover those mechanisms through brute evolutionary search. But that will change pretty soon.
Keep in mind that your consciousness - the essence of your intelligence - is itself a simulation, nothing more, nothing less.
Not at all. It requires space of only N plus whatever each program uses at runtime. You are thinking of time resources - those do scale exponentially with N. But no hyperintelligence will use pure AIXI; they will use universal hierarchical approximations (the mammalian cortex already does something like this), which have fantastically better scaling. But hold that thought, because your next line of argument brings us (indirectly) to an important agreement.
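A quick sketch of why the space claim holds, assuming the brute-force search enumerates candidate programs one at a time rather than materializing them all:

```python
def programs_up_to(n):
    # Lazily enumerate every bitstring program of length <= n.
    # There are 2^(n+1) - 2 of them, so *time* is exponential in n,
    # but only the current candidate is ever in memory: *space* is O(n).
    for length in range(1, n + 1):
        for i in range(2 ** length):
            yield format(i, "0{}b".format(length))

for p in programs_up_to(3):
    print(p)  # 0, 1, 00, 01, 10, 11, 000, ...
```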
Perfect optimal deterministic intelligence (absolute, deterministic, 100% future knowledge of everything) requires a computer with at least as much mass as the system you want to simulate, and AIXI provides an exponential-time brute-force algorithm to find the ultimate minimal program that perfectly simulates said system. That program will essentially be the ultimate theory of physics. But you only need to find that program once, and then forever after you can in theory simulate anything in linear time with a big enough quantum computer.
But you can only approach that ultimate, so if you want absolutely 100% accurate knowledge of how a physical system will evolve, you need to make the physical system itself. We already know this and use it throughout engineering.
First we create things in approximate simulations inside our mammalian cortices, creating and discarding a vast number of potential ideas; the best of these we simulate in ever more detail on computers, until eventually we physically create and test prototypes.
I think this is a very strong further argument that future hyperintelligences will not go around turning all of the universe into computronium. Not only would that be unnecessary and inefficient, it would destroy valuable information: they will want to preserve as much of the interesting stuff in the galaxy as possible.
But they will probably convert little chunks of dead matter here and there into hyperintelligences and use those to run countless approximate simulations (that is to say, hyperthought) of the interesting stuff they find, such as worlds with life.
Roko wasn't censored; he deleted everything he'd ever posted. I've independently confirmed this via contact with him outside LW.