The philosophical implication is that actually running such an algorithm on an infinite Turing Machine would have the interesting side effect of actually creating all such universes.
That's an interesting point! At least, it's more interesting than Tipler's way of arriving at that conclusion.
If you accept that the reasonable assumption of progress holds, then AIXI implies that we almost certainly live in a simulation now.
See my response to the claim that the anthropic argument suggests it is highly improbable that you would find yourself to be a hum...
The set of simulation possibilities can be subdivided into PHS (posthuman historical), AHS (alien historical), and AFS (alien future) simulations (as posthuman future simulation is inconsistent).
What these categories meant was not clear to me on first reading.
I currently understand AFS as something like aliens finding earlier humanity and trying to predict what we will do. AHS would be the result of aliens interacting with a more mature humanity and trying to deduce particulars about our origin, perhaps for use in an AFS.
If I have that right, PFS migh...
If you absolutely have to summarize the forbidden topic at least rot13 it and preface it with an appropriate warning.
I have a question. What does it mean for AIXI to be the optimal time bounded AI? If it's so great, why do people still bother with ANNs and SVMs and SOMs and KNNs and TLAs and T&As? My understanding of it is rather cloudy (as is my understanding of all but the last two of the above), so I'd appreciate clarification.
First of all, AIXI isn't actually "the optimal time bounded AI". What AIXI is "optimal" for is coming to correct conclusions from the smallest amount of data, and "optimal" here means "no other program does better than AIXI in at least one possible world without also doing worse in another".
Furthermore AIXI itself uses Solomonoff induction directly, and Solomonoff induction is uncomputable. (It can be approximated, though.)
AIXItl is the time-limited version of AIXI, but it amounts to "test all the programs that you can, find the best one, and use that" - and it's only "optimal" when compared against the programs that it can test, so it's not actually practical to use, either.
(At least, that's what I could gather from reading the PDF of the paper on AIXI. Could someone who knows what they're talking about correct any mistakes?)
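The "test all the programs you can, find the best one, and use that" idea can be sketched in miniature. The candidate "programs" and scoring rule below are illustrative stand-ins of my own, not Hutter's actual construction (which enumerates and proof-checks policies up to a length and time bound):

```python
# Toy sketch of the AIXItl flavor of search: enumerate a bounded set
# of candidate predictors, score each on past data, use the winner.
# The candidates here are hand-picked assumptions for illustration.

def candidates():
    """A tiny stand-in for 'all programs up to length l'."""
    yield ("always_0", lambda hist: "0")
    yield ("always_1", lambda hist: "1")
    yield ("repeat_last", lambda hist: hist[-1] if hist else "0")
    yield ("alternate", lambda hist: "1" if (not hist or hist[-1] == "0") else "0")

def best_predictor(history):
    """Pick the candidate that would have predicted the history best."""
    def score(pred):
        # Count correct next-bit predictions over every prefix.
        return sum(pred(history[:i]) == history[i] for i in range(len(history)))
    return max(candidates(), key=lambda c: score(c[1]))

name, pred = best_predictor("010101")
print(name, pred("010101"))  # → alternate 0
```

The real AIXItl is "optimal" only relative to the candidate set it can afford to test, which is exactly the weakness described above: the winner of this tiny tournament is only as good as the four entrants.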
because the runtime complexity of AIXI is astronomically larger than the universe is.
'Astronomically'? That's the first time I've seen that superlative inadequate for the job.
Memetic hazard warning. Decode first part first.
ROT13: Ebxb ceboyrz. Qb abg ernq hayrff lbh nyernql xabj nobhg gur Ebxb ceboyrz. Vs lbh qb xabj nqqvgvbany qnatre vf rkgerzryl hayvxryl.
ROT13: Nal cbffvoyr nethzrag sbe jul gur qnatre vf erny vf nyfb fbzrguvat gung znxrf gur qnatre zber erny. Zbfg crbcyr nera'g va qnatre va gur svefg cynpr orpnhfr gur qnatre qbrfa'g znxr frafr gb gurz. Fhccylvat gurz jvgu nethzragf jbhyq or pbhagrecebqhpgvir orpnhfr rvgure gurl fgvyy qba'g haqrefgnaq naq lbh evfx gurz fcernqvat gur nethzragf (va gung ertneq lbh unir cebir...
I feel much the same about this post as I did about Roko's Final Post. It's imaginative, it's original, it has an internal logic that manages to range from metaphysics to cosmology; it's good to have some crazy-bold big-picture thinking like this in the public domain; but it's still wrong, wrong, wrong. It's an artefact of its time rather than a glimpse of reality. The reason it's nonetheless interesting is that it's an attempt to grasp aspects of reality which are not yet understood in its time - and this is also why I can't prove it to be "wrong" in a deductive way. Instead, I can only oppose my postulates to the author's, and argue that mine make more sense.
First I want to give a historical example of human minds probing the implications of things new and unknown, which in a later time became familiar and known. The realization that the other planets were worlds like Earth, a realization we might date from Galileo forwards, opened the human imagination to the idea of other worlds in the sky. People began to ask themselves: what's on those other worlds, is there life, what's it like; what's the big picture, the logic of the situation. In the present day, when robot probes have been to most of the planets and we know them as beautiful but uninhabited landscapes, it may be hard to enter into the mindset of earlier centuries. Earthbound minds, knowing only the one planet, and seeing it to be inhabited, naturally thought of other worlds as inhabited too. Even before 20th-century science fiction, there was an obscure literature of speculation about the alien humanities living on the other planets, how their character might reflect their circumstance, and so forth. It may all seem strange, arbitrary, and even childish now, but it was a way of thinking which was natural to its time.
So, what is the aspect of reality, not yet understood in its time, which makes this article possible, in the same way that the knowledge that there were other worlds, nearby in the sky, made it possible to speculate about life on those worlds? There's obviously a bit of metaphysics at work in this essay, regarding the relationship between simulation and reality, metaphysics which is very zeitgeisty and not yet understood, and it's where I will focus my criticism subsequently.
But I would say that the shocking knowledge specific to our own time, that supplied the canvas on which a cosmology like this can be painted, is the realization that the matter of the universe could be used technologically, on a cosmic scale. I remember the shock of reading Stross's Accelerando and realizing that the planet Mercury really could be dismantled and turned into a cloud of computational elements. The abstract idea of astronomical bodies being turned into giant computers had been known to me for twenty years, but it was still shocking to realize viscerally that it was already manifestly a material possibility, right here in the reality where I live.
Stross's Mercury gets turned into a cloud of nanocomputers, and it might be argued that this is still vaporware, with many fundamental problems to be solved before it can confidently be said to be possible; but then just think of Mercury being turned into a quadrillion Athlon processors orbiting the sun. That would require a titanic industrial enterprise on the dark side of Mercury, and many engineering problems would have to be solved; but we do already know how to mine, how to fabricate chips, and how to travel through space. This modified version of Stross's scenario serves as my proof of concept for the idea of dismantling a planet and turning it into a computer (or a network of computers).
So, to repeat, the shocking discovery is the possibility of megascale (astronomical) engineering, with the construction of megascale computers and computer networks on a trans-solar scale being especially interesting and challenging. It appears to be materially possible for whole solar systems to be turned into computing devices, which could then communicate across interstellar distances and operate for geological periods of time. It's the further idea that this is the destiny of the universe - the galaxies to be turned into giant Internets - which provides the canvas for cosmo-computational speculation such as we see above.
Various reactions to this possibility exist. Some people embrace it because they have experienced the freedom and power of computation in the present, and they think that a whole universe turned to organized computation implies so much freedom and power that it transcends any previous concept of utopia. Some people will reject it as insanity - they just can't believe that anything like that could be possible. Some people will offer a more grudging, lukewarm rejection - sure it's possible, but do you really think we should do that; do you really think a wise, superior alien race would want to eat the universe; in their wisdom, wouldn't they know that growth isn't so great - etc. I don't believe the argument that technological civilizations will avoid doing this as a rule, out of a wise embrace of limits; but the idea of a universe transformed into "computronium", and especially the idea that any sufficiently advanced civilization will obviously do this, has a manic uniformity about it which makes me suspicious. However, I cannot deny that the vision of robot fleets traveling the galaxy and making Dyson spheres does appear to be a material and technological possibility.
So much for the analysis of where we stand intellectually - what we know, what we don't know, what we are now able to see as possible but do not know to be actual, likely, or necessary. What do I think of this particular vision of how all that computation will be used? I'm going to start with one of my competing postulates, which provides me with a major reason why I think Jacob's reasoning is radically wrong. Unfortunately it's a postulate which is not just at odds with his thinking, but with much of the thinking on this site; so be it. It simply is the postulate that simulation does not create things. Simulations of consciousness do not create consciousness, simulations of universes do not create subjectively inhabited universes. Using Jacob's terminology, the postulate is that consciousness is strictly a phenomenon occurring at the "base level of reality". You could have a brain in a vat wired up to a simulation within a simulation, in which case it might be experiencing events at two removes from the physical base; but there won't be any experience happening, there won't be anyone home, unless you have the right sort of physical process happening. Abstract computation is not enough.
OK, that's my main reason for dissenting from this argument, but that's definitely a minority opinion here. However, I can offer a few other considerations which affect its plausibility. Jacob writes:
Imagine that we had an infinite or near-infinite Turing Machine.
But we don't, nor does anyone living in a universe with physics like this one. There is a cosmological horizon which bounds the number of bits available, and a cosmological evolution which bounds the amount of time available. Just enumerating all programs of length n requires resources exponential in n; actually executing them in turn, according to the AIXI algorithm, will be even more computationally intensive. The number of operations which can be executed in our future light-cone is actually not that big, when we start looking at such exponentials of exponentials. This sort of universe isn't even big enough to simulate all possible stars.
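The scale mismatch can be checked with a back-of-the-envelope calculation. The 10^120 figure below is Seth Lloyd's rough estimate of the total operations the observable universe could have performed over its history, taken here as an assumption:

```python
# How long can a program be before merely *listing* all programs of
# that length exhausts the universe's computational budget?
# 10^120 ops is Lloyd's rough estimate for the observable universe.

LIGHT_CONE_OPS = 10 ** 120

def programs_up_to(n_bits):
    """Number of distinct bitstring programs of length <= n_bits."""
    return 2 ** (n_bits + 1) - 2

# Charge one operation per program (ignoring the far larger cost of
# actually running each one) and find where the budget runs out.
n = 1
while programs_up_to(n) <= LIGHT_CONE_OPS:
    n += 1
print(n)  # → 398: programs longer than this can't even all be listed
```

A few hundred bits is nowhere near enough to encode a base-level simulation of standard-model physics plus initial conditions, which is the point: the brute-force search never reaches programs of the relevant size.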
The implication seems to be that if our existence has been coughed up by an AIXI-like brute-force simulation occurring in a universe whose base-level physics is like the physics we see (let's ignore for the moment my skepticism about functionalism), we can't be living in a simulation of base-level physics - certainly not a base-level simulation of a whole universe. That is way too big a program to ever be encountered in a brute-force search of program space, occurring in a universe this small. We must be living in some sketchy, partial, approximate virtual reality, big enough to create these appearances and not much else.
If we suppose that the true base-level physics of the ultimate reality might be quite different to that in our simulated universe, then this counterargument doesn't work - but in that case, we are no longer talking about "ancestor simulations", we are just talking about brute-force calculations occurring in a possible universe of completely unknown physics. In fact, although Jacob proposes that a universe like ours, run forward, should produce simulations of itself, the argument here leads in the other direction: whatever the base-level physics of reality, it isn't the physics of the standard model and the big-bang cosmology, because that universe isn't big enough to generically produce such simulations.
I will repeat my contention that I don't believe in functionalism/simulationism anyway, but even if one adopts that premise, there needs to be a lot more thought about the sorts of universes one thinks exist in the multiverse, and about the "demographics" of the computations occurring in them. This argument from AIXI would be neat if it worked, because AIXI's optimality suggests it should be showing up everywhere that Vast computation occurs, and its universality suggests that the same pocket universes should be showing up wherever it runs on a Vast scale. But the conditions for Vast enough computation are not automatically realized, not even in a universe like the one that real-world physics postulates; so one would need to ask what sort of possible worlds contain sufficiently Vast computation, how common in the multiverse they are, and how often their Vast resources will actually get used in a brute-force way.
I feel much the same about this post as I did about Roko's Final Post.
So from searching around, it looks like Roko was cosmically censored or something on this site. I don't know if that's supposed to be a warning (if you keep up this train of thought, you too will be censored) or just an observation - but again, I wasn't here, so I don't know much of anything about Roko or his posts.
Implications of the Theory of Universal Intelligence
If you hold the AIXI theory of universal intelligence to be correct, that is, if it is a useful model for general intelligence at the quantitative limits, then you should take the Simulation Argument seriously.
AIXI shows us the structure of universal intelligence as computation approaches infinity. Imagine that we had an infinite or near-infinite Turing Machine. There then exists a relatively simple 'brute force' optimal algorithm for universal intelligence.
Armed with such massive computation, we could just take all of our current observational data and then run a particular weighted search through the space of all possible programs for those that correctly predict this sequence (in this case all the data we have accumulated to date about our small observable slice of the universe). AIXI in raw form is not computable (because of the halting problem), but the slightly modified time-limited version is, and this is still universal and optimal.
The philosophical implication is that actually running such an algorithm on an infinite Turing Machine would have the interesting side effect of actually creating all such universes.
AIXI’s mechanics, based on Solomonoff Induction, bias against complex programs with an exponential falloff (2^-l(p)), a mechanism similar to the principle of Occam’s Razor. The bias against longer (and thus more complex) programs lends strong support to the goal of string theorists, who are attempting to find a simple, shorter program that can unify all current physical theories into a single compact description of our universe. We must note that to date, efforts towards this admirable (and well-justified) goal have not borne fruit. We may actually find that the simplest algorithm that explains our universe is more ad-hoc and complex than we would desire it to be. But leaving that aside, imagine that there is some relatively simple program that concisely explains our universe.
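To illustrate the 2^-l(p) weighting, here is a toy, assumption-laden sketch of Solomonoff-style prediction. Repeating bit patterns stand in for programs on a universal Turing machine (real Solomonoff induction runs all programs and is uncomputable), but the exponential prior falloff is the genuine article:

```python
# Toy Solomonoff-style induction: hypotheses are repeating bit
# patterns; a pattern p of length l(p) gets prior weight 2^-l(p),
# the exponential falloff described above. This hypothesis class is
# an illustrative simplification, not the full space of programs.

from itertools import product

def hypotheses(max_len):
    """All repeating-pattern 'programs' up to max_len bits."""
    for length in range(1, max_len + 1):
        for bits in product("01", repeat=length):
            yield "".join(bits)

def predict_next(observed, max_len=8):
    """Weighted vote over hypotheses consistent with the data."""
    weights = {"0": 0.0, "1": 0.0}
    for p in hypotheses(max_len):
        generated = p * (len(observed) // len(p) + 2)
        if generated.startswith(observed):  # consistent with the data so far
            weights[generated[len(observed)]] += 2.0 ** -len(p)
    return max(weights, key=weights.get), weights

guess, w = predict_next("010101")
print(guess)  # → 0 (the short pattern "01" dominates the posterior)
```

The shortest consistent pattern dominates the prediction, which is exactly the Occam-like behavior the paragraph above describes: longer explanations are penalized exponentially, not forbidden.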
If we look at the history of the universe to date, from the Big Bang to our current moment in time, there appears to be a clear local telic evolutionary arrow towards greater X, where X is sometimes described as or associated with extropy, complexity, life, intelligence, computation, and so on. It’s also fairly clear that X (however quantified) is an exponential function of time. Moore’s Law is a specific example of this greater pattern.
This leads to a reasonable inductive assumption, let us call it the reasonable assumption of progress: local extropy will continue to increase exponentially for the foreseeable future, and thus so will intelligence and computation (both physical computational resources and algorithmic efficiency). The reasonable assumption of progress appears to be a universal trend, a fundamental emergent property of our physics.
Simulations
If you accept that the reasonable assumption of progress holds, then AIXI implies that we almost certainly live in a simulation now.
As our future descendants expand in computational resources and intelligence, they will approach the limits of universal intelligence. AIXI says that any such powerful universal intelligence, no matter what its goals or motivations, will create many simulations which are effectively pocket universes.
The AIXI model proposes that simulation is the core of intelligence (with human-like thoughts being simply one approximate algorithm), and as you approach the universal limits, the simulations which universal intelligences necessarily employ will approach the fidelity of real universes - complete with all the entailed trappings such as conscious simulated entities.
The reasonable assumption of progress modifies our big-picture view of cosmology and the predicted history and future of the universe. A compact physical theory of our universe (or multiverse), when run forward on a sufficient Universal Turing Machine, will lead not to one single universe/multiverse, but to an entire ensemble of such multiverses embedded within each other in something like a hierarchy of Matryoshka dolls.
The number of possible levels of embedding and the branching factor at each step can be derived from physics itself. Although such derivations are preliminary and necessarily involve some significant unknowns (mainly related to the final physical limits of computation), suffice it to say that we have sufficient evidence to believe that the branching factor is absolutely massive, and that many levels of simulation embedding are possible.
Some seem to have an intrinsic bias against the idea based solely on its strangeness.
Another common mistake stems from anthropomorphic bias: people tend to imagine the simulators as future versions of themselves.
The space of potential future minds is vast, and it is a failure of imagination on our part to assume that our descendants will be similar to us in details, especially when we have specific reasons to conclude that they will be vastly more complex.
Asking whether future intelligences will run simulations for entertainment or other purposes is not the right question, nor even the right mode of thought. They may, they may not; it is difficult to predict future goal systems. But those aren’t important questions anyway, as all universal intelligences will ‘run’ simulations, simply because that is precisely the core nature of intelligence itself. As intelligence expands exponentially into the future, the simulations expand in quantity and fidelity.
The Ensemble of Multiverses
Some critics of the SA rationalize their way out by advancing a position of ignorance concerning the set of possible external universes our simulation may be embedded within. The reasoning then concludes that since this set is essentially unknown, infinite, and uniformly distributed, the SA as such tells us nothing. These assumptions do not hold water.
Imagine our physical universe, and its minimal program encoding, as a point in a higher multi-dimensional space. The entire aim of physics in a sense is related to AIXI itself: through physics we are searching for the simplest program that can consistently explain our observable universe. As noted earlier, the SA then falls out naturally, because it appears that any universe of our type, when run forward, necessarily leads to a vast fractal hierarchy of embedded simulated universes.
At the apex is the base level of reality and all the other simulated universes below it correspond to slightly different points in the space of all potential universes - as they are all slight approximations of the original. But would other points in the space of universe-generating programs also generate observed universes like our own?
We know that the fundamental constants in the current physics are apparently well-tuned for life, thus our physics is a lone point in the topological space supporting complex life: even just tiny displacements in any direction result in lifeless universes. The topological space around our physics is thus sparse for life/complexity/extropy. There may be other topological hotspots, and if you go far enough in some direction you will necessarily find other universes in Tegmark’s Ultimate Ensemble that support life. However, AIXI tells us that intelligences in those universes will simulate universes similar to their own, and thus nothing like our universe.
On the other hand we can expect our universe to be slightly different from its parent due to the constraints of simulation, and we may even eventually be able to discover evidence of the approximation itself. There are some tentative hints from the long-standing failure to find a GUT of physics, and perhaps in the future we may find our universe is an ad-hoc approximation of a simpler (but more computationally expensive) GUT theory in the parent universe.
Alien Dreams
Our Milky Way galaxy is vast and old, consisting of hundreds of billions of stars, some of which are more than 13 billion years old, almost three times the age of our sun. We have direct evidence of technological civilization developing in 4 billion years from simple protozoans, but it is difficult to generalize past this single example. However, we do now have mounting evidence that planets are common, the biological precursors to life are probably common, simple life may even have had a historical presence on Mars, and all signs are mounting to support the principle of mediocrity: that our solar system is not a precious gem, but is in fact a typical random sample.
If the evidence for the mediocrity principle continues to mount, it provides further strong support for the Simulation Argument. If we are not the first technological civilization to have arisen, then technological civilization arose and achieved Singularity long ago, and we are thus astronomically more likely to be in an alien rather than a posthuman simulation.
What does this change?
The set of simulation possibilities can be subdivided into PHS (posthuman historical), AHS (alien historical), and AFS (alien future) simulations (as posthuman future simulation is inconsistent). If we discover that we are unlikely to be the first technological Singularity, we should assume AHS and AFS dominate. For reasons beyond the scope of this post, I imagine that the AFS set will outnumber the AHS set.
Historical simulations would aim for historical fidelity, but future simulations would aim for fidelity to a 'what-if' scenario, considering some hypothetical action the alien simulating civilization could take. In this scenario, the first civilization to reach technological Singularity in the galaxy would spread out, gather knowledge about the entire galaxy, and create a massive number of simulations. It would use these in the same way that all universal intelligences do: to consider the future implications of potential actions.
What kinds of actions?
The first-born civilization would presumably encounter many planets that already harbor life in various stages, along with planets that could potentially harbor life. It would use forward simulations to predict the final outcome of future civilizations developing on these worlds. It would then rate them according to some ethical/utilitarian theory (we don't even need to speculate on the criteria), and it would consider and evaluate potential interventions to change the future historical trajectory of that world: removing undesirable future civilizations, pushing other worlds towards desirable future outcomes, and so on.
At the moment it's hard to assign a priori weightings to future vs historical simulation possibilities, but the apparent age of the galaxy compared to the relative youth of our sun is a tentative hint that we live in a future simulation, and thus that our history has potentially been altered.