
Let's make a deal

-18 Mitchell_Porter 23 September 2010 12:59AM

At the start of 2010, I resolved to focus as much as possible on singularity-relevant issues. That resolution has produced three ongoing projects:

continue reading »

Dreams of AIXI

-1 jacob_cannell 30 August 2010 10:15PM

Implications of the Theory of Universal Intelligence

If you hold the AIXI theory of universal intelligence to be correct, that is, if you take it as a useful model of general intelligence at the quantitative limits, then you should take the Simulation Argument seriously.


AIXI shows us the structure of universal intelligence in the limit as computation approaches infinity.  Imagine that we had an infinite or near-infinite Turing machine.  There would then exist a relatively simple 'brute force' algorithm for optimal universal intelligence. 


Armed with such massive computation, we could take all of our current observational data and then perform a weighted search through the subspace of all possible programs that correctly predict this sequence (in this case, all the data we have accumulated to date about our small observable slice of the universe).  AIXI in its raw form is not computable (because of the halting problem), but the slightly modified time-limited version is, and it is still universal and optimal.
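
To make the idea concrete, here is a minimal Python sketch of such a weighted program search. It is only an illustration under simplifying assumptions: a real implementation would enumerate programs for a universal Turing machine, whereas here each 'program' is just a bit pattern that generates data by repeating itself, and the function names and length cutoff are invented for the example.

```python
from itertools import product

def generate(program, n):
    """Run a toy 'program': repeat its bit pattern to produce n output bits."""
    return [program[i % len(program)] for i in range(n)]

def predict_next(history, max_len=8):
    """Weight every toy program consistent with the observed history by 2^-length
    (the Occam-style prior) and return the induced probability that the next bit is 1."""
    weights = {0: 0.0, 1: 0.0}
    for length in range(1, max_len + 1):
        for program in product([0, 1], repeat=length):
            output = generate(program, len(history) + 1)
            if output[:len(history)] == list(history):    # program reproduces the data so far
                weights[output[-1]] += 2.0 ** (-length)   # shorter programs get more weight
    total = weights[0] + weights[1]
    return weights[1] / total if total else 0.5

# After observing an alternating sequence, the shortest consistent programs
# dominate the mixture and the predictor favors continuing the pattern.
print(predict_next([0, 1, 0, 1, 0]))   # prints roughly 0.82: the next bit is probably 1
```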


The philosophical implication is that running such an algorithm on an infinite Turing Machine would involve executing every program consistent with our observations, and so would have the interesting side effect of actually creating all such universes.

AIXI’s mechanics, based on Solomonoff Induction, bias it against complex programs with an exponential falloff ( 2^-l(p) ), a mechanism akin to the principle of Occam’s Razor.  The bias against longer (and thus more complex) programs lends strong support to the goal of string theorists, who are attempting to find a single short program that unifies all current physical theories into one compact description of our universe.  We must note that, to date, efforts towards this admirable (and well-justified) goal have not borne fruit.  We may actually find that the simplest algorithm that explains our universe is more ad hoc and complex than we would like it to be.  But leaving that aside, imagine that there is some relatively simple program that concisely explains our universe.
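
Written out, the prior referred to above is the Solomonoff mixture, a standard formula in which each program p receives a weight that falls off exponentially with its length l(p):

```latex
% Solomonoff prior: the probability of an observation string x is the total
% weight of all programs p that make the universal machine U output
% something beginning with x.
M(x) = \sum_{p \,:\, U(p) = x*} 2^{-l(p)}
```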

If we look at the history of the universe to date, from the Big Bang to our current moment in time, there appears to be a clear local telic evolutionary arrow towards greater X, where X is sometimes described as or associated with extropy, complexity, life, intelligence, computation, and so on.  It's also fairly clear that X (however quantified) is an exponential function of time.  Moore’s Law is a specific example of this greater pattern.


This leads to a reasonable inductive assumption; let us call it the reasonable assumption of progress: local extropy will continue to increase exponentially for the foreseeable future, and thus so will intelligence and computation (both physical computational resources and algorithmic efficiency). The trend it describes appears to be universal, a fundamental emergent property of our physics.


Simulations

If you accept that the reasonable assumption of progress holds, then AIXI implies that we almost certainly live in a simulation now.


As our future descendants expand in computational resources and intelligence, they will approach the limits of universal intelligence.  AIXI says that any such powerful universal intelligence, no matter what its goals or motivations, will create many simulations that are effectively pocket universes.  


The AIXI model proposes that simulation is the core of intelligence (with human-like thoughts being simply one approximate algorithm), and as you approach the universal limits, the simulations which universal intelligences necessarily employ will approach the fidelity of real universes - complete with all the entailed trappings such as conscious simulated entities.


The reasonable assumption of progress modifies our big-picture view of cosmology and the predicted history and future of the universe.  A compact physical theory of our universe (or multiverse), when run forward on a sufficient Universal Turing Machine, will lead not to one single universe/multiverse, but to an entire ensemble of such multiverses embedded within each other in something like a hierarchy of Matryoshka dolls.

The number of possible levels of embedding and the branching factor at each step can be derived from physics itself, and although such derivations are preliminary and necessarily involve some significant unknowns (mainly related to the final physical limits of computation), suffice it to say that we have sufficient evidence to believe that the branching factor is absolutely massive and that many levels of simulation embedding are possible.

Some seem to have an intrinsic bias against the idea based solely on its strangeness.

Another common mistake stems from the anthropomorphic bias: people tend to imagine the simulators as future versions of themselves.

The space of potential future minds is vast, and it is a failure of imagination on our part to assume that our descendants will be similar to us in details, especially when we have specific reasons to conclude that they will be vastly more complex.

Asking whether future intelligences will run simulations for entertainment or other purposes is not the right question, not even the right mode of thought.  They may or they may not; it is difficult to predict future goal systems.  But those aren’t the important questions anyway, as all universal intelligences will ‘run’ simulations, simply because that is precisely the core nature of intelligence itself.  As intelligence expands exponentially into the future, its simulations expand in quantity and fidelity.


The Ensemble of Multiverses


Some critics of the SA rationalize their way out by advancing a position of ignorance concerning the set of possible external universes our simulation may be embedded within.  The reasoning then concludes that since this set is essentially unknown, infinite, and uniformly distributed, the SA as such tells us nothing. These assumptions do not hold water.

Imagine our physical universe, and its minimal program encoding, as a point in a higher-dimensional space.  The entire aim of physics is in a sense related to AIXI itself: through physics we are searching for the simplest program that can consistently explain our observable universe.  As noted earlier, the SA then falls out naturally, because it appears that any universe of our type, when run forward, necessarily leads to a vast fractal hierarchy of embedded simulated universes.

At the apex is the base level of reality, and all the simulated universes below it correspond to slightly different points in the space of all potential universes, as each is a slight approximation of the original.  But would other points in the space of universe-generating programs also generate observed universes like our own?

We know that the fundamental constants in current physics are apparently well-tuned for life; thus our physics is a lone point in the topological space supporting complex life: even tiny displacements in any direction result in lifeless universes.  The topological space around our physics is thus sparse for life/complexity/extropy.  There may be other topological hotspots, and if you go far enough in some direction you will necessarily find other universes in Tegmark’s Ultimate Ensemble that support life.  However, AIXI tells us that intelligences in those universes will simulate universes similar to their own, and thus nothing like our universe.

On the other hand, we can expect our universe to be slightly different from its parent due to the constraints of simulation, and we may even eventually be able to discover evidence of the approximation itself.  There are some tentative hints in the long-standing failure to find a GUT of physics, and perhaps in the future we may find that our universe is an ad hoc approximation of a simpler (but more computationally expensive) GUT in the parent universe.


Alien Dreams

Our Milky Way galaxy is vast and old, consisting of hundreds of billions of stars, some of which are more than 13 billion years old, nearly three times the age of our sun.  We have direct evidence of technological civilization developing within 4 billion years from simple protozoans, but it is difficult to generalize past this single example.  However, we do now have mounting evidence that planets are common, that the biological precursors to life are probably common, that simple life may even have had a historical presence on Mars, and all signs are mounting in support of the principle of mediocrity: our solar system is not a precious gem, but is in fact a typical random sample.

If the evidence for the mediocrity principle continues to mount, it provides further strong support for the Simulation Argument.  If we are not the first technological civilization to have arisen, then technological civilization arose and achieved Singularity long ago, and we are thus astronomically more likely to be in an alien rather than a posthuman simulation.

What does this change?

The set of simulation possibilities can be subdivided into PHS (posthuman historical), AHS (alien historical), and AFS (alien future) simulations (posthuman future simulation being inconsistent).  If we discover that we are unlikely to be the first technological Singularity, we should assume AHS and AFS dominate.  For reasons beyond the scope of this post, I expect the AFS set to outnumber the AHS set.

Historical simulations would aim for historical fidelity, but future simulations would aim for fidelity to a 'what-if' scenario, considering some hypothetical action the alien simulating civilization could take.  In this scenario, the first civilization to reach technological Singularity in the galaxy would spread out, gather knowledge about the entire galaxy, and create a massive number of simulations.  It would use these in the same way that all universal intelligences do: to consider the future implications of potential actions.

What kinds of actions?  

The first-born civilization would presumably encounter many planets that already harbor life in various stages, along with planets that could potentially harbor life.  It would use forward simulations to predict the final outcome of future civilizations developing on these worlds.  It would then rate them according to some ethical/utilitarian theory (we don't even need to speculate on the criteria), and it would consider and evaluate potential interventions to change the future historical trajectory of that world: removing undesirable future civilizations, pushing other worlds towards desirable future outcomes, and so on.

At the moment it's hard to assign a priori weightings to the future vs. historical simulation possibilities, but the apparent age of the galaxy compared to the relative youth of our sun is a tentative hint that we live in a future simulation, and thus that our history has potentially been altered.

 

Positioning oneself to make a difference

5 Mitchell_Porter 18 August 2010 11:54PM

Last weekend, while this year's Singularity Summit took place in San Francisco, I was turning 40 in my Australian obscurity. 40 is old enough to be thinking that I should just pick a SENS research theme and work on it, and also move to wherever in the world is most likely to have the best future biomedicine (that might be Boston). But at least since the late 1990s, when Eliezer first showed up, I have perceived that superintelligence trumps life extension as a futurist issue. And since 2006, when I first grasped how something like CEV could be an answer to the problem of superintelligence, I've had it before me as a model of how the future could and should play out. I have "contrarian" ideas about how consciousness works, but they do not contradict any of the essential notions of seed AI and friendly AI; they only imply that those notions would need to be adjusted and fitted to the true ontology, whatever that may be.

So I think this is what I should be working on - not just the ontological subproblem, but all aspects of the problem. The question is, how to go about this. At the moment, I'm working on a lengthy statement of how I think a Friendly Singularity could be achieved - a much better version of my top-level posts here, along with new material. But the main "methodological" problem is economic and perhaps social - what can I live on while I do this, and where in the world and in society should I situate myself for maximum insight and productivity. That's really what this post is about.

The obvious answer is, apply to SIAI. I'm not averse to the idea, and on occasion I raise the possibility with them, but I have two reasons for hesitation.

The first is the problem of consciousness. I often talk about this in terms of vaguely specified ideas about quantum entanglement in the brain, but the really important part is the radical disjunction between the physical ontology of the natural sciences and the manifest nature of consciousness. I cannot emphasize enough that this is a huge gaping hole in the scientific understanding of the world, the equal of any gap in the scientific worldview that came before it, and that the standard "scientific" way of thinking about it is a form of property dualism, even if people won't admit this to themselves. All the quantum stuff you hear from me is just an idea about how to restore a type of monism. I actually think it's a conservative solution to a very big problem, but to believe that you would have to agree with me that the other solutions on offer can't work (as well as understanding just what it is that I propose instead).

This "reason for not applying to SIAI" leads to two sub-reasons. First, I'm not sure that the SIAI intellectual environment can accommodate my approach. Second, the problem with consciousness is of course not specific to SIAI, it is a symptom of the overall scientific zeitgeist, and maybe I should be working there, in the field of consciousness studies. If expert opinion changes, SIAI will surely notice, and so I should be trying to convince the neuroscientists, not the Friendly AI researchers.

The second top-level reason for hesitation is simply that SIAI doesn't have much money. If I can accomplish part of the shared agenda while supported by other means, that would be better. Mostly I think in terms of doing a PhD. A few years back I almost started one with Ben Goertzel as co-supervisor, which would have looked at implementing a CEV-like process in a toy physical model, but that fell through at my end. Lately I'm looking around again. In Australia we have David Chalmers and Marcus Hutter. I know Chalmers from my quantum-mind days in Arizona ten years ago, and I met with Hutter recently. The strong interdisciplinarity of my real agenda makes it difficult to see where I could work directly on the central task, but also implies that there are many fields (cognitive neuroscience, decision theory, various quantum topics) where I might be able to limp along with partial support from an institution.

So that's the situation. Are there any other ideas? (Private communications can go to mporter at gmail.)

A Less Wrong singularity article?

28 Kaj_Sotala 17 November 2009 02:15PM

Robin criticizes Eliezer for not having written up his arguments about the Singularity in a standard style and submitted them for publication. Others, too, make the same complaint: the arguments involved are covered over such a huge mountain of posts that it's impossible for most outsiders to seriously evaluate them. This is a problem for both those who'd want to critique the concept, and for those who tentatively agree and would want to learn more about it.

Since it appears (do correct me if I'm wrong!) that Eliezer doesn't currently consider it worth the time and effort to do this, why not enlist the LW community in summarizing his arguments the best we can and submit them somewhere once we're done? Minds and Machines will be having a special issue on transhumanism, cognitive enhancement and AI, with a deadline for submission in January; that seems like a good opportunity for the paper. Their call for papers is asking for submissions that are around 4,000 to 12,000 words.

The paper should probably

  • Briefly mention some of the previous work about AI being near enough to be worth consideration (Kurzweil, maybe Bostrom's paper on the subject, etc.), but not dwell on it; this is a paper on the consequences of AI.
  • Devote maybe a little less than half of its actual content to the issue of FOOM, providing arguments and references for building the case of a hard takeoff.
  • Devote the second half to discussing the question of FAI, with references to e.g. Joshua Greene's thesis and other relevant sources for establishing this argument. Carl Shulman says SIAI is already working on a separate paper on this, so it'd be better for us to concentrate merely on the FOOM aspect.
  • Build on the content of Eliezer's various posts, taking their primary arguments and making them stronger by reference to various peer-reviewed work.
  • Include as authors everyone who made major contributions to it and wants to be mentioned; certainly make Eliezer the lead author (again, assuming he doesn't object), since this is his work we're seeking to convert into more accessible form.

I have created a wiki page for the draft version of the paper. Anyone's free to edit.

How to get that Friendly Singularity: a minority view

12 Mitchell_Porter 10 October 2009 10:56AM

Note: I know this is a rationality site, not a Singularity Studies site. But the Singularity issue is ever in the background here, and the local focus on decision theory fits right into the larger scheme - see below.

There is a worldview which I have put together over the years, which is basically my approximation to Eliezer's master plan. It's not an attempt to reconstruct every last detail of Eliezer's actual strategy for achieving a Friendly Singularity, though I think it must have considerable resemblance to the real thing. It might be best regarded as Eliezer-inspired, or as "what my Inner Eliezer thinks". What I propose to do is to outline this quasi-mythical orthodoxy, this tenuous implicit consensus (tenuous consensus because there is in fact a great diversity of views in the world of thought about the Singularity, but implicit consensus because no-one else has a plan), and then state how I think it should be amended. The amended plan is the "minority view" promised in my title.

continue reading »
