
Dreams of AIXI

-1 Post author: jacob_cannell 30 August 2010 10:15PM

Implications of the Theory of Universal Intelligence

If you hold the AIXI theory of universal intelligence to be correct, that is, that it is a useful model of general intelligence at the quantitative limits, then you should take the Simulation Argument seriously.


AIXI shows us the structure of universal intelligence as computation approaches infinity.  Imagine that we had an infinite or near-infinite Turing Machine.  There then exists a relatively simple 'brute force' optimal algorithm for universal intelligence. 


Armed with such massive computation, we could take all of our current observational data and then run a weighted search through the subspace of all possible programs that correctly predict this sequence (in this case, all the data we have accumulated to date about our small observable slice of the universe).  AIXI in its raw form is not computable (because of the halting problem), but the slightly modified time-limited version (AIXItl) is, and it is still universal and optimal.


The philosophical implication is that actually running such an algorithm on an infinite Turing Machine would have the interesting side effect of actually creating all such universes.

AIXI’s mechanics, based on Solomonoff Induction, bias against complex programs with an exponential falloff ( 2^-l(p) ), a mechanism similar in spirit to Occam’s Razor.  The bias against longer (and thus more complex) programs lends strong support to the goal of string theorists, who are attempting to find a simple, shorter program that unifies all current physical theories into a single compact description of our universe.  We must note that, to date, efforts toward this admirable (and well-justified) goal have not borne fruit.  We may actually find that the simplest algorithm that explains our universe is more ad-hoc and complex than we would desire it to be.  But leaving that aside, imagine that there is some relatively simple program that concisely explains our universe.
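
To make the weighting scheme concrete, here is a minimal toy sketch of Solomonoff-style prediction. It is not AIXI: the "programs" are just repeating bit-patterns standing in for arbitrary programs, and the pattern-length cap and example sequence are invented for illustration. The only piece taken from the text is the 2^-l(p) prior itself.

```python
# Toy sketch of Solomonoff-style weighted prediction (not real AIXI).
# The "programs" here are repeating bit-patterns standing in for arbitrary
# programs; each pattern p gets prior weight 2^-l(p), we keep only the
# patterns that reproduce the observed data, and we predict by weighted vote.
from itertools import product

def consistent(pattern, observed):
    """True if repeating `pattern` forever reproduces every observed bit."""
    return all(observed[i] == pattern[i % len(pattern)] for i in range(len(observed)))

def predict_next(observed, max_len=8):
    votes = {0: 0.0, 1: 0.0}
    for length in range(1, max_len + 1):
        for pattern in product((0, 1), repeat=length):
            if consistent(pattern, observed):
                weight = 2.0 ** -length                      # Occam bias: shorter is heavier
                votes[pattern[len(observed) % length]] += weight
    total = votes[0] + votes[1]
    return {bit: v / total for bit, v in votes.items()}

print(predict_next([1, 0, 1, 0, 1, 0]))   # ~96% of the weight lands on 1, driven by the short pattern "10"
```

The shortest consistent hypothesis dominates the posterior, which is the whole content of the 2^-l(p) weighting; real AIXI replaces the toy pattern language with all programs and adds an expected-reward-maximizing action step on top.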

If we look at the history of the universe to date, from the Big Bang to the present moment, there appears to be a clear local telic evolutionary arrow toward greater X, where X is variously described as or associated with extropy, complexity, life, intelligence, computation, and so on.  It's also fairly clear that X (however quantified) is an exponential function of time.  Moore’s Law is a specific example of this greater pattern.


This leads to a reasonable inductive assumption, let us call it the reasonable assumption of progress: local extropy will continue to increase exponentially for the foreseeable future, and thus so will intelligence and computation (both physical computational resources and algorithmic efficiency). The trend this assumption describes appears to be universal, a fundamental emergent property of our physics.


Simulations

If you accept that the reasonable assumption of progress holds, then AIXI implies that we almost certainly live in a simulation now.


As our future descendants expand in computational resources and intelligence, they will approach the limits of universal intelligence.  AIXI says that any such powerful universal intelligence, no matter what its goals or motivations, will create many simulations which effectively are pocket universes.  


The AIXI model proposes that simulation is the core of intelligence (with human-like thoughts being simply one approximate algorithm), and as you approach the universal limits, the simulations which universal intelligences necessarily employ will approach the fidelity of real universes - complete with all the entailed trappings such as conscious simulated entities.


The reasonable assumption of progress modifies our big-picture view of cosmology and the predicted history and future of the universe.  A compact physical theory of our universe (or multiverse), when run forward on a sufficient Universal Turing Machine, will lead not to one single universe/multiverse, but to an entire ensemble of such multiverses embedded within each other in something like a hierarchy of Matryoshka dolls.

The number of possible levels of embedding and the branching factor at each step can be derived from physics itself.  Although such derivations are preliminary and necessarily involve some significant unknowns (mainly related to the final physical limits of computation), suffice it to say that we have good reason to believe the branching factor is absolutely massive and that many levels of simulation embedding are possible.
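
As a deliberately crude illustration of how such a derivation might be set up, here is a sketch in which every number (the parent's compute budget, the fraction devoted to simulation, the per-child cost, the per-observer cost) is an assumption invented for the example rather than a figure from the post or from physics:

```python
# Crude nesting/branching sketch; all parameters below are made-up illustrative values.
def nesting(budget_ops, rho=0.1, eps=1e-12, observer_cost=1e16):
    """rho: fraction of compute spent on simulation; eps: each child's share of the
    parent's budget; observer_cost: assumed ops/s to sustain one simulated observer."""
    branching = rho / eps                     # how many child universes fit in the simulation budget
    levels = 0
    while budget_ops * eps > observer_cost:   # can this level still host a child with observers?
        budget_ops *= eps                     # a child only inherits its slice of the parent
        levels += 1
    return levels, branching

print(nesting(budget_ops=1e50))   # -> (2, 1e11) under these made-up numbers
```

Even with these invented inputs the qualitative shape matches the claim above: the branching factor comes out enormous while the nesting depth stays modest.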

Some seem to have an intrinsic bias against the idea based solely on its strangeness.

Another common mistake stems from anthropomorphic bias: people tend to imagine the simulators as future versions of themselves.

The space of potential future minds is vast, and it is a failure of imagination on our part to assume that our descendants will be similar to us in details, especially when we have specific reasons to conclude that they will be vastly more complex.

Asking whether future intelligences will run simulations for entertainment or other purposes is not the right question, nor even the right mode of thought.  They may, they may not; it is difficult to predict future goal systems.  But those aren’t the important questions anyway, as all universal intelligences will ‘run’ simulations, simply because that is precisely the core nature of intelligence itself.  As intelligence expands exponentially into the future, the simulations expand in quantity and fidelity.


The Ensemble of Multiverses


Some critics of the SA rationalize their way out by advancing a position of ignorance concerning the set of possible external universes our simulation may be embedded within.  The reasoning then concludes that since this set is essentially unknown, infinite, and uniformly distributed, the SA as such tells us nothing. These assumptions do not hold water.

Imagine our physical universe, and its minimal program encoding, as a point in a higher multi-dimensional space.  The entire aim of physics is in a sense related to AIXI itself: through physics we are searching for the simplest program that can consistently explain our observable universe.  As noted earlier, the SA then falls out naturally, because it appears that any universe of our type, when run forward, necessarily leads to a vast fractal hierarchy of embedded simulated universes.

At the apex is the base level of reality and all the other simulated universes below it correspond to slightly different points in the space of all potential universes - as they are all slight approximations of the original.  But would other points in the space of universe-generating programs also generate observed universes like our own?

We know that the fundamental constants of current physics appear well-tuned for life; our physics is thus a lone point in the topological space supporting complex life: even tiny displacements in any direction result in lifeless universes.  The space around our physics is therefore sparse in life/complexity/extropy.  There may be other topological hotspots, and if you go far enough in some direction you will necessarily find other universes in Tegmark’s Ultimate Ensemble that support life.  However, AIXI tells us that intelligences in those universes will simulate universes similar to their own, and thus nothing like our universe.

On the other hand we can expect our universe to be slightly different from its parent due to the constraints of simulation, and we may even eventually be able to discover evidence of the approximation itself.  There are some tentative hints from the long-standing failure to find a GUT of physics, and perhaps in the future we may find our universe is an ad-hoc approximation of a simpler (but more computationally expensive) GUT theory in the parent universe.


Alien Dreams

Our Milky Way galaxy is vast and old, consisting of hundreds of billions of stars, some of which are more than 13 billion years old, nearly three times the age of our sun.  We have direct evidence of technological civilization developing in 4 billion years from simple single-celled life, but it is difficult to generalize past this single example.  However, we do now have mounting evidence that planets are common, that the biological precursors to life are probably common, that simple life may even have had a historical presence on Mars, and all signs are mounting to support the principle of mediocrity: that our solar system is not a precious gem, but is in fact a typical random sample.

If the evidence for the mediocrity principle continues to mount, it provides further strong support for the Simulation Argument.  If we are not the first technological civilization to have arisen, then technological civilization arose and achieved Singularity long ago, and we are thus astronomically more likely to be in an alien rather than a posthuman simulation.

What does this change?

The set of simulation possibilities can be subdivided into PHS (posthuman historical), AHS (alien historical), and AFS (alien future) simulations (as posthuman future simulation is inconsistent).  If we discover that we are unlikely to be the first technological Singularity, we should assume AHS and AFS dominate.  For reasons beyond the scope of this post, I expect the AFS set to outnumber the AHS set.

Historical simulations would aim for historical fidelity, but future simulations would aim for fidelity to a 'what-if' scenario, considering some hypothetical action the alien simulating civilization could take.  In this scenario, the first civilization to reach technological Singularity in the galaxy would spread out, gather knowledge about the entire galaxy, and create a massive number of simulations.  It would use these in the same way that all universal intelligences do: to consider the future implications of potential actions.

What kinds of actions?  

The first-born civilization would presumably encounter many planets that already harbor life in various stages, along with planets that could potentially harbor life.  It would use forward simulations to predict the final outcome of future civilizations developing on these worlds.  It would then rate them according to some ethical/utilitarian theory (we don't even need to speculate on the criteria), and it would consider and evaluate potential interventions to change the future historical trajectory of that world: removing undesirable future civilizations, pushing other worlds towards desirable future outcomes, and so on.

At the moment it's hard to assign a priori weighting to future vs historical simulation possibilities, but the apparent age of the galaxy compared to the relative youth of our sun is a tentative hint that we live in a future simulation, and thus that our history has potentially been altered.

 

Comments (145)

Comment author: Mitchell_Porter 31 August 2010 09:55:29AM 14 points [-]

I feel much the same about this post as I did about Roko's Final Post. It's imaginative, it's original, it has an internal logic that manages to range from metaphysics to cosmology; it's good to have some crazy-bold big-picture thinking like this in the public domain; but it's still wrong, wrong, wrong. It's an artefact of its time rather than a glimpse of reality. The reason it's nonetheless interesting is that it's an attempt to grasp aspects of reality which are not yet understood in its time - and this is also why I can't prove it to be "wrong" in a deductive way. Instead, I can only oppose my postulates to the author's, and argue that mine make more sense.

First I want to give a historical example of human minds probing the implications of things new and unknown, which in a later time became familiar and known. The realization that the other planets were worlds like Earth, a realization we might date from Galileo forwards, opened the human imagination to the idea of other worlds in the sky. People began to ask themselves: what's on those other worlds, is there life, what's it like; what's the big picture, the logic of the situation. In the present day, when robot probes have been to most of the planets and we know them as beautiful but uninhabited landscapes, it may be hard to enter into the mindset of earlier centuries. Earthbound minds, knowing only the one planet, and seeing it to be inhabited, naturally thought of other worlds as inhabited too. Even before 20th-century science fiction, there was an obscure literature of speculation about the alien humanities living on the other planets, how their character might reflect their circumstance, and so forth. It may all seem strange, arbitrary, and even childish now, but it was a way of thinking which was natural to its time.

So, what is the aspect of reality, not yet understood in its time, which makes this article possible, in the same way that the knowledge that there were other worlds, nearby in the sky, made it possible to speculate about life on those worlds? There's obviously a bit of metaphysics at work in this essay, regarding the relationship between simulation and reality, metaphysics which is very zeitgeisty and not yet understood, and it's where I will focus my criticism subsequently.

But I would say that the shocking knowledge specific to our own time, that supplied the canvas on which a cosmology like this can be painted, is the realization that the matter of the universe could be used technologically, on a cosmic scale. I remember the shock of reading Stross's Accelerando and realizing that the planet Mercury really could be dismantled and turned into a cloud of computational elements. The abstract idea of astronomical bodies being turned into giant computers had been known to me for twenty years, but it was still shocking to realize viscerally that it was already manifestly a material possibility, right here in the reality where I live.

Stross's Mercury gets turned into a cloud of nanocomputers, and it might be argued that this is still vaporware, with many fundamental problems to be solved before it can confidently be said to be possible; but just think of Mercury being turned into a quadrillion Athlon processors, then, orbiting the sun. That would require a titanic industrial enterprise on the dark side of Mercury, and many engineering problems would have to be solved; but we do already know how to mine, how to fabricate chips, how to travel through space. This modified version of Stross's scenario serves as my proof of concept for the idea of dismantling a planet and turning it into a computer (or a network of computers).

So, to repeat, the shocking discovery is the possibility of megascale (astronomical) engineering, with the construction of megascale computers and computer networks on a trans-solar scale being especially interesting and challenging. It appears to be materially possible for whole solar systems to be turned into computing devices, which could then communicate across interstellar distances and operate for geological periods of time. It's the further idea that this is the destiny of the universe - the galaxies to be turned into giant Internets - which provides the canvas for cosmo-computational speculation such as we see above.

Various reactions to this possibility exist. Some people embrace it because they have experienced the freedom and power of computation in the present, and they think that a whole universe turned to organized computation implies so much freedom and power that it transcends any previous concept of utopia. Some people will reject it as insanity - they just can't believe that anything like that could be possible. Some people will offer a more grudging, lukewarm rejection - sure it's possible, but do you really think we should do that; do you really think a wise, superior alien race would want to eat the universe; in their wisdom, wouldn't they know that growth isn't so great - etc. I don't believe the argument that technological civilizations will avoid doing this as a rule, out of a wise embrace of limits; but the idea of a universe transformed into "computronium", and especially the idea that any sufficiently advanced civilization will obviously do this, has a manic uniformity about it which makes me suspicious. However, I cannot deny that the vision of robot fleets traveling the galaxy and making Dyson spheres does appear to be a material and technological possibility.

So much for the analysis of where we stand intellectually - what we know, what we don't know, what we are now able to see as possible but do not know to be actual, likely, or necessary. What do I think of this particular vision of how all that computation will be used? I'm going to start with one of my competing postulates, which provides me with a major reason why I think Jacob's reasoning is radically wrong. Unfortunately it's a postulate which is not just at odds with his thinking, but with much of the thinking on this site; so be it. It simply is the postulate that simulation does not create things. Simulations of consciousness do not create consciousness, simulations of universes do not create subjectively inhabited universes. Using Jacob's terminology, the postulate is that consciousness is strictly a phenomenon occurring at the "base level of reality". You could have a brain in a vat wired up to a simulation within a simulation, in which case it might be experiencing events at two removes from the physical base; but there won't be any experience happening, there won't be anyone home, unless you have the right sort of physical process happening. Abstract computation is not enough.

OK, that's my main reason for dissenting from this argument, but that's definitely a minority opinion here. However, I can offer a few other considerations which affect its plausibility. Jacob writes:

Imagine that we had an infinite or near-infinite Turing Machine.

But we don't, nor does anyone living in a universe with physics like this one. There is a cosmological horizon which bounds the amount of bits available, there is a cosmological evolution which bounds the amount of time available. Just enumerating all programs of length n requires memory resources exponential in n; actually executing them in turn, according to the AIXI algorithm, will be even more computationally intensive. The number of operations which can be executed in our future light-cone is actually not that big, when we start looking at such exponentials of exponentials. This sort of universe isn't even big enough to simulate all possible stars.

The implication seems to be that if our existence has been coughed up by an AIXI-like brute-force simulation occurring in a universe whose base-level physics is like the physics we see (let's ignore for the moment my skepticism about functionalism), we can't be living in a simulation of base-level physics - certainly not a base-level simulation of a whole universe. That is way too big a program to ever be encountered in a brute-force search of program space, occurring in a universe this small. We must be living in some sketchy, partial, approximate virtual reality, big enough to create these appearances and not much else.

If we suppose that the true base-level physics of the ultimate reality might be quite different to that in our simulated universe, then this counterargument doesn't work - but in that case, we are no longer talking about "ancestor simulations", we are just talking about brute-force calculations occurring in a possible universe of completely unknown physics. In fact, although Jacob proposes that a universe like ours, run forward, should produce simulations of itself, the argument here leads in the other direction: whatever the base-level physics of reality, it isn't the physics of the standard model and the big-bang cosmology, because that universe isn't big enough to generically produce such simulations.

I will repeat my contention that I don't believe in functionalism/simulationism anyway, but even if one adopts that premise, there needs to be a lot more thought about the sorts of universes one thinks exist in the multiverse, and about the "demographics" of the computations occurring in them. This argument from AIXI would be neat if it worked, because AIXI's optimality suggests it should be showing up everywhere that Vast computation occurs, and its universality suggests that the same pocket universes should be showing up wherever it runs on a Vast scale. But the conditions for Vast enough computation are not automatically realized, not even in a universe like the one that real-world physics postulates; so one would need to ask oneself, what sort of possible worlds do contain sufficiently Vast computation, and how common in the multiverse are they, and how often will their Vast resources actually get used in a brute-force way.

Comment author: jacob_cannell 31 August 2010 09:10:06PM *  1 point [-]

I feel much the same about this post as I did about Roko's Final Post.

So from searching around, it looks like Roko was cosmically censored or something on this site. I don't know if that's supposed to be a warning (if you keep up this train of thought, you too will be censored), or just an observation - but again I wasn't here so I don't know much of anything about Roko or his posts.

In the present day, when robot probes have been to most of the planets and we know them as beautiful but uninhabited landscapes, it may be hard to enter into the mindset of earlier centuries. Earthbound minds, knowing only the one planet, and seeing it to be inhabited, naturally thought of other worlds as inhabited too

  1. We have sent robot probes to only a handful of locations in our solar system, a far cry from "most of the planets" unless you think the rest of the galaxy is a facade (and yeah, I realize you probably meant the solar system, but still). And the jury is still out on Mars - it may have had simple life in the past. We don't have enough observational data yet. Also, there may be life on Europa or Titan. I'm not holding my breath, but it's worth mentioning.

  2. Beware the hindsight bias. When we had limited observational data, it was very reasonable given what we knew then to suppose that other worlds were similar to our own. If you seriously want to argue for the principle of anthropomorphic uniqueness (that earth is a rare, unique gem by every statistical measure) vs the principle of mediocrity - the evidence for the latter is quite strong.

Without more observational data, we simply do not know the prior probability for life. But lacking detailed data, we should assume we are a random sample from some unknown distribution.

We used to think we were at the center of the galaxy, but we are within the middle 95% interval; we used to think our system was unique in having planets, but we now know that our system is typical in this sense (planets are typical); our system is not especially old or young; and so on. By every measure we can currently make based on the data we have now, our system is average.

So you can say that life arises to civilization on average on only one system in a trillion, but at the moment it is extremely difficult to make any serious case for that, and the limited evidence strongly suggests otherwise. Based on our knowledge of our solar system, we see life arising on 1 body out of a few dozen, with the possibility of that being 2 or 3 out of a few dozen (Mars, Europa, and Titan still have some small probability).

But I would say that the shocking knowledge specific to our own time, that supplied the canvas on which a cosmology like this can be painted, is the realization that the matter of the universe could be used technologically, on a cosmic scale.

Actually no, I do not find the cosmic-scale computer scenarios of Stross, Moravec et al. to be realistic. I find them to be about as realistic as our descendants dismantling the universe to build Babbage's Difference Engines or giant steam clocks. But that analogy isn't very telling.

If you look at what physics tells you about the fundamentals of computation, you can derive surprisingly powerful invariant predictions about future evolution with knowledge of just a few simple principles:

  1. maximum data storage capacity is proportional to mass
  2. maximum computational throughput is proportional to energy. With quantum computing, this also scales (for probabilistic algorithms) exponentially with the mass: vaguely O(e*2^m). This is, of course, insane, but apparently a fact of nature (if quantum computing actually works).
  3. maximum efficiency (in multiple senses: algorithmic, intelligence - ability to make effective use of data, transmission overhead) is inversely proportional to size (radius, volume) - this is a direct consequence of the speed of light

So armed with this knowledge, you can determine a priori that future computational hyperintelligences are highly unlikely to ever get to planetary size. They will be small, possibly even collapsing into singularities or exotic matter in their final form. They will necessarily have to get smaller to become more efficient and more intelligent. This isn't something one has a choice about: big is slow and dumb, small is fast and smart.
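
A quick arithmetic check on the third principle (the 2r/c bound is standard physics; the three example sizes are arbitrary choices for illustration):

```python
# Minimum edge-to-edge signal time for a computer of radius r is about 2r/c,
# which caps how quickly its distant parts can interact.
C = 299_792_458.0   # speed of light, m/s

for name, radius_m in [("1 cm chip", 0.01),
                       ("Earth-sized computer", 6.37e6),
                       ("1 AU Dyson-style computer", 1.5e11)]:
    crossing = 2 * radius_m / C
    print(f"{name:26s} ~ {crossing:.2e} s edge to edge")
```

A centimeter-scale machine crosses itself in tens of picoseconds, an Earth-sized one in tens of milliseconds, and an AU-sized one in roughly a thousand seconds, which is the "big is slow" point in numbers.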

Very roughly, I expect that a full-blown runaway Singularity on earth may end up capturing a big chunk of the available solar energy (although perhaps less than the biosphere captures, as fusion or more exotic potentials exist), but would only ever end up needing a small fraction of earth's mass: probably less than humans currently use. And from thermodynamics, we know maximum efficiency is reached operating in the range of earth's ambient temperature, and that would be something of a speed constraint.

It simply is the postulate that simulation does not create things.

Make no mistake, it certainly does, and this is just a matter of fact - unless one wants to argue definitions.

The computer you are using right now was created first in an approximate simulation in a mammalian cortex, which was later promoted to approximate simulations in computer models, until eventually it was simulated in a very detailed near molecular/quantum level simulation, and then emulated (perfect simulation) through numerous physical prototypes.

Literally everything around you was created through simulation in some form. You can't create anything without simulation - thought itself is a form of simulation.

Simulations of consciousness do not create consciousness, simulations of universes do not create subjectively inhabited universes.

If you are hard set against computationalism, it's probably not worth my energy to get into it (I assumed it is a given), but just to show my perspective a little:

Simulations of consciousness will create consciousness when we succeed in creating AGI's that are as intelligent as humans and are objectively indistinguishable. At the moment we don't understand our own brain and mechanisms of intelligence in enough detail to simulate them, and we don't yet have enough computational power to discover those mechanisms through brute evolutionary search. But that will change pretty soon.

Keep in mind that your consciousness - the essence of your intelligence - is itself a simulation, nothing more, nothing less.

Just enumerating all programs of length n requires memory resources exponential in n;

Not at all. It requires space of only N plus whatever each program uses at runtime. You are thinking of time resources - that does scale exponentially with N. But no hyperintelligence will use pure AIXI - they will use universal hierarchical approximations (mammalian cortex already does something like this) which have fantastically better scaling. But hold that thought, because your next line of argument brings us (indirectly) to an important agreement.
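
A minimal sketch of the space/time distinction being made here (the 20-bit cap is just an arbitrary example size): a generator can walk through every bit-program of length n while only ever holding one of them in memory.

```python
# Enumerating all programs of length n: exponential *time* (2^n of them),
# but only O(n) working *space*, since we hold one candidate at a time.
def programs_of_length(n):
    for i in range(2 ** n):
        yield format(i, f"0{n}b")     # the i-th n-bit program as a bit string

count = sum(1 for _ in programs_of_length(20))
print(count)   # 1048576 candidates visited without ever storing more than one
```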

actually executing them in turn, according to the AIXI algorithm, will be even more computationally intensive. The number of operations which can be executed in our future light-cone is actually not that big, when we start looking at such exponentials of exponentials. This sort of universe isn't even big enough to simulate all possible stars.

Perfect optimal deterministic intelligence (absolute deterministic 100% future knowledge of everything) requires a computer with at least as much mass as the system you want to simulate, and it provides an exponential-time brute force algorithm to find the ultimate minimal program to perfectly simulate said system. That program will essentially be the ultimate theory of physics. But you only need to find that program once, and then forever after that you can in theory simulate anything in linear time with a big enough quantum computer.

But you can only approach that ultimate, so if you want absolute 100% accurate knowledge about how a physical system will evolve, you need to make the physical system itself. We already know this and use this throughout engineering.

First we create things in approximate simulations inside our mammalian cortices, and we create and discard a vast number of potential ideas, the best of which we simulate in ever more detail in computers, until eventually we actually physically create them and test those samples.

I think this is a very strong further argument that future hyper-intelligences will not go around turning all of the universe into computronium. Not only would that be unnecessary and inefficient, but it would destroy valuable information: they will want to preserve as much of the interesting stuff in the galaxy as possible.

But they will probably convert little chunks of dead matter here and there into hyperintelligences and use those to run countless approximate simulations (that is to say - hyperthought) of the interesting stuff they find. (such as worlds with life)

Comment author: ciphergoth 31 August 2010 09:11:48PM 4 points [-]

Roko wasn't censored, he deleted everything he'd ever posted. I've independently confirmed this via contact with him outside LW.

Comment author: wedrifid 01 September 2010 10:26:24AM 3 points [-]

Roko was censored and publicly abused in and about one post but he deleted everything else himself. (That would have taken hours of real time unless he created some sort of automaton. I tried just browsing through my posts for the last few months and it took ages!)

Comment author: timtyler 31 August 2010 09:24:35PM *  2 points [-]

Actually lots of people were censored - several of my comments were removed from the public record, for example - and others were totally deleted.

Comment author: ciphergoth 31 August 2010 09:32:24PM 2 points [-]

Hmm, I didn't ask whether he'd ever had a comment deleted; what I'm confident of is that the root-and-branch removal of all his work was his own doing.

Comment author: timtyler 31 August 2010 10:00:33PM 0 points [-]

That's what he says here.

Comment deleted 01 September 2010 06:21:11AM [-]
Comment author: FAWS 01 September 2010 06:58:57AM 1 point [-]

If you absolutely have to summarize the forbidden topic at least rot13 it and preface it with an appropriate warning.

Comment deleted 01 September 2010 07:30:45AM [-]
Comment author: FAWS 01 September 2010 08:11:00AM *  0 points [-]

Memetic hazard warning. Decode first part first.

ROT13: Ebxb ceboyrz. Qb abg ernq hayrff lbh nyernql xabj nobhg gur Ebxb ceboyrz. Vs lbh qb xabj nqqvgvbany qnatre vf rkgerzryl hayvxryl.

ROT13: Nal cbffvoyr nethzrag sbe jul gur qnatre vf erny vf nyfb fbzrguvat gung znxrf gur qnatre zber erny. Zbfg crbcyr nera'g va qnatre va gur svefg cynpr orpnhfr gur qnatre qbrfa'g znxr frafr gb gurz. Fhccylvat gurz jvgu nethzragf jbhyq or pbhagrecebqhpgvir orpnhfr rvgure gurl fgvyy qba'g haqrefgnaq naq lbh evfx gurz fcernqvat gur nethzragf (va gung ertneq lbh unir cebira lbhefrys gur irel ynfg crefba nalbar fubhyq fhccyl jvgu rivqrapr!) be gurl ner zbirq vagb n pynff bs crbcyr zber va qnatre.

Zbfg crbcyr gb jubz vg znxrf frafr fgvyy nera'g va qnatre orpnhfr gurl nera'g [prafberq1], gubhtu gurl zvtug fhssre cflpubybtvpny qnzntr. Naq hasbeghangryl [prafberq1] vf jung vg'f tbvat gb gnxr gb pbaivapr crbcyr yvxr lbh. [Boivbhf cebgrpgvba] cebonoyl jbexf, ohg vf vssl vs lbh bayl guvax bs vg va erfcbafr gb gur fpranevb. Gurer ner zber qhovbhf bgure cebgrpgvbaf, ohg vs [boivbhf] snvyf gurl cebonoyl qb nf jryy. Vg'f cerggl zhpu vzcbffvoyr gb fnl jurgure nalbar [prafberq1] jub snvyf [boivbhf] npghnyyl vf va erny qnatre, ohg cflpubybtvpny qnzntr frrzf cerggl zhpu hanibvqnoyr ng gung cbvag naq gurer vf ab tbbq ernfba gb gnxr gung evfx, gurer vf ab cbffvoyr tbbq bhgpbzr erfhygvat sebz vg!! Naq ab, nethzragf qvffbyivat gur ceboyrz nera'g yvxryl, zbfg bs gurz ner tbvat gb rvgure zvff gur cbvag be or jrnxre guna [boivbhf].

Comment author: Mitchell_Porter 01 September 2010 08:26:24AM 1 point [-]

Let's just remind ourselves of a few real things first. About 100,000 people were crushed to death or buried alive in Haiti a few months ago. The same thing has happened to millions of others throughout history. It will happen again; it could happen to me or to you if we're in the wrong place at the wrong time. This is something that is terrible and true. For a person with the bad luck to be in a specifically vulnerable state of mind, thinking about this could become an obsession that destroys their life. Are we going to avoid all discussion of the vulnerabilities of the human body for this reason - in order to avoid the psychological effects on a few individuals?

On this website we regularly talk about the possibility of the destruction of the world and the extinction of the human species. We even talk about scenarios straight out of the Hell of traditional religious superstition - being tortured forever by a superior being. I don't see any moves to censor direct discussion of such possibilities. But it is being proposed that we censor discussion of various arcane and outlandish scenarios which are supposed to make someone obsessed with those possibilities in an unhealthy way. This is not a consistent attitude.

Comment author: jhuffman 09 December 2011 10:12:24PM 1 point [-]

I'm suspicious that this entire [Forbidden Topic] is a (fairly deep) marketing ploy.

Comment author: FAWS 01 September 2010 08:57:42AM *  0 points [-]

Are we going to avoid all discussion of the vulnerabilities of the human body for this reason - in order to avoid the psychological effects on a few individuals?

Imagine this was an OCD self-help board and there was a special spot on the body that, if fussed with at length, could cause excruciating pain for some people, and some OCs just couldn't resist fussing around with that spot after learning where it is.

Some members of the board dispute the existence of the spot and openly mention some very general information about it that has previously been leaked, even when asked not to. They aren't going to be convinced by any arguments that don't include enough information to find the spot (which many members will then not be able to resist pursuing), and might not be convinced even then if they aren't among the people vulnerable, so they might spread knowledge of the spot.

The ones who know about the location think science currently has nothing more to learn from it and include at least one relevant expert. The chance of the spot causing any danger without knowledge about it is effectively zero.

Non-OCDs are unlikely to be in danger, but the knowledge would lower the status of OCDs severely.

Comment author: Mitchell_Porter 01 September 2010 09:03:35AM 0 points [-]

If anyone actually thinks this is a problem for them, write to me and I will explain how [redacted] can make it go away.

Comment author: nick012000 05 September 2010 06:29:59AM 0 points [-]

Are you seriously suggesting he created some sort of basilisk hack or something? That seems rather dubious to me; what exactly was it that he came up with?

By the way, I doubt it'll seriously alter my belief structures; I already believe an eternity to torture in Hell is better than ceasing to exist (though of course an eternity of happiness is much better), so I could totally see a Friendly AI coming to the same conclusion.

Comment author: Mitchell_Porter 05 September 2010 08:25:42AM 6 points [-]

I already believe an eternity [of] torture in Hell is better than ceasing to exist

The idea that literally anything is better than dying is a piece of psychological falseness that I've run across before. Strange7 here says it, the blogger Hopefully Anonymous says it, no doubt thousands of people throughout history also said it. It's the will to live triumphing over the will to truth.

Any professional torturer could make you choose death fairly quickly. I'm thinking of how North Korea tortured the crew of the American spy ship USS Pueblo. As I recall, one elementary torture involved being struck across the face with a metal bar or maybe a heavy block of wood, hard enough to knock out teeth, and I remember reading of how one crewman shook with fear as their captors prepared to strike him again. I may have the details wrong but that's irrelevant. If you are forced to experience something really unbearable often enough, eventually you will want to die, just to make it stop.

Comment author: timtyler 05 September 2010 09:20:48AM *  -1 points [-]

The ones who know about the location think science currently has nothing more to learn from it and include at least one relevant expert.

Many others disagree with them.

The chance of the spot causing any danger without knowledge about it is effectively zero.

That seems inaccurate in this case - it seems to me a perfectly reasonable thing for people to discuss - in the course of trying to find ways of mitigating the problem.

Comment author: Desrtopa 05 February 2011 05:25:38AM -1 points [-]

I don't know the content of the basilisk (I've heard that it's not a useful thing to know, in addition to being potentially stress inducing, so I do not want to want to know it,) so I'm not in much of a position to critique its similarity to knowledge of events like the Haiti earthquake. But given that we don't have the capacity to shelter anyone from the knowledge of tragedy and human fragility, or eternal torment such as that proposed by religious traditions, failing to censor such concepts is not a sign of inconsistency.

Comment author: Mass_Driver 30 August 2010 11:18:35PM 5 points [-]

Sorry, what is AIXI? It was not clear to me from the linked abstract.

Comment author: jacob_cannell 30 August 2010 11:47:30PM 4 points [-]

Sorry, I should have linked to a quick overview of AIXI. It's basically an algorithm for ultimate universal intelligence, and a theorem showing the algorithm is optimal. It shows what a universal intelligence could or should be like at the limits - given vast amounts of computation.

Comment author: Mass_Driver 31 August 2010 01:07:39AM 2 points [-]

Interesting.

(1) What do you mean by "intelligence?"

(2) Why would "actually running such an algorithm on an infinite Turing Machine...have the interesting side effect of actually creating all such universes?"

Comment author: jacob_cannell 31 August 2010 01:34:46AM 5 points [-]
  1. The AIXI algorithm amounts to a formal mathematical definition of intelligence, but in plain English we can just say intelligence is a capacity for modelling and predicting one's environment.

  2. This relates to the computability of physics and the materialist computationalist assumption in the SA itself. If we figured out the exact math underlying the universe (and our current theories are pretty close) and ran that program on an infinite or near-infinite computer, that system would be indistinguishable from the universe itself from the perspective of observers inside the simulation. Thus it would recreate the universe (albeit embedded in a parent universe). If you were to look inside that simulated universe, it would have entire galaxies, planets, humans or aliens pondering their consciousness, writing on websites, and so on.

Comment author: Perplexed 02 September 2010 04:06:32AM *  1 point [-]

... that system would be indistinguishable from the universe itself from the perspective of observers inside the simulation. Thus it would recreate the universe (albeit embedded in a parent universe).

I worry that there may be an instance of the Mind Projection Fallacy involved here. You are assuming there is a one-place predicate E(X) <=> {X has real existence}. But maybe the right way of thinking about it is as a two-place predicate J(A,X)<=> {Agent A judges that X has real existence}.

Example: In this formulation, Descartes's "cogito ergo sum" might best be expressed as leading to the conclusion J(me,me). Perhaps I can also become convinced of J(you,you) and perhaps even J(sim-being,sim_being). But getting from there to E(me) seems to be Mind Projection; getting to J(me, you) seems difficult; and getting to J(me, sim-being) seems very difficult. Especially if I can't also get to J(sim-being, me).

Comment author: Mass_Driver 02 September 2010 03:30:40AM 0 points [-]

Very coherent; thank you.

Do your claims really depend on the optimality of AIXI? It seems to me that, using your logic, if I ran the exact math underlying the universe on, say, Wolfram Alpha, or a TI-86 graphing calculator, the simulated inhabitants would still have realistic experiences; they would just have them more slowly relative to our current frame of reality's time-stream.

Comment author: jacob_cannell 02 September 2010 07:11:54AM 0 points [-]

No, computationalism is separate, and was more or less assumed. I discussed AIXI as interesting just because it shows that universal intelligence is in fact simulation, and so future hyperintelligences will create beings like us just by thinking/simulating our time period (in sufficient detail). And moreover, they won't have much of a choice (if they really want to deeply understand it).

As to your second thought, Turing machines are Turing machines, so it doesn't matter what form it takes as long as it has sufficient space and time. Of course, that rules out your examples though: you'll need something just a tad bigger than a TI-86 or Wolfram Alpha (on today's machines) to simulate anything on the scale of a planet, let alone a single human brain.

Comment author: Mass_Driver 02 September 2010 07:43:02AM 0 points [-]

I think I'm finally starting to understand your article. I will probably have to go back and vote it up; it's a worthwhile point.

computationalism is separate, and was more or less assumed

Do you have the link for that? I think there's an article somewhere, but I can't remember what it's called.

If there isn't one, why do you assume computationalism? I find it stunningly implausible that the mere specification of formal relationships among abstract concepts is sufficient to reify those concepts, i.e., to cause them to actually exist. For me, the very definitions of "concept," "relationship," and "exist" are almost enough to justify an assumption of anti-computationalism. A "concept" is something that might or might not exist; it is merely potential existence. A "relationship" is a set of concepts. I either don't know of or don't understand any of the insights that would suggest that everything that potentially exists and is computed therefore actually exists -- computing, to me, just sounds like a way of manipulating concepts, or, at best, of moving a few bits of matter around, perhaps LED switches or a turing tape, in accordance with a set of concepts. How could moving LED switches around make things real?

By "real," I mean made of "stuff." I get through a typical day and navigate my ordinary world by assuming that there is a distinction between "stuff" (matter-energy) and "ideas" (ways of arranging the matter-energy in space-time). Obviously thinking about an idea will tend to form some analog of the idea in the stuff that makes up my brain, and, if my brain were so thorough and precise as to resemble AIXI, the analog might be a very tight analog indeed, but it's still an analog, right? I mean, I don't take you to mean that an AIXI 'brain' would literally form a class-M planet inside its CPU so as to better understand the sentient beings on that planet. The AIXI brain would just be thinking about the ideas that govern the behavior of the sentient beings...and thinking about ideas, even very precisely, doesn't make the ideas real.

I might be missing something here; I'd appreciate it if you could point out the flaw(s) in my logic.

Comment author: khafra 02 September 2010 03:38:02PM 3 points [-]

Substrate independence, functionalism, even the generalized anti-zombie principle--all of these have been covered in some depth on Lesswrong before. Much of it is in the sequences, like nonperson predicates and some of the links from it.

If you don't believe an emulated mind can be conscious, do you believe that your mind is noncomputable or that meat has special computational properties?

Comment author: Mass_Driver 02 September 2010 04:54:27PM 1 point [-]

A highly detailed model of me, may not be me. But it will, at least, be a model which (for purposes of prediction via similarity) thinks itself to be Eliezer Yudkowsky. It will be a model that, when cranked to find my behavior if asked "Who are you and are you conscious?", says "I am Eliezer Yudkowsky and I seem have subjective experiences" for much the same reason I do.

I buy that. That sort of model could probably exist.

Your "zombie", in the philosophical usage of the term, is putatively a being that is exactly like you in every respect - identical behavior, identical speech, identical brain; every atom and quark in exactly the same position, moving according to the same causal laws of motion - except that your zombie is not conscious.

That sort of zombie can't possibly exist.

If you don't believe an emulated mind can be conscious, do you believe that your mind is noncomputable or that meat has special computational properties?

It's not that I don't believe an emulated mind can be conscious. Perhaps it could. What boggles my mind is the assertion that emulation is sufficient to make a mind conscious -- that there exists a particular bunch of equations and algorithms such that when they are written on a piece of paper they are almost certainly non-conscious, but when they are run through a Turing machine they are almost certainly conscious.

I have no opinion about whether my mind is computable. It seems likely that a reasonably good model of my mind might be computable.

I'm not sure what to make of the proposition that meat has special computational properties. I wouldn't put it that way, especially since I don't like the connotation that brains are fundamentally physically different from rocks. My point isn't that brains are special; my point is that matter-energy is special. Existence, in the physical sense, doesn't seem to me to be a quality that can be specified in an equation or an algorithm. I can solve Maxwell's equations all day long and never create a photon from scratch.

That doesn't necessarily mean that photons have special computational properties; it just means that even fully computable objects don't come into being by virtue of their having been computed. I guess I don't believe in substrate independence?

Comment author: jacob_cannell 02 September 2010 08:01:22PM *  4 points [-]

What boggles my mind is the assertion that emulation is sufficient to make a mind conscious -- that there exists a particular bunch of equations and algorithms such that when they are written on a piece of paper they are almost certainly non-conscious, but when they are run through a Turing machine they are almost certainly conscious

There are several reasons this is mind boggling, but they stem from a false intuition pump - consciousness like your own requires vastly more information than could be written down on a piece of paper.

Here is a much better way of thinking about it. From physics and neuroscience etc we know that the pattern identity of human-level consciousness (as consciousness isn't a simple boolean quality) is essentially encoded in the synaptic junctions, and corresponds to about 10^15 bits (roughly). Those bits are you.

Now if we paused your brain activity with chemicals, or we froze it, you would cease to be conscious, but would still exist because there is the potential to regain conscious activity in the future. So consciousness as a state is an active computational process that requires energy.

So at the end of the day, consciousness is a particular computational process (energy) on a particular arrangement of bits (matter).

There are many other equivalent ways of representing that particular arrangement, and the generality of turing machines is such that a sufficiently powerful computer is an arrangement of mass(bits) that with sufficient energy(computation) can represent any other system that can possibly exist. Anything. Including human consciousness.

Comment author: khafra 02 September 2010 05:45:26PM 2 points [-]

I think you've successfully analyzed your beliefs, as far as you've gone--it does seem that "substrate independence" is something you don't believe in. However, "substrate independence" is not an indivisible unit; it's composed of parts which you do seem to believe in.

For instance, you seem to accept that the highly detailed model of EY, whether that just means functionally emulating his neurons and glial cells, or actually computing his hamiltonian, will claim to be him, for much the same reason he does. If we then simulate, at whatever level appropriate to our simulated EY, a highly detailed model of his house and neighborhood that evolves according to the same rules that the real life versions do, he will think the same things regarding these things that the real life EY does.

If we go on to simulate the rest of the universe, including all the other people in it, with the same degree of fidelity, no observation or piece of evidence other than the anthropic could tell them they're in a simulation.

Bear in mind that nothing magic happens when these equations go from paper to computer: If you had the time and low mathematical error rate and notebook space to sit down and work everything out on paper, the consequences would be the same. It's a slippery concept to work one's intuition around, but xkcd #505 gives as good an intuition pump as I've seen.

Comment author: Sniffnoy 02 September 2010 10:14:01PM 1 point [-]

By "real," I mean made of "stuff." I get through a typical day and navigate my ordinary world by assuming that there is a distinction between "stuff" (matter-energy) and "ideas" (ways of arranging the matter-energy in space-time).

I don't think you can make this distinction meaningful. After all, what's an electron? Just a pattern in the electron field...

Comment author: jacob_cannell 02 September 2010 07:53:44PM *  0 points [-]

If there isn't one, why do you assume computationalism? I find it stunningly implausible that the mere specification of formal relationships among abstract concepts is sufficient to reify those concepts, i.e., to cause them to actually exist.

This isn't actually what I meant by computationalism (although I was using the word from memory, and my concept may differ from the philosopher's definition).

The idea that mere specification of formal relationships, that mere math in theory, can cause worlds to exist is a separate position than basic computationalism, and I don't buy it.

A formal mathematical system needs to actually be computed to be real. That is what causes time to flow in the child virtual universe. And in our physics, that requires energy in the parent universe. It also requires mass to represent bits. So computation can't just arise out of nothing - it requires computational elements in a parent universe organized in the right way.

khafra's replies are delving deeper into the philosophical background, so I don't need to add much more

Comment author: PhilGoetz 31 August 2010 04:42:23PM 3 points [-]

Please make the font size consistent.

Comment author: jacob_cannell 31 August 2010 09:39:27PM 1 point [-]

Done. Sorry about that.

Comment author: lmnop 31 August 2010 03:56:56AM 5 points [-]

The changes in font are distracting.

Comment author: Spurlock 31 August 2010 06:20:21PM 0 points [-]

Also the formatting somehow lays complete waste to the page layout when viewed in Opera. Makes me think there might be a bug in whatever code should be escaping HTML entities.

Comment author: PhilGoetz 31 August 2010 04:40:57PM 2 points [-]

The philosophical implication is that actually running such an algorithm on an infinite Turing Machine would have the interesting side effect of actually creating all such universes.

That's an interesting point! At least, it's more interesting than Tipler's way of arriving at that conclusion.

If you accept that the reasonable assumption of progress holds, then AIXI implies that we almost certainly live in a simulation now.

See my response to the claim that the anthropic argument suggests it is highly improbable that you would find yourself to be a human. I don't know if this holds water, but basically: If the amount of information processing done by entities operating in the root world (excluding those running simulations) is much greater than the amount of information processing dedicated to simulation, then one "unit of consciousness" is more likely to find itself in the root world.

The AIXI model proposes that simulation is the core of intelligence (with human-like thoughts being simply one approximate algorithm), and as you approach the universal limits, the simulations which universal intelligences necessarily employ will approach the fidelity of real universes - complete with all the entailed trappings such as conscious simulated entities.

No. However much computational power you have, you have a tradeoff between resolution and the complexity of the simulated entity. What you're saying is like a bacterium arguing that the universe is full of finely-detailed simulations of bacteria.

Also, AIXI is a tool for exploring the structure of the universe of algorithms. No one would ever actually run AIXI, unless they had infinite computational power per second. They would always be better off using more-efficient algorithms to get more done.

Comment author: jacob_cannell 31 August 2010 09:59:29PM *  0 points [-]

That's an interesting point! At least, it's more interesting than Tipler's way of arriving at that conclusion.

I found his earlier work with Barrow, The Anthropic Cosmological Principle, to be filled with interesting useful knowledge, if not overly detailed - almost like a history of science in one book. But then with his next two books you can just follow the convulsions as he runs for the diving board and goes off the deep end.

His take on extending Chardin's Omega Point idea with computationalism isn't all that bad itself, but he really stretches logic unnecessarily to make it all fit neatly into some prepacked orthodox Christianity memetic happy meal. That being said, there is an interesting connection between the two, but I don't buy Tipler's take on it.

See my response to the claim that the anthropic argument suggests it is highly improbable that you would find yourself to be a human. I don't know if this holds water, but basically: If the amount of information processing done by entities operating in the root world (excluding those running simulations) is much greater than the amount of information processing dedicated to simulation, then one "unit of consciousness" is more likely to find itself in the root world.

Read the response involving the ants and stuff. I don't give much weight to that train of thought. I agree that consciousness is a fluid, fuzzy concept, but I also know, based on my own capacities and understanding of physics, that an ant colony probably could not encode me (I'm not entirely sure, but I'm highly skeptical). Also, the end point of your argument leads to the realization that ants have less intelligence/mass than humans - and it's actually much less than your basic analysis suggests, because you have to factor in connectivity measures.

No. However much computational power you have, you have a tradeoff between resolution and the complexity of the simulated entity. What you're saying is like a bacterium arguing that the universe is full of finely-detailed simulations of bacteria.

There's always a tradeoff, this is true, but one should avoid grossly overestimating the computational costs of simulations of various fidelity. For instance, we know that nearly perfect deterministic simulation of a current computer requires vastly less computation than molecular-level simulation - given full knowledge of its exact organization. Once we understand the human mind's algorithms, we should be able to simulate human minds to fairly high accuracy using computers of only slightly more complexity (than the brain itself).

Take that principle and combine it with a tentative estimate from simulation theory and computer graphics: that the ultimate observer-relative simulation algorithm requires time and space proportional only to the intelligence and sensor capacity of the observer's mind.

If you then work out the math (which would take a longer, article-length discussion), you could simulate an entire earth with billions of humans using a laptop-sized ultimate computer. And you could simulate millennia in seconds.
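
To make that concrete, here is a rough back-of-envelope sketch of the kind of arithmetic I mean. Every constant is an assumption of mine, not an established number: roughly 10^16 ops/sec to emulate one human mind, ~10^10 humans, a 100x overhead for bodies and environment, and ~10^50 ops/sec for a laptop-sized "ultimate" computer in the spirit of Seth Lloyd's physical-limit estimate.

    # Rough back-of-envelope sketch; every constant here is an assumption.
    OPS_PER_MIND = 1e16   # assumed ops/sec to emulate one human mind
    HUMANS       = 1e10   # ~10 billion simulated people
    OVERHEAD     = 100    # assumed factor for bodies, environment, physics glue
    LAPTOP_OPS   = 1e50   # assumed ops/sec for a ~1 kg "ultimate" computer

    ops_per_sim_second = OPS_PER_MIND * HUMANS * OVERHEAD
    speedup = LAPTOP_OPS / ops_per_sim_second      # simulated seconds per wall-clock second

    millennium = 1000 * 365.25 * 24 * 3600         # seconds in a millennium
    print(f"speedup: {speedup:.1e}")               # ~1e22
    print(f"wall-clock time for one millennium: {millennium / speedup:.1e} s")

Even if you turn the overhead knob up by several orders of magnitude, the qualitative conclusion survives, which is the only point of the exercise.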

A key point, though, is that beyond general, vague statistical approximations you eventually reach a point where you need to simulate real intelligences. And real intelligences have complexity similar to computers: approximate simulation stays vague until you reach this critical point of scale separation, at which point you can simulate them near-perfectly.

But at this point of scale separation, the simulated computer or mind becomes isomorphic to, and indistinguishable from, the original system, and this is necessarily always true. Simulations of complex systems suddenly become vastly more effective per unit of computational cost when you hit the scale-separation level.

This falls out of computation theory directly, but just think of what the ultimate simulation of your desktop is - it's an equivalent program that has all the data and software; the molecules are completely irrelevant. But that ultimate simulation is an emulation - it is an equivalent copy of the software itself.

Also, AIXI is a tool for exploring the structure of the universe of algorithms. No one would ever actually run AIXI, unless they had infinite computational power per second. They would always be better off using more-efficient algorithms to get more done.

Right - AIXI is a limit theorem, but it's a dumb algorithm. We already do far better (in terms of efficiency) with mammalian cortex and current computers, and we have just started down a long, long exponential road.

Comment author: PhilGoetz 01 September 2010 04:49:13PM 1 point [-]

If you then work out the math (which would take a longer, article-length discussion), you could simulate an entire earth with billions of humans using a laptop-sized ultimate computer. And you could simulate millennia in seconds.

I don't think you got what I was trying to say about bacteria. If you have enough computing power to simulate more than a universe full of humans, you would likely instead use it to simulate a smaller number of much-more-complex beings. You can always use more computing power to up the complexity of the thing you're studying; hence, you never use AIXI. Your original argument implied that, once you've gotten to the human level, there's nowhere higher to go from there; so you simulate vast quantities of them in exact detail.

Comment author: jacob_cannell 01 September 2010 07:08:53PM *  0 points [-]

I never intended to imply that future hyper-intelligences will spend all of their computational power to simulate humans, not even close. But nor will they spend it all on simulating bacteria or other hyper-intelligences.

In general, a universal hyperintelligence will simulate slices of the entire universe, from the big bang to the end of time. It will certainly not simulate all regions of space-time at equal fidelity, of course. I expect observer-relative simulation, with fidelity falling off nonlinearly with distance, corresponding to the locality of physics.

The other principle I expect them to use is more difficult to precisely quantify, but it amounts to what I would call future-historical-priority. This is the notion that not all physical events in the same region of space-time have equal importance. In fact, the importance is massively non-uniform, and this notion is related to complexity theory itself. Simulating complex things at high accuracy (such as humans, computers, etc.) is vastly more important for future accuracy than simulating the interior of the earth, the sun, bacteria, and so on. The complexity and cost of accurate simulation will increase with time and technological progress.

So in general, I expect future hyper-intelligences to use what we could call Universal Approximate Simulation. More speculatively, I also expect that the theory of UAS relates directly to practical AGI. Tentatively, you can imagine UAS as a spectrum of algorithms and sub-algorithms. On one side of this spectrum are arbitrarily accurate physics-inspired approaches, such as those we use in big physics simulations and computer graphics; on the other side are input-data-driven statistical approximations using learning techniques (similar to what mammalian cortex uses). I expect future hyper-intelligences will have a better theoretical understanding of this spectrum, and of where and when different simulation approaches are more efficient.

Tentatively, I expect that we will eventually find that there is no single most-efficient algorithm for UAS, and that it is instead a vast space of algorithms, with different subpoints in that space having varying utility depending on the desired simulation fidelity and scale. I draw this tentative conclusion from what we can currently learn from graphics research and simulation theory, and from the observation that statistical learning approaches are increasingly taking up residence in the space of useful algorithms for general world simulation.

At this point people usually start talking about chaos and turbulence. Simulating chaos to high accuracy is always a waste of computation. In theory a butterfly in Moscow could alter the weather and change history, but the historical utility divided by the computational cost makes it astronomically less efficient than simulating, say, the interiors of human skulls to high fidelity.

This also implies that one run through your simulated history just gives you one sample of the space, but with Monte Carlo simulation and a number of runs, the random events tend to balance out and you get a good idea of the future evolution of the system.
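
A toy illustration of that Monte Carlo point (my own throwaway example, nothing rigorous): the influence of individual random events on the average shrinks roughly as 1/sqrt(N) as you add runs.

    # Toy example: averaging many noisy simulated histories washes out random events.
    import random

    def one_history(steps=1000):
        # a small deterministic trend buried under large random 'weather' shocks
        return sum(0.01 + random.gauss(0, 1.0) for _ in range(steps))

    def monte_carlo_estimate(runs):
        return sum(one_history() for _ in range(runs)) / runs

    for runs in (1, 10, 100, 1000):
        print(runs, round(monte_carlo_estimate(runs), 2))  # converges toward the true trend, ~10.0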

Random events such as the weather have had great historical impact, but most of the big ones are in distant history, and as we enter the modern era their effect is increasingly damped.

The big 'weather' random events, if you were going to rank them, would start with, say, KT-level impacts (which are among the most singularly important events in history), then progress down to glaciation events, volcanoes, etc. - earthquakes and hurricanes being lower on the list.

Comment author: Vladimir_Nesov 30 August 2010 10:28:15PM 3 points [-]

Now, the question remains: Does AIXI imply Ray Kurzweil?

Comment author: jacob_cannell 31 August 2010 12:06:44AM 0 points [-]

He certainly does have a tendency to talk like a robot.

Comment author: JamesAndrix 31 August 2010 04:38:19AM 2 points [-]
Comment author: jacob_cannell 31 August 2010 06:30:11AM *  2 points [-]

That post was rational until about half-way through - yes, any simulation detailed enough to actually predict what a mind would do to high accuracy necessarily becomes equivalent to that mind itself. This is nothing new; it's just a direct result of computationalism.

The only way to fully predict what any physical system will do is to fully simulate it, and fully simulating a computational system is equivalent to making a copy of it (in an embedded pocket universe). The only way to fully know what a program will do in general given some configuration of its memory is to simulate the whole thing - which is equivalent to making a copy of it.

So when I got to the nonperson predicate idea: "We need a nonperson predicate - a predicate that returns 1 for anything that is a person, and can return 0 or 1 for anything that is not a person." ... I had to stop.

Even if a future intelligence had such a predicate (and it's kinda silly to think something as complex as 'personhood' can be simplified down to a boolean variable), it's a supreme folly of anthropomorphic reasoning to assume future potential hyperintelligences will cripple their intelligence just because we humans today may have ethical issues with being instantiated inside the mind of a more powerful intelligence.

Comment author: JamesAndrix 31 August 2010 07:00:42AM 6 points [-]

You misunderstand. I wish I could raise a flag that would indicate, in some non-accusatory, non-judgemental way, that I'm pretty sure you are wrong about something very, very important. (Perhaps the key is just to emphasize that this topic is vastly more important than my ability to be sure about anything.)

The reason we want to create a nonperson predicate is that we want to create an initial AI which will cripple itself, at least until it can determine for sure that uncrippling itself is the right thing to do. Otherwise we risk creating a billion hellworlds on our first try at fixing things.

This concept doesn't say much about whether we are currently a simulation or what kind, but it does say a little. In that if our world does it right, and it is in fact wrong to simulate a world like this, then we are probably not a simulation by a future with a past just like our present. (Because if we did it right, they probably did it right, and never simulated us.)

Yes, I currently think nonperson predicates should be non-binary and probabilistic, and integrate quality of life estimates. A 35% chance that a few simulations will be morally relevant on par with a human and will have pleasant experiences if they are - is totally acceptable if that's the best way for the AI to figure out how to fix the outside world.

But the point is you have to know you're doing that beforehand, and it has to be worth it. You do not want to create a trillion half broken souls accidentally.

Comment author: jacob_cannell 31 August 2010 07:18:47AM 3 points [-]

Ok, so I was thinking more along the lines of how this all applies to the simulation argument.

As for the nonperson predicate as an actual moral imperative for us in the near future ..

Well overall, I have a somewhat different perspective:

  1. To some (admittedly weak) degree, we already violate the nonperson predicate today. Yes, our human minds do. But that's a far more complex topic.
  2. If you do the actual math, "a trillion half broken souls" is pretty far into the speculative future (although it is an eventual concern). There are other ethical issues that take priority because they will come up so much sooner.
  3. It's not immediately clear at all that this is 'wrong', and this is tied to 1.

Look at this another way. The whole point of simulation is accuracy. Let's say some future AI wants to understand humanity and all of earth, so it recreates the whole thing in a very detailed, Matrix-level sim. If it keeps the sim accurate, that universe is more or less similar to one branch of the multiverse that would occur anyway.

Unless the AI simulates a worldline where it has taken some major action. Even then, it may not be unethical unless it eventually terminates the whole worldline.

So I don't mean to brush the ethical issues under the rug completely, but they clearly are complex.

Another important point: since accurate simulation is necessary for hyperintelligence, this sets up a conflict where ethics which say "don't simulate intelligent beings" cripple hyperintelligence.

Evolution will strive to eliminate such ethics eventually, no matter what we currently think. ATM, I tend to favor ethics that are compatible with or derived from evolutionary principles.

Comment author: JamesAndrix 31 August 2010 04:08:25PM 7 points [-]

Evolution can only work if there is variation and selection amongst competition. If a single AI undergoes an intelligence explosion, it would have no competition (barring aliens, for now), would not die, and would not modify its own value system, except in ways in accordance with its value system. What it wants will be locked in.

As we are entities currently near the statuses of "immune from selection" and "able to adjust our values according to our values" we also ought to further lock in our current values and our process by which they could change. Probably by creating a superhuman AI that we are certain will try to do that. (Very roughly speaking)

We should certainly NOT leave the future up to evolution. Firstly because 'selection' of >=humans is a bad thing, but chiefly because evolution will almost certainly leave something that wants things we do not want in charge.

We are under no rationalist obligation to value survivability for survivability's sake. We should value the survivability of things which carry forward other desirable traits.

Comment author: jacob_cannell 31 August 2010 11:46:24PM *  -1 points [-]

Evolution can only work if there is variation and selection amongst competition

Yes, variation and selection are the fundaments of systemic evolution. Without variation and selection, you have stasis. Variation and selection are constantly at work even within minds themselves, as long as we are learning. Systemic evolution is happening everywhere, at all scales, at all times, to varying degrees.

If a single AI undergoes an intelligence explosion, it would have no competition (barring aliens, for now), would not die, and would not modify its own value system, except in ways in accordance with its value system. What it wants will be locked in.

I find almost every aspect of this unlikely:

  1. a single AI undergoing an intelligence explosion is unrealistic (physics says otherwise)
  2. there is always competition eventually (planetary, galactic, intergalactic?)
  3. I also don't even give much weight to 'locked in values'

As we are entities currently near the statuses of "immune from selection"

Nothing is immune to selection. Our thoughts themselves are currently evolving, and without such variation and selection, science itself wouldn't work.

We should certainly NOT leave the future up to evolution.

Perhaps this is a difference of definition, but to me that sounds like saying "we should certainly NOT leave the future up to the future time evolution of the universe"

Not to say we shouldn't control the future, but rather to say that even in doing so, we are still acting as agents of evolution.

We are under no rationalist obligation to value survivability for survivability's sake. We should value the survivability of things which carry forward other desirable traits.

Of course. But likewise, we couldn't easily (nor would we want to) lock our current knowledge (culture, ethics, science, etc.) into some sort of stasis.

Comment author: JamesAndrix 01 September 2010 01:04:32AM 1 point [-]

What does physics say about a single entity doing an intelligence explosion?

In the event of alien competition, our AI should weigh our options according to our value system.

Under what conditions will a superintelligence alter its value system except in accordance with its value system? Where does that motivation come from? If a superintelligence prefers its values to be something else, why would it not change its preferences?

If it does, and the new preferences cause it to again want to modify its preferences, and so on, will some sets of initial preferences yield stable preferences? Or must all agents have preferences that would cause them to modify their preferences if possible?

Science lets us modify our beliefs in an organized and more reliable way. It could in principle be the case that a scientific investigation leads you to the conclusion that we should use other different rules, because they would be even better than what we now call science. But we would use science to get there, or whatever our CURRENT learning method is. Likewise we should change our values according to what we currently value and know.

We should design AI such that if it determines that we would consider 'personal uniqueness' extremely important if we were superintelligent, then it will strongly avoid any highly accurate simulations, even if that costs some accuracy. (Unless outweighed by the importance of the problem it's trying to solve.)

If we DON'T design AI this way, then it will do many things we wouldn't want, well beyond our current beliefs about simulations.

Comment author: jacob_cannell 01 September 2010 02:51:06AM 1 point [-]

What does physics say about a single entity doing an intelligence explosion?

A great deal. I discussed this in another thread, but one of the constraints of physics tells us that the maximum computational efficiency of a system, and thus its intelligence, is inversely proportional to its size (radius/volume). So it's extraordinarily unlikely - near zero probability, I'd say - that you'll have some big global distributed brain with a single thread of consciousness; the speed of light just kills that. The 'entity' would need to be a community (which can certainly still consist of coordinated entities, but that's fundamentally different from a single unified thread of thought).
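
Here's the back-of-envelope version of the speed-of-light point (the sizes are my assumed illustrative figures):

    # Light-crossing time puts a ceiling on globally synchronized "thought steps".
    C = 3.0e8  # speed of light, m/s

    for name, size_m in [("human brain (~0.1 m)", 0.1),
                         ("workstation (~0.3 m)", 0.3),
                         ("planet-spanning brain (~1.3e7 m)", 1.3e7)]:
        crossing = size_m / C   # seconds for one signal to cross the system
        print(f"{name}: at most ~{1.0 / crossing:.1e} coherent global updates/sec")

A planet-spanning 'single thread' gets a few dozen globally coherent updates per second at best, which is why I expect communities of minds rather than one giant unified mind.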

Moreover, I believe the likely scenario is evolutionary:

The evolution of AGIs will follow a progression from simple AGI minds (like those we have now in some robots) up through increasingly complex variants and finally to human-equivalent and human-surpassing ones. But throughout that period there will be many individual AGIs, created by different teams, companies, and even nations, thinking in different languages, created for various purposes - nothing like a single global AI mind. And these AGIs will be competing both with each other and with humans - economically.

I agree with most of the rest of your train of thought - we modify our beliefs and values according to our current beliefs and values. But as I said earlier, it's not static. It's also not even predictable. It's not even possible, in principle, to fully predict your own future state. This, to me, is perhaps the final nail in the coffin for any 'perfect' self-modifying FAI theory.

Moreover, I also find it highly unlikely that we will ever be able to create a human level AGI with any degree of pre-determined reliability about its goal system whatsoever.

I find it more likely that the AGIs we end up creating will have to learn ethics, morality, etc. - their goal systems cannot be hard-coded, and whether they turn out friendly or not is entirely dependent on what they are taught and how they develop.

In other words, friendliness is not an inherent property of AGI designs - it's not something you can design into the algorithms themselves. The algorithms for an AGI give you something like an infant brain - it's just a canvas; it's not even a mind yet.

Comment author: JamesAndrix 02 September 2010 01:18:17AM 2 points [-]

I find it more likely that the AGIs we end up creating will have to learn ethics, morality, etc. - their goal systems cannot be hard-coded, and whether they turn out friendly or not is entirely dependent on what they are taught and how they develop.

On what basis will they learn? You're still starting out with an initial value system and process for changing the value system, even if the value system is empty. There is no reason to think that a given preference-modifier will match humanity's. Why will they find "Because that hurts me" to be a valid point? Why will they return kindness with kindness?

You say the goal systems can't be designed in, why not?

It may be the case that we will have a wide range of semi-friendly subhuman or even near-human AGIs. But when we get a superhuman AGI that is smart enough to program better AGI, why can it not do that on its own?

I am positive that 'single entity' should not have mapped to 'big distributed global brain'.

But I also think an AIXI-like algorithm would be easy to parallelize and make globally distributed, and it still maximizes a single reward function.

Comment author: jacob_cannell 02 September 2010 02:17:57AM *  0 points [-]

On what basis will they learn? You're still starting out with an initial value system and process for changing the value system, even if the value system is empty.

They will have to learn by amassing a huge amount of observations and interactions, just as human infants do, and just as general agents do in AI theory (such as AIXI).

Human brains are complex, but very little of that complexity is actually precoded in the DNA. For humans, values, morals, and high-level goals are all learned knowledge, and they have varied tremendously over time and across cultures.

Why will they return kindness with kindness?

Well, if you raised the AI as such, it would.

Consider that a necessary precursor of following the strategy 'return kindness with kindness' is understanding what kindness itself actually is. If you actually map out that word, you need a pretty large vocabulary to understand it, and eventually that vocabulary rests on grounded verbs and nouns. And to understand those, they must be grounded on a vast pyramid of statistical associations acquired from sensorimotor interaction (unsupervised learning, aka experience). You can't program in this knowledge. There's just too much of it.

From my understanding of the brain, just about every concept has (or can potentially have) associated hidden emotional context - "rightness" and "wrongness" - and those concepts (good, bad, yes, no) are some of the earliest grounded concepts. The entire moral compass is not something you add later; it is concomitant with early development and language acquisition.

Will our AIs have to use such a system as well?

I'm not certain, but it may be such a nifty, powerful trick that we end up using it anyway. And even if there is another way that is still efficient, it may be that you can't really understand human languages unless you also understand the complex web of values. If nothing else, this approach certainly gives you control over the developing AI's value system. It appears that for human minds the value system is immensely complex - it is intertwined at a fundamental level with the entire knowledge base - and is inherently memetic in nature.

But when we get a superhuman AGI that is smart enough to program better AGI, why can it not do that on its own?

What is an AGI? It is a computer system (hardware), some algorithms/code (which it is actually always eventually better to encode directly in hardware - a 1000x performance increase), and data (learned knowledge). The mind part - all the qualities of importance - comes solely from the data.

So the 'programming' of the AI is not that distinguishable from the hardware design. I think AGIs will speed this up, but not nearly as dramatically as people here think. Remember, humans don't design new computers anymore anyway. Specialized simulation software does the heavy lifting - and it is already the bottleneck. An AGI would not be better than this specialized software at its task (generalized vs. specialized). It will almost certainly be able to improve it somewhat, but only up to the theoretical limits, and we are probably already close enough to them that this improvement will be minor.

AGIs will have a speedup effect on Moore's Law, but I wouldn't be surprised if this just ends up compensating for the increased difficulty going forward as we approach quantum limits and molecular computing.

In any case, we are simulation bound already and each new generation of processors designs (through simulation) the next. The 'FOOM' has already begun - it began decades ago.

But I also think an AIXI-like algorithm would be easy to parallelize and make globally distributed, and it still maximizes a single reward function.

Well, I'm pretty certain that AIXI-like algorithms aren't going to be directly useful - perhaps not ever - more as a sort of endpoint on the map.

But that's beside the point.

If you actually use even a more practical form of that general model - a single distributed AI with a single reward function and decision system - I can show you how terribly that scales. Your distributed AI with a million PCs is likely to be less intelligent than a single AI running on a tightly integrated workstation-class machine with, say, just 100x the performance of one of your PC nodes. The bandwidth and latency issues are just that extreme.
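
To put a toy number on that (the latencies are my assumptions, roughly typical of local memory access versus a wide-area round trip):

    # Toy sketch: a single decision loop that must synchronize across nodes each
    # step is capped by round-trip latency, no matter how many nodes you add.
    LOCAL_LATENCY = 100e-9   # ~100 ns: memory/interconnect inside one machine (assumed)
    WAN_LATENCY   = 50e-3    # ~50 ms: round trip between far-flung PCs (assumed)

    print(f"tight workstation:         ~{1 / LOCAL_LATENCY:.0e} serial steps/sec")
    print(f"million-PC distributed AI: ~{1 / WAN_LATENCY:.0f} serial steps/sec")
    # roughly 1e7 vs 20 -- a ~500,000x gap in serial decision rate, before bandwidth costs even enter.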

Comment author: rhollerith_dot_com 31 August 2010 08:52:45AM *  3 points [-]

The only way to fully know what a program will do in general given some configuration of its memory is to simulate the whole thing - which is equivalent to making a copy of it.

And the probability that a sufficiently intelligent agent will ever need to fully know what a program will do is IMHO negligible. If the purpose of the program is to play chess, for example, the agent probably only cares that the program does not persist in making an illegal move and that it gets as many wins and draws as possible. Even if the agent cares about more than just that, the agent cares only about a small, finite list of properties.

If the purpose of the program is to keep track of bank balances, the agent again only cares whether the program has a small, finite list of properties: e.g., whether it disallows unauthorized transactions, whether it ensures that every transaction leaves an audit trail, and whether the bank balances and accounts obey "the law of the conservation of money".

It is emphatically not true that the only way to know whether a program has those properties is to run or simulate the program.

Could it be that you are interpreting Rice's theorem too broadly? Rice's theorem says that there is always some program that cannot be classified correctly as to whether it has some property. But programmers just pick programs that can be classified correctly, and this always proves possible in practice.

In other words, if the programmer wants his program to have properties X, Y, and Z, he simply picks from the class of programs that can be classified correctly (as to whether the program has properties X, Y and Z) and this is straightforward and not something an experienced programmer even has consciously to think about unless the "programmer" (who in that case is really a theory-of-computing researcher) was purposefully looking for a set of properties that cannot be satisfied by a program.

Now it is true that human programmers spend a lot of time testing their programs and "simulating" them in debuggers, but there is no reason that all the world's programs could not be delivered without doing any of that: those techniques are simply not necessary to delivering code that is assured to have the properties desired by our civilization.

For example, if there were enough programmers with the necessary skills, every program could be delivered with a mathematical proof that it has the properties it was intended to have, and this would completely eliminate the need for testing or debugging. (If the proof and the program are developed at the same time, the "search of the space of possible programs" naturally avoids the regions where one might run into the limitation described in Rice's theorem.)

There are in fact not enough programmers with the necessary skills to deliver such "correctness proofs" for all the programs that the world's programmers currently deliver, but superintelligences will not suffer from that limitation. IMHO they will almost never resort to testing and debugging the programs they create. They will instead use more efficient techniques.

And if a superintelligence -- especially one that can improve its own source code -- happens on a program (in source code form or in executable form), it does not have to run, execute or simulate the program to find out what it needs to find out about it.

Virtual machines, interpreters, and the idea of simulation or program execution are important parts of current technology (and consequently current intellectual discourse) only because human civilization does not yet have the intellectual resources to wield more sophisticated techniques. To reach this conclusion, it was sufficient for me to study the line of research called "programming methodology" or axiomatic semantics, which began in the 1960s with John McCarthy, R. W. Floyd, C. A. R. Hoare, and Dijkstra.

Note also that what is now called discrete-event simulation and what was in the early decades of computing called simply "simulation" has shrunk in importance over the decades as humankind has learned more sophisticated and more productive ways (e.g., statistical machine learning, which does not involve the simulation of anything) of using computers.

Comment author: jacob_cannell 31 August 2010 11:56:02PM 1 point [-]

And the probability that a sufficiently intelligent agent will ever need to fully know what a program will do is IMHO negligible.

Err, what? This isn't even true today. If you are building a 3-billion-transistor GPU, you need to know exactly how that vastly complex physical system works (or doesn't); you need to simulate it in detail, and eventually you have to actually, physically build it.

If you are making a software system, again you need to know what it will do, and you can gain approximate knowledge with various techniques, but eventually you need to actually run the program itself. There is no mathematical shortcut (the halting theorem for one, but it's beyond that).

Your vision of programmers working without debuggers and hardware engineers working without physical simulations, using 'correctness proofs' instead, is, in my view, unrealistic. Although if you really do have a much better way, perhaps you should start a company.

Comment author: rhollerith_dot_com 01 September 2010 01:42:01AM *  1 point [-]

You are not engaging deeply with what I said, Jacob.

For example, you say, "This is not even true today," (emphasis mine) which strongly suggests that you did not bother to notice that I acknowledged that simulations, etc, are needed today (to keep costs down and to increase the supply of programmers and digital designers -- most programmers and designers not being able to wield the techniques that a superintelligence would use). It is after the intelligence explosion that simulations, etc, almost certainly become obsolete IMO.

Since writing my last comment, it occurs to me that the most unambiguous and cleanest way for me to state my position is as follows.

Suppose it is after the intelligence explosion and a superintelligence becomes interested in a program or a digital design like a microprocessor. Regardless of how complicated the design is, how much the SI wants to know about the design or the reasons for the SI's interest, the SI will almost certainly not bother actually running the program or simulating the design because there will almost certainly be much better ways to accomplish the same ends.

The way I became confident in that position is through what (meager compared to some LWers) general knowledge I have of intelligence and superintelligence (which it seems that you have, too) combined with my study of "programming methodology" -- i.e, research into how to develop a correctness proof simultaneously with a program.

I hasten to add that there are probably techniques available to a SI that require neither correctness proofs nor running or simulating anything -- although I would not want to have to imagine what they would be.

Correctness proofs (under the name "formal verification") are already heavily used in the design of new microprocessors BTW. I would not invest in a company whose plan to make money is to support their use because I do not expect their use to grow quickly because the human cognitive architecture is poorly suited to their use compared to more mainstream techniques that entail running programs or simulating designs. In fact, IMHO the mainstream techniques will continue to be heavily used as long as our civilization relies on human designers with probability .9 or so.

Comment author: jacob_cannell 01 September 2010 03:09:26AM 0 points [-]

Regardless of how complicated the design is, how much the SI wants to know about the design or the reasons for the SI's interest, the SI will almost certainly not bother actually running the program or simulating the design because there will almost certainly be much better ways to accomplish the same ends.

Err, no. Actually the SI would be smart enough to understand that the optimal algorithm for perfect simulation of a physical system requires (1) a full quantum computer with at least as many qubits as the original system, and (2) at least as much energy and time as the original system.

In other words, there is no free lunch and there is no shortcut: if you really want to build something in this world, you can't be 100% certain that it will work until you actually build it.

That being said, the next best thing - the closest program - is a very close approximate simulation.

From Wikipedia on "formal verification": the links mention that the cost of formally verifying large software, in the few cases where it was done, was astronomical. It mentions that these methods are used for hardware design, but I'm not sure how that relates to simulation - I know extensive physical simulation is also used. From the wiki, it sounds like formal verification can remove the need to simulate all possible states. (Note that in my analysis above I was considering simulating only one timeslice, not all possible configurations - that's obviously far, far worse.) So it sounds like formal verification is a tool built on top of physical simulation to reduce the exponential explosion.

You can imagine that:

there are probably techniques available to a SI that require neither correctness proofs nor running or simulating anything -- although I would not want to have to imagine what they would be.

But imagining things alone does not make them exist, and we know from current theory that absolute physical knowledge requires perfect simulation. There is a reason why we investigate time/space complexity bounds. No SI, no matter how smart, can do the impossible.

Comment author: timtyler 01 September 2010 07:41:41AM 1 point [-]

In other words, there is no free lunch and there is no shortcut: if you really want to build something in this world, you can't be 100% certain that it will work until you actually build it.

You can't be 100% certain even then. Testing doesn't produce certainty - you usually can't test every possible set of input configurations.

Comment author: rhollerith_dot_com 01 September 2010 02:18:39AM *  0 points [-]

There is no mathematical shortcut (the halting theorem for one, but it's beyond that).

A program is chosen from a huge design space, and any effective designer will choose a design that minimizes the mental labor needed to understand it. So, although there are quite simple Turing machines that no human can explain the workings of, Turing machines like them simply do not get chosen by designers who do want to understand their design.

The halting theorem says that you can pick a program that I cannot tell whether it halts on every input. EDIT. Or something like that: it has been a while. The point is that the halting theorem does not contradict any of the sequence of statements I am going to make now.

Nevertheless, I can pick a program that does halt on every input. ("always halts" we will say in the future.)

And I can pick a program that sorts its input tape before it (always) halts.

And I can pick a program that interprets its input tape as a list of numbers and outputs the sum of the numbers before it (always) halts.

And I can pick a program that interprets its input tape as the coefficients of a polynomial and outputs the zeros of the polynomial before it (always) halts.

Etc. See?

And I can know that I have successfully done these things without ever running the programs I picked.

Well, here. I do not have the patience to define or write a Turing machine, but here is a Scheme program that adds a list of numbers. I have never run this program, but I will give you $10 if you can pick an input that causes it to fail to halt or to fail to do what I just said it will do.

(define (sum list) (cond ((equal '() list) 0) (#t (+ (car list) (sum (cdr list))))))

Comment author: wnoise 01 September 2010 04:32:25AM -1 points [-]

Well, that's easy -- just feed it a circular list.

Comment author: rhollerith_dot_com 01 September 2010 04:50:55AM *  0 points [-]

Well, that's easy -- just feed it a circular list.

Nice catch, wnoise.

But for those following along at home, if I had been more diligent in my choice (i.e., if instead of "Scheme", I had said "a subset of Scheme, namely, Scheme without circular lists"), there would have been no effective answer to my challenge.

So, my general point remains, namely, that a sufficiently careful and skilled programmer can deliver a program guaranteed to halt and guaranteed to have the useful property or properties that the programmer intends it to have without the programmer's ever having run the program (or ever having copied the program from someone who ran it).

Comment author: Strange7 01 September 2010 05:14:02AM 2 points [-]

if I had been more diligent

And that's why humans will continue to need debuggers for the indefinite future.

Comment author: rhollerith_dot_com 01 September 2010 05:55:10AM *  1 point [-]

And that is why wnoise used a debugger to find a flaw in my position. Oh, wait! wnoise didn't use a debugger to find the flaw.

(I'll lay off the sarcasm now, but give me this one.)

Also: I never said humans will stop needing debuggers.

Comment author: jacob_cannell 01 September 2010 05:00:57AM 0 points [-]

Sure it is possible to create programs that can be formally verified, and even to write general-purpose verifiers. But that's not directly related to my point about simulation.

Given some arbitrary program X and a sequence of inputs Y, there is no general program that can predict the output Z of X given Y that is simpler and faster than X itself. If this wasn't true, it would be a magical shortcut around all kinds of complexity theorems.

So in general, the most efficient way to certainly predict the complete future output state of some complex program (such as a complex computer system or a mind) is to run that program itself.

Comment author: rhollerith_dot_com 01 September 2010 06:45:28AM *  1 point [-]

Given some arbitrary program X and a sequence of inputs Y, there is no general program that can predict the output Z of X given Y that is simpler and faster than X itself. If this wasn't true, it would be a magical shortcut around all kinds of complexity theorems.

I agree with that, but it does not imply there will be a lot of agents simulating agents after the intelligence explosion if simulating means determining the complete future behavior of an agent. There will be agents doing causal modeling of agents. Causal modeling allows the prediction of relevant properties of the behavior of the agent even though it probably does not allow the prediction of the complete future behavior or "complete future output state" of the agent. But then almost nobody will want to predict the complete future behavior of an agent or a program.

Consider again the example of a chess-playing program. Is it not enough to know whether it will follow the rules and win? What is so great or so essential about knowing the complete future behavior?

Comment author: jacob_cannell 01 September 2010 06:44:01PM *  1 point [-]

But then almost nobody will want to predict the complete future behavior of an agent or a program.

Of course they do. But let's make our language more concise and specific.

It's not computationally tractable to model the potentially exponential set of complete future behaviors of a particular program (which could include any physical system, from a car, to a chess program, to an intelligent mind) given any possible input.

But that is not what I have been discussing. It is related, but tangentially.

If you are designing an airplane, you are extremely interested in simulating its flight characteristics given at least one 'input' configuration that system may eventually find itself in (such as flying at 20,000 ft in earth's atmosphere).

If you are designing a program, you are extremely interested in simulating exactly what it does given at least one 'input' configuration that system may eventually find itself in (such as what a rendering engine will do given a description of a 3D model).

So whenever you start talking about formal verification and all that, you are talking past me. You are talking about the even vastly more expensive task of predicting the future state of a system over a large set (or even the entire set) of its inputs - and this is necessarily more expensive than what I am even considering.

If we can't even agree on that, there's almost no point of continuing.

Consider again the example of a chess-playing program. Is it not enough to know whether it will follow the rules and win? What is so great or so essential about knowing the complete future behavior?

So let's say you have a chess-playing program, and I develop a perfect simulation of your chess-playing program. Why is that interesting? Why is that useful?

Because I can use my simulation of your program to easily construct a program that is strictly better at chess than your program and dominates it in all respects.

This is directly related to the evolution of intelligence in social creatures such as humans. A 'smarter' human that can accurately simulate the minds of less intelligent humans can strictly dominate them socially: manipulate them like chess pieces.

Are we still talking past each other?

Intelligence is simulation.
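
A sketch of what I mean, in code (my illustration, not a real chess engine; `opponent_move`, `legal_moves`, `apply_move`, and `evaluate` are hypothetical stand-ins, and game-over handling is omitted):

    # Using a perfect simulation of the opponent as a subroutine in my own search.
    def exploit(state, opponent_move, legal_moves, apply_move, evaluate, depth=4):
        def value_after_my_move(s, d):
            # s is the position just after one of my candidate moves;
            # the opponent's reply comes from *their own simulated program*.
            s = apply_move(s, opponent_move(s))
            if d == 0:
                return evaluate(s)
            # (terminal positions / game-over checks omitted for brevity)
            return max(value_after_my_move(apply_move(s, m), d - 1)
                       for m in legal_moves(s))
        return max(legal_moves(state),
                   key=lambda m: value_after_my_move(apply_move(state, m), depth))

Because every reply in the search tree is generated by the opponent's own program, this player steers toward exactly the lines that program handles worst - that's the sense in which the simulation lets you strictly dominate the original.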

Comment author: rhollerith_dot_com 01 September 2010 06:38:57AM *  1 point [-]

Sure it is possible to create programs that can be formally verified

Formal verification is not the point: I did not formally verify anything.

The point is that I did not run or simulate anything, and neither did wnoise in answering my challenge.

We all know that humans run programs to help themselves find flaws in the programs and to help themselves understand the programs. But you seem to believe that for an agent to create or to understand or to modify a program requires running the program. What wnoise and I just did shows that it does not.

Ergo, your replies to me do not support your position that the future will probably be filled with simulations of agents by agents.

And in fact, I expect that there will be almost no simulations of agents by agents after the intelligence explosion for reasons that are complicated, but which I have said a few paragraphs about in this thread.

Programs will run and some of those programs will be intelligent agents, but almost nobody will run a copy of an agent to see what the agent will do because there will be more efficient ways to do whatever needs doing -- and in particular "predicting the complete output state" of an agent will almost never need doing.

Comment author: jacob_cannell 01 September 2010 06:52:31PM *  1 point [-]

Programs will run and some of those programs will be intelligent agents, but almost nobody will run a copy of an agent to see what the agent will do

I feel like you didn't read my original post. Here is the line of thinking again, condensed:

  1. Universal optimal intelligence requires simulating the universe to high fidelity (AIXI)
  2. as our intelligence grows towards 1 (approaching but never achieving it), we will simulate the universe at ever higher fidelity
  3. intelligence is simulation

rhollerith, if I had a perfect simulation of you, I would evaluate the future evolution of your mindstate after reading millions of potential posts I could write, and eventually find the optimal post that would convince you. Unfortunately, I don't have that perfect simulation, and I don't have that much computation, but it gives you an idea of its utility.

If I had a perfect simulation of your chess program, then with just a few more lines of code I would have a chess program that is strictly better than yours. And this relates directly to the evolution of intelligence in social creatures.

Comment author: Vladimir_Nesov 31 August 2010 09:48:10AM *  0 points [-]

Why does this confused, applause-lights-laden post have a non-negative rating?

Comment author: khafra 31 August 2010 10:56:54AM 3 points [-]

Aside from the "Alien Dreams" section, and besides name-checking AIXI and not having a cast of stars, the major premise of this post is the same as that of the ultimate mega meta crossover. That isn't terribly downvote-deserving, is it?

Comment author: Spurlock 31 August 2010 06:22:19PM 1 point [-]

Word to the wise: the linked story contains spoilers for other works of fiction; see the prologue.

Comment author: KrisC 30 August 2010 10:47:17PM *  1 point [-]

I Dream of AIXI

...does AIXI Dream of Me?

Comment author: ata 30 August 2010 10:57:20PM 6 points [-]

Only in Soviet Russia.

Comment author: Tiiba 31 August 2010 05:37:09AM 1 point [-]

I have a question. What does it mean for AIXI to be the optimal time-bounded AI? If it's so great, why do people still bother with ANNs and SVMs and SOMs and KNNs and TLAs and T&As? My understanding of it is rather cloudy (as is my understanding of all but the last two of the above), so I'd appreciate clarification.

Comment author: CronoDAS 31 August 2010 10:11:58AM *  8 points [-]

First of all, AIXI isn't actually "the optimal time bounded AI". What AIXI is "optimal" for is coming to correct conclusions when given the smallest amount of data, and by "optimal" it means "no other program does better than AIXI in at least one possible world without also doing worse in another".

Furthermore AIXI itself uses Solomonoff induction directly, and Solomonoff induction is uncomputable. (It can be approximated, though.)

AIXItl is the time-limited version of AIXI, but it amounts to "test all the programs that you can, find the best one, and use that" - and it's only "optimal" when compared against the programs that it can test, so it's not actually practical to use, either.

(At least, that's what I could gather from reading the PDF of the paper on AIXI. Could someone who knows what they're talking about correct any mistakes?)
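
As a toy illustration of the 2^-length weighting (only a cartoon of Solomonoff induction over a hand-picked hypothesis space; the real thing enumerates all programs and is uncomputable):

    # Cartoon of Solomonoff-style prediction: weight each hypothesis consistent
    # with the data by 2^-(description length), then combine their predictions.
    # The hypothesis space and bit-lengths here are hand-picked toys.
    hypotheses = [
        (3,  lambda i: 0),                      # "all zeros"
        (4,  lambda i: i % 2),                  # "alternate 0,1,0,1,..."
        (12, lambda i: i % 2 if i < 4 else 1),  # "alternate four times, then all ones"
    ]

    def predict_next(observed):
        n = len(observed)
        weights = {0: 0.0, 1: 0.0}
        for length, gen in hypotheses:
            if all(gen(i) == bit for i, bit in enumerate(observed)):  # fits the data so far?
                weights[gen(n)] += 2.0 ** (-length)                   # Occam-style prior
        total = sum(weights.values())
        return {bit: w / total for bit, w in weights.items()} if total else None

    print(predict_next([0, 1, 0, 1]))  # ~99.6% of the weight backs the shorter rule's prediction (0)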

Comment author: gwern 31 August 2010 07:16:53AM 3 points [-]

If it's so great, why do people still bother with ANNs and SVMs and SOMs and KNNs and TLAs and T&As?

Are you familiar with Big O? See also 'constant factor'.

(You may not be interested in the topic, but an understanding of Big O and constant factors is one of the taken-for-granted pieces of knowledge here.)

Comment author: Tiiba 31 August 2010 03:56:03PM -1 points [-]

You mean that horrible Batman knockoff? I hated it.

Yeah, I know what Big O is.

Comment author: jimrandomh 31 August 2010 06:24:56AM 4 points [-]

There is an enormous difference between the formal mathematical definition of "computable", and "able to be run by a computer that could be constructed in this universe". AIXI is computable in the mathematical sense of being written as a computer program that will provably halt in finitely many steps, but it is not computable in the sense of it being possible to run it, even by using all the resources in the observable universe optimally, because the runtime complexity of AIXI is astronomically larger than the universe is.

Comment author: wedrifid 31 August 2010 11:19:53AM 13 points [-]

because the runtime complexity of AIXI is astronomically larger than the universe is.

'Astronomically'? That's the first time I've seen that superlative inadequate for the job.

Comment author: Vladimir_Nesov 31 August 2010 09:44:20AM *  5 points [-]

AIXI is computable in the mathematical sense of being written as a computer program that will provably halt in finitely many steps

AIXI's decision procedure is not computable (but AIXItl's is). (Link)

Comment author: jacob_cannell 31 August 2010 06:45:53AM 0 points [-]

Yes, I think the term in computational complexity theory is tractability, which is the practical subset of computability.

AIXI is interesting just from a philosophical perspective, but even in the practical sense it has utility in showing what the ultimate limit is, and starting there you can find approximations and optimizations that move you into the land of the tractable.

As an example analogy: in computer graphics we have full-blown particle ray tracing as the most accurate theory at the limit, and starting with that and then speeding it up with approximations that minimize the loss of accuracy is a good strategy.

The Monte Carlo approximation to AIXI is tractable, and it can play small games (fairly well?).

For a more practical AGI design on a limited budget, it's probably best to use hierarchical approximate simulation, more along the lines of what the mammalian cortex appears to do.
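
For flavor, here is a minimal sketch of the Monte Carlo style of approximation (not the actual MC-AIXI algorithm; `env_model` is an assumed simulator from (state, action) to (next_state, reward)):

    # Minimal Monte Carlo planning sketch: score each action by averaging the
    # reward of random rollouts through an (assumed) environment model.
    import random

    def rollout(env_model, state, actions, horizon):
        total = 0.0
        for _ in range(horizon):
            state, reward = env_model(state, random.choice(actions))
            total += reward
        return total

    def choose_action(env_model, state, actions, horizon=10, samples=200):
        def score(action):
            results = [env_model(state, action) for _ in range(samples)]
            return sum(r + rollout(env_model, s, actions, horizon - 1)
                       for s, r in results) / samples
        return max(actions, key=score)

Everything interesting then lives in how good (and how cheap) the environment model is, which is where the hierarchical approximation comes in.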

Comment author: JamesAndrix 31 August 2010 04:29:24AM *  1 point [-]

The set of simulation possibilities can be subdivided into PHS (posthuman historical), AHS (alien historical), and AFS (alien future) simulations (as posthuman future simulation is inconsistent).

What these categories meant was not clear to me on first reading.

I currently understand AFS as something like aliens finding earlier [humanity] and trying to predict what we will do. AHS would be the result of aliens interacting with a more mature humanity and trying to deduce particulars about our origin, perhaps for use in an AFS.

If I have that right, PFS might not be entirely inconsistent, as one posthuman might try to fully model another, at least into the near future. Edit: oh but WE are not those simulations. (unless there is a secret government foomed AI modeling a future in which it remains secret, and it's really 1985)

Comment author: jacob_cannell 31 August 2010 06:12:05AM 0 points [-]

Yeah, PFS seems pretty unlikely ;)

Comment author: Spurlock 31 August 2010 01:44:52PM 0 points [-]

near-infinite Turing Machine

I'm not sure this is a meaningful concept.

Comment author: Baughn 31 August 2010 04:50:27PM 0 points [-]

How about "arbitrarily large; a machine whose clock-speed can be set to any finite integer value by a program running on that machine"?