Implications of the Theory of Universal Intelligence

If you hold the AIXI theory of universal intelligence to be correct, that is, if you accept it as a useful model of general intelligence at the quantitative limits, then you should take the Simulation Argument seriously.


AIXI shows us the structure of universal intelligence as computation approaches infinity. Imagine that we had an infinite or near-infinite Turing Machine. There would then exist a relatively simple 'brute force' algorithm that is optimal for universal intelligence.


Armed with such massive computation, we could take all of our current observational data and run a weighted search through the subspace of all possible programs that correctly predict this sequence (in this case, all the data we have accumulated to date about our small observable slice of the universe), then use the surviving programs to predict what comes next. AIXI in its raw form is not computable (because of the halting problem), but the slightly modified time-limited version is, and it is still universal and optimal.
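To make the brute-force idea concrete, here is a toy sketch in Python. It is emphatically not AIXI: instead of enumerating every program for a universal Turing machine, it enumerates a small, hypothetical hypothesis class (repeating bit patterns), weights each hypothesis that survives the data by 2^-length in the spirit of the Solomonoff prior, and predicts the next observation. The function names and the restricted hypothesis space are illustrative assumptions, not part of the formal theory.

# Toy sketch of Solomonoff-style prediction (NOT real AIXI).
# Assumption: "all programs" is replaced by a small, enumerable class of
# repeating binary patterns, with a pattern's period standing in for its
# description length l(p) in bits.

from itertools import product

def hypotheses(max_period=4):
    # Yield (pattern, description_length_in_bits) pairs.
    for period in range(1, max_period + 1):
        for bits in product("01", repeat=period):
            yield "".join(bits), period

def predict_next(observed, max_period=4):
    # Keep every hypothesis consistent with the observed bits, weight it by
    # 2^-length (the Occam-style prior), and return the posterior
    # probability that the next bit is '1'.
    weight_one = weight_total = 0.0
    for pattern, length in hypotheses(max_period):
        stream = pattern * (len(observed) // len(pattern) + 2)
        if stream.startswith(observed):
            w = 2.0 ** (-length)
            weight_total += w
            if stream[len(observed)] == "1":
                weight_one += w
    return weight_one / weight_total if weight_total else 0.5

print(predict_next("010101"))  # close to 0.0: the simplest consistent patterns predict '0'

The real construction replaces the toy pattern class with all programs and the single prediction with a full expectimax over future actions and rewards, which is what makes raw AIXI uncomputable.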


The philosophical implication is that actually running such an algorithm on an infinite Turing Machine would have the interesting side effect of actually creating all such universes.

AIXI’s mechanics, based on Solomonoff Induction, penalize complex programs with an exponential falloff (2^-l(p)), a mechanism akin to Occam’s Razor. This bias against longer (and thus more complex) programs lends strong support to the goal of String Theorists, who are attempting to find a simple, short program that unifies all current physical theories into a single compact description of our universe. We must note that to date, efforts towards this admirable (and well-justified) goal have not borne fruit. We may actually find that the simplest algorithm that explains our universe is more ad hoc and complex than we would like it to be. But leaving that aside, imagine that there is some relatively simple program that concisely explains our universe.
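For reference, the falloff referred to above is usually written via the Solomonoff universal prior over observation strings x, where U is a universal prefix Turing machine, the sum runs over programs p whose output begins with x, and l(p) is the length of p in bits:

M(x) = \sum_{p \,:\, U(p) = x*} 2^{-l(p)}

Short programs dominate this sum, which is the formal sense in which AIXI-style induction favors compact physical theories.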

If we look at the history of the universe to date, from the Big Bang to our current moment in time, there appears to be a clear local telic evolutionary arrow towards greater X, where X is variously described as or associated with extropy, complexity, life, intelligence, computation, and so on. It is also fairly clear that X (however quantified) is an exponential function of time. Moore’s Law is a specific example of this larger pattern.


This leads to a reasonable inductive assumption, which we can call the reasonable assumption of progress: local extropy will continue to increase exponentially for the foreseeable future, and thus so will intelligence and computation (both physical computational resources and algorithmic efficiency). This trend appears to be universal, a fundamental emergent property of our physics.


Simulations

If you accept that the reasonable assumption of progress holds, then AIXI implies that we almost certainly live in a simulation now.


As our future descendants expand in computational resources and intelligence, they will approach the limits of universal intelligence. AIXI says that any such powerful universal intelligence, no matter what its goals or motivations, will create many simulations that are, in effect, pocket universes.


The AIXI model proposes that simulation is the core of intelligence (with human-like thoughts being simply one approximate algorithm), and as an intelligence approaches the universal limits, the simulations it necessarily employs will approach the fidelity of real universes - complete with all the entailed trappings, such as conscious simulated entities.


The reasonable assumption of progress modifies our big-picture view of cosmology and the predicted history and future of the universe. A compact physical theory of our universe (or multiverse), when run forward on a sufficient Universal Turing Machine, will lead not to one single universe/multiverse, but to an entire ensemble of such multiverses embedded within each other, something like a hierarchy of Matryoshka dolls.

The number of possible levels of embedding and the branching factor at each step can be derived from physics itself. Although such derivations are preliminary and necessarily involve significant unknowns (mainly related to the final physical limits of computation), suffice it to say that we have good reason to believe the branching factor is absolutely massive and that many levels of simulation embedding are possible.
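As a rough, heavily hedged illustration (not a derivation), one can compare a commonly cited physical upper bound on computation with a stock estimate of the computation needed for one human-brain-equivalent observer. The specific figures below (Bremermann's limit, a 10^16 ops/s brain estimate, Earth's mass) are assumptions chosen for the sketch, not numbers from this post, and real hardware falls far short of the bound:

# Hedged back-of-envelope sketch of why the branching factor could be huge.
# Assumed, commonly cited rough figures (not from the post):
#   Bremermann's limit: ~1.36e50 bit-operations per second per kg of matter
#   One human-brain-equivalent: ~1e16 operations per second (estimates vary widely)
#   Mass of an Earth-sized planet: ~6e24 kg
BREMERMANN_OPS_PER_KG_S = 1.36e50
BRAIN_OPS_PER_S = 1e16
PLANET_MASS_KG = 6e24

limit_ops_per_s = BREMERMANN_OPS_PER_KG_S * PLANET_MASS_KG
brain_equivalents_per_s = limit_ops_per_s / BRAIN_OPS_PER_S
print(f"{brain_equivalents_per_s:.1e}")  # ~8.2e58 brain-equivalents per second at the limit

Even discounting the bound by tens of orders of magnitude leaves room for astronomical numbers of simulated observers, which is all the argument here needs.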

Some seem to have an intrinsic bias against the idea based solely on its strangeness.

Another common mistake stems from anthropomorphic bias: people tend to imagine the simulators as future versions of themselves.

The space of potential future minds is vast, and it is a failure of imagination on our part to assume that our descendants will be similar to us in detail, especially when we have specific reasons to conclude that they will be vastly more complex.

Asking whether future intelligences will run simulations for entertainment or other purposes is not the right question, not even the right mode of thought. They may or they may not; it is difficult to predict future goal systems. But those aren’t the important questions anyway, as all universal intelligences will ‘run’ simulations, simply because that is precisely the core nature of intelligence itself. As intelligence expands exponentially into the future, the simulations expand in quantity and fidelity.


The Ensemble of Multiverses


Some critics of the SA rationalize their way out by advancing a position of ignorance concerning the set of possible external universes our simulation may be embedded within. The reasoning then concludes that since this set is essentially unknown, infinite, and uniformly distributed, the SA as such tells us nothing. These assumptions do not hold water.

Imagine our physical universe, and its minimal program encoding, as a point in a higher multi-dimensional space. The entire aim of physics is, in a sense, related to AIXI itself: through physics we are searching for the simplest program that can consistently explain our observable universe. As noted earlier, the SA then falls out naturally, because it appears that any universe of our type, when run forward, necessarily leads to a vast fractal hierarchy of embedded simulated universes.

At the apex is the base level of reality, and all the simulated universes below it correspond to slightly different points in the space of all potential universes, as they are all slight approximations of the original. But would other points in the space of universe-generating programs also generate observed universes like our own?

We know that the fundamental constants of our current physics are apparently well-tuned for life; thus our physics is a lone point in the topological space supporting complex life: even tiny displacements in any direction result in lifeless universes. The topological space around our physics is thus sparse for life/complexity/extropy. There may be other topological hotspots, and if you go far enough in some direction you will necessarily find other universes in Tegmark’s Ultimate Ensemble that support life. However, AIXI tells us that intelligences in those universes will simulate universes similar to their own, and thus nothing like our universe.

On the other hand we can expect our universe to be slightly different from its parent due to the constraints of simulation, and we may even eventually be able to discover evidence of the approximation itself.  There are some tentative hints from the long-standing failure to find a GUT of physics, and perhaps in the future we may find our universe is an ad-hoc approximation of a simpler (but more computationally expensive) GUT theory in the parent universe.


Alien Dreams

Our Milky Way galaxy is vast and old, consisting of hundreds of billions of stars, some of which are more than 13 billion years old, nearly three times as old as our sun. We have direct evidence of technological civilization developing from simple protozoans in about 4 billion years, but it is difficult to generalize past this single example. However, we now have mounting evidence that planets are common, that the biological precursors to life are probably common, and that simple life may even have had a historical presence on Mars; the signs increasingly support the principle of mediocrity: our solar system is not a precious gem, but is in fact a typical random sample.

If the evidence for the mediocrity principle continues to mount, it provides further strong support for the Simulation Argument. If we are not the first technological civilization to have arisen, then technological civilization arose and achieved Singularity long ago, and we are thus astronomically more likely to be in an alien simulation rather than a posthuman one.

What does this change?

The set of simulation possibilities can be subdivided into PHS (posthuman historical), AHS (alien historical), and AFS (alien future) simulations (as posthuman future simulation is inconsistent). If we discover that we are unlikely to be the first technological Singularity, we should assume AHS and AFS dominate. For reasons beyond the scope of this post, I expect the AFS set to outnumber the AHS set.

Historical simulations would aim for historical fidelity, but future simulations would aim for fidelity to a 'what-if' scenario, considering some hypothetical action the alien simulating civilization could take.  In this scenario, the first civilization to reach technological Singularity in the galaxy would spread out, gather knowledge about the entire galaxy, and create a massive number of simulations.  It would use these in the same way that all universal intelligences do: to consider the future implications of potential actions.

What kinds of actions?  

The first-born civilization would presumably encounter many planets that already harbor life in various stages, along with planets that could potentially harbor life.  It would use forward simulations to predict the final outcome of future civilizations developing on these worlds.  It would then rate them according to some ethical/utilitarian theory (we don't even need to speculate on the criteria), and it would consider and evaluate potential interventions to change the future historical trajectory of that world: removing undesirable future civilizations, pushing other worlds towards desirable future outcomes, and so on.

At the moment it is hard to assign a priori weightings to future vs. historical simulation possibilities, but the apparent age of the galaxy compared to the relative youth of our sun is a tentative hint that we live in a future simulation, and thus that our history has potentially been altered.

 

Comments (143)

I feel much the same about this post as I did about Roko's Final Post. It's imaginative, it's original, it has an internal logic that manages to range from metaphysics to cosmology; it's good to have some crazy-bold big-picture thinking like this in the public domain; but it's still wrong, wrong, wrong. It's an artefact of its time rather than a glimpse of reality. The reason it's nonetheless interesting is that it's an attempt to grasp aspects of reality which are not yet understood in its time - and this is also why I can't prove it to be "wrong" in a deductive way. Instead, I can only oppose my postulates to the author's, and argue that mine make more sense.

First I want to give a historical example of human minds probing the implications of things new and unknown, which in a later time became familiar and known. The realization that the other planets were worlds like Earth, a realization we might date from Galileo forwards, opened the human imagination to the idea of other worlds in the sky. People began to ask themselves: what's on those other worlds, is there life, what's it like; what's the big picture, the logic of the situation. In the present day, when robot pro...

2jacob_cannell14y
So from searching around, it looks like Roko was cosmically censored or something on this site. I don't know if that's supposed to be a warning (if you keep up this train of thought, you too will be censored), or just an observation - but again I wasn't here so I don't know much of anything about Roko or his posts.

1. We have sent robot probes to only a handful of locations in our solar system, a far cry from "most of the planets" unless you think the rest of the galaxy is a facade (and yeah, I realize you probably meant the solar system, but still). And the jury is still out on Mars - it may have had simple life in the past. We don't have enough observational data yet. Also, there may be life on Europa or Titan. I'm not holding my breath, but it's worth mentioning.

2. Beware the hindsight bias. When we had limited observational data, it was very reasonable, given what we knew then, to suppose that other worlds were similar to our own. If you seriously want to weigh the principle of anthropomorphic uniqueness (that earth is a rare unique gem in every statistical measure) against the principle of mediocrity, the evidence for the latter is quite strong. Without more observational data, we simply do not know the prior probability for life. But lacking detailed data, we should assume we are a random sample from some unknown distribution. We used to think we were in the center of the galaxy, but we are within the middle 95% interval; we used to think our system was unique in having planets, but we now know that our system is typical in this sense (planets are typical); our system is not especially older or younger, and so on. By all measures we can currently measure based on the data we have now, our system is average. So you can say that life arises to civilization on average in only one system in a trillion, but at the moment it is extremely difficult to make any serious case for that, and the limited evidence strongly suggests otherwise. Based on our knowledge of our solar system, w
7Paul Crowley14y
Roko wasn't censored; he deleted everything he'd ever posted. I've independently confirmed this via contact with him outside LW.
3wedrifid14y
Roko was censored and publicly abused in and about one post but he deleted everything else himself. (That would have taken hours of real time unless he created some sort of automaton. I tried just browsing through my posts for the last few months and it took ages!)
2timtyler14y
Actually lots of people were censored - several of my comments were removed from the public record, for example - and others were totally deleted.
2Paul Crowley14y
Hmm, I didn't ask whether he'd ever had a comment deleted; what I'm confident of is that the root-and-branch removal of all his work was his own doing.
1timtyler14y
That's what he says here.

Please make the font size consistent.

2jacob_cannell14y
Done. Sorry about that.

The changes in font are distracting.

0Spurlock14y
Also the formatting somehow lays complete waste to the page layout when viewed in Opera. Makes me think there might be a bug in whatever code should be escaping HTML entities.

Sorry, what is AIXI? It was not clear to me from the linked abstract.

4jacob_cannell14y
Sorry, I should have linked to a quick overview of AIXI. It's basically an algorithm for ultimate universal intelligence, and a theorem showing the algorithm is optimal. It shows what a universal intelligence could be or should be like at the limits - given vast amounts of computation.
3Mass_Driver14y
Interesting. (1) What do you mean by "intelligence?" (2) Why would "actually running such an algorithm on an infinite Turing Machine...have the interesting side effect of actually creating all such universes?"
6jacob_cannell14y
1. The AIXI algorithm amounts to a formal mathematical definition of intelligence, but in plain English we can just say intelligence is a capacity for modelling and predicting one's environment.

2. This relates to the computability of physics and the materialist computationalist assumption in the SA itself. If we figure out the exact math underlying the universe (and our current theories are pretty close), and you ran that program on an infinite or near-infinite computer, that system would be indistinguishable from the universe itself from the perspective of observers inside the simulation. Thus it would recreate the universe (albeit embedded in a parent universe). If you were to look inside that simulated universe, it would have entire galaxies, planets, humans or aliens pondering their consciousness, writing on websites, and so on.
1Perplexed14y
I worry that there may be an instance of the Mind Projection Fallacy involved here. You are assuming there is a one-place predicate E(X) <=> {X has real existence}. But maybe the right way of thinking about it is as a two-place predicate J(A,X)<=> {Agent A judges that X has real existence}. Example: In this formulation, Descartes's "cogito ergo sum" might best be expressed as leading to the conclusion J(me,me). Perhaps I can also become convinced of J(you,you) and perhaps even J(sim-being,sim_being). But getting from there to E(me) seems to be Mind Projection; getting to J(me, you) seems difficult; and getting to J(me, sim-being) seems very difficult. Especially if I can't also get to J(sim-being, me).
0Mass_Driver14y
Very coherent; thank you. Do your claims really depend on the optimality of AIXI? It seems to me that, using your logic, if I ran the exact math underlying the universe on, say, Wolfram Alpha, or a TI-86 graphing calculator, the simulated inhabitants would still have realistic experiences; they would just have them more slowly relative to our current frame of reality's time-stream.
-1jacob_cannell14y
No, computationalism is separate, and was more or less assumed. I discussed AIXI as interesting just because it shows that universal intelligence is in fact simulation, and so future hyperintelligences will create beings like us just by thinking/simulating our time period (in sufficient detail). And moreover, they won't have much of a choice (if they really want to deeply understand it). As to your second thought, Turing machines are Turing machines, so it doesn't matter what form it takes as long as it has sufficient space and time. Of course, that rules out your examples though: you'll need something just a tad bigger than a TI-86 or Wolfram Alpha (on today's machines) to simulate anything on the scale of a planet, let alone a single human brain.
0Mass_Driver14y
I think I'm finally starting to understand your article. I will probably have to go back and vote it up; it's a worthwhile point. Do you have the link for that? I think there's an article somewhere, but I can't remember what it's called. If there isn't one, why do you assume computationalism? I find it stunningly implausible that the mere specification of formal relationships among abstract concepts is sufficient to reify those concepts, i.e., to cause them to actually exist. For me, the very definitions of "concept," "relationship," and "exist" are almost enough to justify an assumption of anti-computationalism. A "concept" is something that might or might not exist; it is merely potential existence. A "relationship" is a set of concepts. I either don't know of or don't understand any of the insights that would suggest that everything that potentially exists and is computed therefore actually exists -- computing, to me, just sounds like a way of manipulating concepts, or, at best, of moving a few bits of matter around, perhaps LED switches or a turing tape, in accordance with a set of concepts. How could moving LED switches around make things real? By "real," I mean made of "stuff." I get through a typical day and navigate my ordinary world by assuming that there is a distinction between "stuff" (matter-energy) and "ideas" (ways of arranging the matter-energy in space-time). Obviously thinking about an idea will tend to form some analog of the idea in the stuff that makes up my brain, and, if my brain were so thorough and precise as to resemble AIXI, the analog might be a very tight analog indeed, but it's still an analog, right? I mean, I don't take you to mean that an AIXI 'brain' would literally form a class-M planet inside its CPU so as to better understand the sentient beings on that planet. The AIXI brain would just be thinking about the ideas that govern the behavior of the sentient beings...and thinking about ideas, even very precisely, doesn't make the
3khafra14y
Substrate independence, functionalism, even the generalized anti-zombie principle--all of these have been covered in some depth on Lesswrong before. Much of it is in the sequences, like nonperson predicates and some of the links from it. If you don't believe an emulated mind can be conscious, do you believe that your mind is noncomputable or that meat has special computational properties?
1Mass_Driver14y
I buy that. That sort of model could probably exist. That sort of zombie can't possibly exist. It's not that I don't believe an emulated mind can be conscious. Perhaps it could. What boggles my mind is the assertion that emulation is sufficient to make a mind conscious -- that there exists a particular bunch of equations and algorithms such that when they are written on a piece of paper they are almost certainly non-conscious, but when they are run through a Turing machine they are almost certainly conscious. I have no opinion about whether my mind is computable. It seems likely that a reasonably good model of my mind might be computable. I'm not sure what to make of the proposition that meat has special computational properties. I wouldn't put it that way, especially since I don't like the connotation that brains are fundamentally physically different from rocks. My point isn't that brains are special; my point is that matter-energy is special. Existence, in the physical sense, doesn't seem to me to be a quality that can be specified in an equation or an algorithm. I can solve Maxwell's equations all day long and never create a photon from scratch. That doesn't necessarily mean that photons have special computational properties; it just means that even fully computable objects don't come into being by virtue of their having been computed. I guess I don't believe in substrate independence?
4jacob_cannell14y
There are several reasons this is mind boggling, but they stem from a false intuition pump - consciousness like your own requires vastly more information than could be written down on a piece of paper. Here is a much better way of thinking about it. From physics and neuroscience etc we know that the pattern identity of human-level consciousness (as consciousness isn't a simple boolean quality) is essentially encoded in the synaptic junctions, and corresponds to about 10^15 bits (roughly). Those bits are you. Now if we paused your brain activity with chemicals, or we froze it, you would cease to be conscious, but would still exist because there is the potential to regain conscious activity in the future. So consciousness as a state is an active computational process that requires energy. So at the end of the day, consciousness is a particular computational process (energy) on a particular arrangement of bits (matter). There are many other equivalent ways of representing that particular arrangement, and the generality of Turing machines is such that a sufficiently powerful computer is an arrangement of mass (bits) that with sufficient energy (computation) can represent any other system that can possibly exist. Anything. Including human consciousness.
0Mass_Driver14y
Thanks; voted up (along w/ the other replies) for clarity & relevance. How confident are you that those 10^15 bits are you? For example, suppose I showed you the 10^15 bits on a high-fidelity but otherwise ordinary bank of supercomputers, allowed you to verify to your heart's content that the bits matched high-fidelity scans of your wetware, and then offered to anesthetize you, remove your brain, and replace it with a silicon-based computer that would implement those 10^15 bits with the same efficiency and fidelity as your current brain. All your medical expenses would be covered and your employer(s) have agreed to provide unpaid leave. You would be sworn to secrecy, the risk of bio-incompatibility/immuno-rejection is essentially zero, and the main benefit is that every engineering test of the artificial brain has shown it to be immune to certain brain diseases such as mad cow and Alzheimer's. On the flip side, if you're wrong, and those 10^15 bits are not quite you, you would either cease to be conscious or have a consciousness that would be altered in ways that might be difficult to predict (unless you have a theory about how or why you might be wrong). Would you accept the surgery? Would you hesitate?
3jacob_cannell14y
Reasonably confident. [snip mind replacement scenario] I wouldn't accept the surgery, but not for purely philosophical reasons. I have a much lower confidence bound in the particular technology you described. I'm more confident in my philosophical position, but combine the two and it would be an unacceptable risk. And in general even a small risk of death is to be strongly minimized. All of that of course could change if, say, I had some brain disease. I have a simple analogy that I think captures much of the weight of the patternist / functionalist philosophy. What is Hamlet? I mean really, what is it? When Shakespeare wrote it into his first manuscript, was Hamlet that manuscript? Did it exist before then? Like Hamlet, we are not the ink or the pages, but we are actually the words themselves. Up to this moment every human mind is associated with exactly one single physical manuscript, and thus we confuse the two, but that is a limitation of our biological inheritance, not an absolute physical limitation. I have some thought experiments that illustrate why I adopt the functionalist point of view, mainly because it ends up as the last consistent contender.
0Mass_Driver14y
I will read them soon. To stretch your analogy a bit, I think that words are the first approximation of what Hamlet is, certainly more so than a piece of paper or a bit of ink, but that the analysis cannot really end with words. The words were probably changed a bit from one edition or one printing to the next. The meaning of the words has changed some over the centuries. By social convention, it is legitimate for a director or producer of a classic play to interpret the play in his or her own style; the stage directions are incomplete enough to allow for considerable variation in the context in which the scripted lines are delivered, and yet not all contexts would be equally acceptable, equally well-received, equally deserving of the title "Hamlet." Hamlet has been spoofed, translated, used as the unspoken subtext of instrumental music or wordless dance; all these things are also part of what it is for something to be "Hamlet." Hamlet in one sense existed as soon as Shakespeare composed most of the words in his head, and in another sense is still coming into being today. Likewise, your consciousness and my consciousness is certainly made up of neurons, which in turn are made of quarks and things, but it is unlikely that all my consciousness is stored in my brain; some is in my spine, some is in my body, in the way that various cells have had their epigenetic markers moved so as to activate or deactivate particular codons at particular pH levels, in the way that other people remember us and interact with us and in the way that a familiar journal entry or scent can revive particular memories or feelings. Quarks themselves may be basic, or they may be composed of sub-sub-subatomic particles, which in turn are composed of still smaller things; perhaps it is tortoises all the way down, and if we essentially have no idea of how it is that the neurons in our brain give rise to consciousness, why should we expect a model that is accurate only to the nearest millionth of
2jacob_cannell14y
I think we would agree then that the 'substance' of Hamlet is a pattern of ideas - information. As is a mind. Err no! No more than Hamlet is made up of ink! Our consciousness is a pattern of information, in the same sense as Hamlet. It is encoded in the synaptic junctions, in the same sense that Hamlet can be encoded on your computer's hard drive. The neurons have an active computational role, but are also mainly the energy engine - the great bulk of the computation is done right at the storage site - in the synapses. We do have ideas, and this picture is getting increasingly clear every year. Understanding consciousness is synonymous with reverse engineering the brain and building a brain simulation AI. I suspect that many people want a single brilliant idea that explains consciousness, like an e=mc^2 you can write on bumper stickers. But unfortunately it is much more complex than that. The brain has some neat tricks that are that simple (the self-organizing Hebbian dynamics in the cortex could be explained in a few equations perhaps), but it is a complex engine built out of many many components. If you haven't read them already, I recommend Daniel Dennett's "Consciousness Explained" and Hawkins' "On Intelligence". If you don't have as much time just check out the latter. Reading both gives a good understanding of the scope of consciousness and the latter especially is a layman-friendly summary of the computational model of the brain emerging from neuroscience. Hawkins has a background that mixes neuroscience, software, and hardware - which I find is the appropriate mix for really understanding consciousness. You don't really understand a principle until you can actually build it. That being said, On Intelligence is something of an advertisement for Hawkins' venture and is now 6 years old, so it must be taken with a grain of salt. For the same reason that once you understand the architecture of a computer, you don't need to simulate it down to the molecular l
0Mass_Driver14y
Thanks for the reading recommendations! I will get back to you after reading both books in about 3 months.
3khafra14y
I think you've successfully analyzed your beliefs, as far as you've gone--it does seem that "substrate independence" is something you don't believe in. However, "substrate independence" is not an indivisible unit; it's composed of parts which you do seem to believe in. For instance, you seem to accept that the highly detailed model of EY, whether that just means functionally emulating his neurons and glial cells, or actually computing his hamiltonian, will claim to be him, for much the same reason he does. If we then simulate, at whatever level appropriate to our simulated EY, a highly detailed model of his house and neighborhood that evolves according to the same rules that the real life versions do, he will think the same things regarding these things that the real life EY does. If we go on to simulate the rest of the universe, including all the other people in it, with the same degree of fidelity, no observation or piece of evidence other than the anthropic could tell them they're in a simulation. Bear in mind that nothing magic happens when these equations go from paper to computer: If you had the time and low mathematical error rate and notebook space to sit down and work everything out on paper, the consequences would be the same. It's a slippery concept to work one's intuition around, but xkcd #505 gives as good an intuition pump as I've seen.
1jacob_cannell14y
what is this btw?
3mattnewport14y
xkcd #505.
1Sniffnoy14y
I don't think you can make this distinction meaningful. After all, what's an electron? Just a pattern in the electron field...
0jacob_cannell14y
This isn't actually what I meant by computationalism (although I was using the word from memory, and my concept may differ from the philosopher's definition). The idea that mere specification of formal relationships, that mere math in theory, can cause worlds to exist is a separate position from basic computationalism, and I don't buy it. A formal mathematical system needs to actually be computed to be real. That is what causes time to flow in the child virtual universe. And in our physics, that requires energy in the parent universe. It also requires mass to represent bits. So computation can't just arise out of nothing - it requires computational elements in a parent universe organized in the right way. khafra's replies are delving deeper into the philosophical background, so I don't need to add much more.

The philosophical implication is that actually running such an algorithm on an infinite Turing Machine would have the interesting side effect of actually creating all such universes.

That's an interesting point! At least, it's more interesting than Tipler's way of arriving at that conclusion.

If you accept that the reasonable assumption of progress holds, then AIXI implies that we almost certainly live in a simulation now.

See my response to the claim that the anthropic argument suggests it is highly improbable that you would find yourself to be a hum...

2jacob_cannell14y
I found his earlier work with Barrow, The Anthropic Cosmological Principle, to be filled with interesting useful knowledge, if not overly detailed - almost like a history of science in one book. But then with his next two books you can just follow the convulsions as he runs for the diving board and goes off the deep end. His take on extending Chardin's Omega Point idea with computationalism isn't all that bad itself, but he really stretches logic unnecessarily to make it all fit neatly into some prepacked orthodox Christianity memetic happy meal. That being said, there is an interesting connection between the two, but I don't buy Tipler's take on it. Read the response involving the ants and stuff. I don't give much weight to that train of thought. I agree that consciousness is a fluid fuzzy concept, but I also know based on my own capacities and understanding of physics that an ant colony probably could not encode me (I'm not entirely sure, but I'm highly skeptical). Also, the end point of your argument leads to the realization that ants have less intelligence/mass than humans - and it's actually much less than your basic analysis, because you have to factor in connectivity measures. There's always a tradeoff, this is true, but one should avoid grossly overestimating the computational costs of simulations of various fidelity. For instance, we know that nearly perfect deterministic simulation of a current computer requires vastly less computation than molecular level simulation - with full knowledge of its exact organization. Once we understand the human mind's algorithms, we should be able to simulate human minds to fairly high accuracy using computers of only slightly more complexity (than the brain itself). Take that principle and combine it with a tentative estimate from pure simulation theory and computer graphics that the ultimate observer-relative simulation algorithm requires only constant time and space proportional to the intelligence and sensor ca
2PhilGoetz14y
I don't think you got what I was trying to say about bacteria. If you have enough computing power to simulate more than a universe full of humans, you would likely instead use it to simulate a smaller number of much-more-complex beings. You can always use more computing power to up the complexity of the thing you're studying; hence, you never use AIXI. Your original argument implied that, once you've gotten to the human level, there's no place up to go from there; so you simulate vast quantities of them in exact detail.
2jacob_cannell14y
I never intended to imply that future hyper-intelligences will spend all of their computational power to simulate humans, not even close. But nor will they spend it all on simulating bacteria or other hyper-intelligences. In general, a universal hyperintelligence will simulate slices of the entire universe from the big bang to the end of time. It will certainly not simulate all regions of space-time at equal fidelity of course. In general I expect observer-relevant simulation, with fidelity falling off nonlinearly with distance corresponding to the locality of physics. The other principle I expect them to use is more difficult to precisely quantify, but it amounts to what I would call future-historical-priority. This is the notion that not all physical events in the same region of space-time have equal importance. In fact, the importance is massively non-uniform, and this notion is related to complexity theory itself. Simulating complex things at high accuracy (such as humans, computers, etc) is vastly more important for future accuracy than simulating the interior of earth, the sun, bacteria, etc etc. The complexity and cost of accurate simulation will increase with time and technological progress. So in general, I expect future hyper-intelligences to use what we could call Universal Approximate Simulation. More speculatively, I also expect that the theory of UAS relates directly to practical AGI. Tentatively, you can imagine UAS as a spectrum of algorithms and sub-algorithms. On one side of this spectrum are arbitrarily accurate physics-inspired approaches such as those we use in big physics simulations and computer graphics, and on the other side of the spectrum are input-data driven statistical approximations using learning techniques (similar to what mammalian cortex uses). I expect future hyper-intelligences will have a better theoretical understanding of this spectrum, and where and when different simulation approaches are more efficient. Tentatively, I expe

The set of simulation possibilities can be subdivided into PHS (posthuman historical), AHS (alien historical), and AFS (alien future) simulations (as posthuman future simulation is inconsistent).

What these categories meant was not clear to me on first reading.

I currently understand AFS as something like aliens finding earlier [humanity] and trying to predict what we will do. AHS would be the result of Aliens interacting with a more mature humanity and trying to deduce particulars about our origin, perhaps for use in an AFS.

If I have that right, PFS migh...

0jacob_cannell14y
Yeah, PFS seems pretty unlikely ;)

If you absolutely have to summarize the forbidden topic at least rot13 it and preface it with an appropriate warning.

I have a question. What does it mean for AIXI to be the optimal time-bounded AI? If it's so great, why do people still bother with ANNs and SVMs and SOMs and KNNs and TLAs and T&As? My understanding of it is rather cloudy (as is my understanding of all but the last two of the above), so I'd appreciate clarification.

First of all, AIXI isn't actually "the optimal time bounded AI". What AIXI is "optimal" for is coming to correct conclusions when given the smallest amount of data, and by "optimal" it means "no other program does better than AIXI in at least one possible world without also doing worse in another".

Furthermore AIXI itself uses Solomonoff induction directly, and Solomonoff induction is uncomputable. (It can be approximated, though.)

AIXItl is the time-limited version of AIXI, but it amounts to "test all the programs that you can, find the best one, and use that" - and it's only "optimal" when compared against the programs that it can test, so it's not actually practical to use, either.

(At least, that's what I could gather from reading the PDF of the paper on AIXI. Could someone who knows what they're talking about correct any mistakes?)

6jimrandomh14y
There is an enormous difference between the formal mathematical definition of "computable", and "able to be run by a computer that could be constructed in this universe". AIXI is computable in the mathematical sense of being written as a computer program that will provably halt in finitely many steps, but it is not computable in the sense of it being possible to run it, even by using all the resources in the observable universe optimally, because the runtime complexity of AIXI is astronomically larger than the universe is.

because the runtime complexity of AIXI is astronomically larger than the universe is.

'Astronomically'? That's the first time I've seen that superlative inadequate for the job.

8Vladimir_Nesov14y
AIXI's decision procedure is not computable (but AIXItl's is). (Link)
1jacob_cannell14y
Yes, I think the term in computational complexity theory is tractability, which is the practical subset of computability. AIXI is interesting just from a philosophical perspective, but even in the practical sense it has utility in showing what the ultimate limit is, and starting there you can find approximations and optimizations that move you into the land of the tractable. For an example analogy, in computer graphics we have full-blown particle ray tracing as the most accurate theory at the limits, and starting with that and then speeding it up with approximations that minimize the loss of accuracy is a good strategy. The Monte Carlo approximation to AIXI is tractable and it can play small games (fairly well?). For a more practical AGI design on a limited budget, it's probably best to use hierarchical approximate simulation, more along the lines of what the mammalian cortex appears to do.
4gwern14y
Are you familiar with Big O? See also 'constant factor'. (You may not be interested in the topic, but an understanding of Big O and constant factors is one of the taken-for-granted pieces of knowledge here.)
-2Tiiba14y
You mean that horrible Batman knockoff? I hated it. Yeah, I know what Big O is.
5jacob_cannell14y
That post was rational until about half-way through - yes, any simulation detailed enough to actually predict what a mind would do to high accuracy necessarily becomes equivalent to that mind itself. This is nothing new, it's just a direct result of computationalism. The only way to fully predict what any physical system will do is to fully simulate it, and fully simulating a computational system is equivalent to making a copy of it (in an embedded pocket universe). The only way to fully know what a program will do in general given some configuration of its memory is to simulate the whole thing - which is equivalent to making a copy of it. So when I got to the person predicate idea: "We need a nonperson predicate - a predicate that returns 1 for anything that is a person, and can return 0 or 1 for anything that is not a person." ... I had to stop. Even if a future intelligence had such a predicate (and it's kinda silly to think something as complex as 'personhood' can be simplified down to a boolean variable), it's a supreme folly of anthropomorphic reasoning to assume future potential hyperintelligences will cripple their intelligence just because we humans today may have ethical issues with being instantiated inside the mind of a more powerful intelligence.
7JamesAndrix14y
You misunderstand. I wish I could raise a flag that would indicate in some non accusatory or judgemental way that I'm pretty sure you are wrong about something very very important. (perhaps the key is just to emphasize that this topic is vastly more important than I am capable of being sure about anything.) The reason we want to create a nonperson predicate is that we want to create an initial AI which will cripple itself, at least until it can determine for sure that uncrippling itself is the right thing to do. Otherwise we risk creating a billion hellworlds on our first try at fixing things. This concept doesn't say much about whether we are currently a simulation or what kind, but it does say a little. In that if our world does it right, and it is in fact wrong to simulate a world like this, then we are probably not a simulation by a future with a past just like our present. (Because if we did it right, they probably did it right, and never simulated us.) Yes, I currently think nonperson predicates should be non-binary and probabilistic, and integrate quality of life estimates. A 35% chance that a few simulations will be morally relevant on par with a human and will have pleasant experiences if they are - is totally acceptable if that's the best way for the AI to figure out how to fix the outside world. But the point is you have to know you're doing that beforehand, and it has to be worth it. You do not want to create a trillion half broken souls accidentally.
6jacob_cannell14y
Ok, so I was thinking more along the lines of how this all applies to the simulation argument. As for the nonperson predicate as an actual moral imperative for us in the near future... well overall, I have a somewhat different perspective:

1. To some (admittedly weak) degree, we already violate the nonperson predicate today. Yes, our human minds do. But it's a far more complex topic.

2. If you do the actual math, "a trillion half broken souls" is pretty far into the speculative future (although it is an eventual concern). There are other ethical issues that take priority because they will come up so much sooner.

3. It's not immediately clear at all that this is 'wrong', and this is tied to 1. Look at this another way. The whole point of simulation is accuracy. Let's say some future AI wants to understand humanity and all of earth, so it recreates the whole thing in a very detailed Matrix-level sim. If it keeps the sim accurate, that universe is more or less similar to one branch of the multiverse that would occur anyway. Unless the AI simulates a worldline where it has taken some major action. Even then, it may not be unethical unless it eventually terminates the whole worldline.

So I don't mean to brush the ethical issues under the rug completely, but they clearly are complex. Another important point: since accurate simulation is necessary for hyperintelligence, this sets up a conflict where ethics which say "don't simulate intelligent beings" cripple hyper-intelligence. Evolution will strive to eliminate such ethics eventually, no matter what we currently think. ATM, I tend to favor ethics that are compatible with or derived from evolutionary principles.
9JamesAndrix14y
Evolution can only work if there is variation and selection amongst competition. If a single AI undergoes an intelligence explosion, it would have no competition (barring Aliens for now), would not die, and would not modify its own value system, except in ways in accordance with its value system. What it wants will be locked in. As we are entities currently near the statuses of "immune from selection" and "able to adjust our values according to our values", we also ought to further lock in our current values and our process by which they could change. Probably by creating a superhuman AI that we are certain will try to do that. (Very roughly speaking) We should certainly NOT leave the future up to evolution. Firstly because 'selection' of >=humans is a bad thing, but chiefly because evolution will almost certainly leave something that wants things we do not want in charge. We are under no rationalist obligation to value survivability for survivability's sake. We should value the survivability of things which carry forward other desirable traits.
0jacob_cannell14y
Yes, variation and selection are the fundaments of systemic evolution. Without variation and selection, you have stasis. Variation and selection are constantly at work even within minds themselves, as long as we are learning. Systemic evolution is happening everywhere, at all scales, at all times, to varying degree. I find almost every aspect of this unlikely:

1. A single AI undergoing an intelligence explosion is unrealistic (physics says otherwise).

2. There is always competition eventually (planetary, galactic, intergalactic?).

3. I also don't even give much weight to 'locked in values'.

Nothing is immune to selection. Our thoughts themselves are currently evolving, and without such variation and selection, science itself wouldn't work. Perhaps this is a difference of definition, but to me that sounds like saying "we should certainly NOT leave the future up to the future time evolution of the universe". Not to say we shouldn't control the future, but rather to say that even in doing so, we are still acting as agents of evolution. Of course. But likewise, we couldn't easily (nor would we want to) lock in our current knowledge (culture, ethics, science, etc.) into some sort of stasis.
1JamesAndrix14y
What does physics say about a single entity doing an intelligence explosion? In the event of alien competition, our AI should weigh our options according to our value system. Under what conditions will a superintelligence alter its value system except in accordance with its value system? Where does that motivation come from? If a superintelligence prefers its values to be something else, why would it not change its preferences? If it does, and the new preferences cause it to again want to modify its preferences, and so on again, will some sets of initial preferences yield stable preferences? Or must all agents have preferences that would cause them to modify their preferences if possible? Science lets us modify our beliefs in an organized and more reliable way. It could in principle be the case that a scientific investigation leads you to the conclusion that we should use other different rules, because they would be even better than what we now call science. But we would use science to get there, or whatever our CURRENT learning method is. Likewise we should change our values according to what we currently value and know. We should design AI such that if it determines that we would consider 'personal uniqueness' extremely important if we were superintelligent, then it will strongly avoid any highly accurate simulations, even if that costs some accuracy. (Unless outweighed by the importance of the problem it's trying to solve.) If we DON'T design AI this way, then it will do many things we wouldn't want, well beyond our current beliefs about simulations.
2jacob_cannell14y
A great deal. I discussed this in another thread, but one of the constraints of physics tells us that the maximum computational efficiency of a system, and thus its intelligence, is inversely proportional to its size (radius/volume). So it's extraordinarily unlikely, near zero probability I'd say, that you'll have some big global distributed brain with a single thread of consciousness - the speed of light just kills that. The 'entity' would need to be a community (which certainly still can be coordinated entities, but it's fundamentally different from a single unified thread of thought). Moreover, I believe the likely scenario is evolutionary: The evolution of AGI's will follow a progression that goes from simple AGI minds (like those we have now in some robots) up to increasingly complex variants and finally up to human-equivalent and human-surpassing. But all throughout that time period there will be many individual AGI's, created by different teams, companies, and even nations, thinking in different languages, created for various purposes, and nothing like a single global AI mind. And these AGI's will be competing with both themselves and humans - economically. I agree with most of the rest of your train of thought - we modify our beliefs and values according to our current beliefs and values. But as I said earlier, it's not static. It's also not even predictable. It's not even possible, in principle, to fully predict your own future state. This, to me, is perhaps the final nail in the coffin for any 'perfect' self-modifying FAI theory. Moreover, I also find it highly unlikely that we will ever be able to create a human level AGI with any degree of pre-determined reliability about its goal system whatsoever. I find it more likely that the AGI's we end up creating will have to learn ethics, morality, etc - their goal systems can not be hard coded, and whether they turn out friendly or not is entirely dependent on what they are taught and how they develop. In other
2JamesAndrix14y
On what basis will they learn? You're still starting out with an initial value system and process for changing the value system, even if the value system is empty. There is no reason to think that a given preference-modifier will match humanity's. Why will they find "Because that hurts me" to be a valid point? Why will they return kindness with kindness? You say the goal systems can't be designed in; why not? It may be the case that we will have a wide range of semifriendly subhuman or even near human AGI's. But when we get a superhuman AGI that is smart enough to program better AGI, why can it not do that on its own? I am positive that 'single entity' should not have mapped to 'big distributed global brain'. But I also think an AIXI-like algorithm would be easy to parallelize and make globally distributed, and it still maximizes a single reward function.
1jacob_cannell14y
They will have to learn by amassing a huge amount of observations and interactions, just as human infants do, and just as general agents do in AI theory (such as AIXI). Human brains are complex, but very little of that complexity is actually precoded in the DNA. For humans values, morals, and high level goals are all learned knowledge, and have varied tremendously over time and cultures. Well, if you raised the AI as such, it would. Consider that a necessary precursor of of following the strategy 'returning kindness with kindness' is understanding what kindness itself actually is. If you actually map out that word, you need a pretty large vocabulary to understand it, and eventually that vocabulary rests on grounded verbs and nouns. And to understand those, they must be grounded on a vast pyramid of statistical associations acquired from sensorimotor interaction (unsupervised learning .. aka experience). You can't program in this knowledge. There's just too much of it. From my understanding of the brain, just about every concept has (or can potentially have) associated hidden emotional context: "rightness" and "wrongness", and those concepts: good, bad, yes, no, are some of the earliest grounded concepts, and the entire moral compass is not something you add later, but is concomitant with early development and language acquisition. Will our AI's have to use such a system as well? I'm not certain, but it may be such a nifty, powerful trick, that we end up using it anyway. And even if there is another way to do that is still efficient, it may be that you can't really understand human languages unless you also understand the complex web of value. If nothing else, this approach certainly gives you control over the developing AI's value system. It appears for human minds the value system is immensely complex - it is intertwined at a fundamental level with the entire knowledge base - and is inherently memetic in nature. What is an AGI? It is a computer system (hardw
2JamesAndrix14y
If concepts like kindness are learned with language and depend on a hidden emotional context, then where are the emotions learned? What is the AI's motivation? This is related to the is-ought problem: no input will affect the AI's preferences unless there is something already in the AI that reacts to that input that way. If software were doing the heavy lifting, then it would require no particular cleverness to be a microprocessor design engineer. The algorithm plays a huge role in how powerful the intelligence will be, even if it is implemented in silicon. People might not make most of the choices in laying out chips, but we do almost all of the algorithm creation, and that is where you get really big gains. see Deep Fritz vs. Deep Blue. Better algorithms can let you cut out a billion tests and output the right answer on the first try, or find a solution you just would not have found with your old algorithm. Software didn't invent out of order execution. It just made sure that the design actually worked. As for the distributed AI: I was thinking of nodes that were capable of running and evaluating whole simulations, or other large chunks of work. (Though I think superintelligence itself doesn't require more than a single PC.) In any case, why couldn't your supercomputer foom?
2jacob_cannell14y
I think this is an open question, but certainly one approach is to follow the brain's lead and make a system that learns its ethics and high level goals dynamically, through learning. In that type of design, the initial motivation gets imprinting cues from the parents. Oh of course, but I was just pointing out that after a certain amount of research work in a domain, your algorithms converge on some asymptotic limit for the hardware. There is nothing even close to unlimited gains purely in software. And the rate of hardware improvement is limited now by speed of simulation on current hardware, and AGI can't dramatically improve that. Yes, of course. Although as a side note we are moving away from out of order execution at this point. Because FOOM is just exponential growth, and in that case FOOM is already under way. It could 'hyper-FOOM', but the best an AGI can do is to optimize its brain algorithms down to the asymptotic limits of its hardware, and then it has to wait with everyone else until all the complex simulations complete and the next generation of chips come out. Now, all that being said, I do believe we will see a huge burst of rapid progress after the first human AGI is built, but not because that one AGI is going to foom by itself. The first human-level AGI's will probably be running on GPUs or something similar, and once they are proven and have economic value, there will be this huge rush to encode those algorithms directly into hardware and thus make them hundreds of times faster. So I think from the first real-time human-level AGI it could go quickly to 10 to 100X AGI (in speed) in just a few years, along with lesser gains in memory and other IQ measures.
JamesAndrix · 14y · score 0
This seems like a non-answer to me. You can't just say 'learning' as if all possible minds will learn the same things from the same input, and internalize the same values from it. There is something you have to hardcode to get it to adopt any values at all.

Well, what is that limit? It seems to me that an imaginary perfectly efficient algorithm would read, process, and output data as fast as the processor could shuffle the bits around, which is probably far faster than it could exchange data with the outside world. Even if we take that down 1000x because this is an algorithm that's doing actual thinking, you're looking at an easy couple of million bytes per second. And that's superintelligently optimized structured output based on preprocessed efficient input. Because this is AGI, we don't need to count in say, raw video bandwidth, because that can be preprocessed by a system that is not generally intelligent. So a conservatively low upper limit for my PC's intelligence is outputting a million bytes per second of compressed poetry, or viral genomes, or viral genomes that write poetry.

If the first superhuman AGI is only superhuman by an order of magnitude or so, or must run on a vastly more powerful system, then you can bet that its algorithms are many orders of magnitude less efficient than they could be.

No. Why couldn't your supercomputer AGI enter into a growth phase higher than exponential? Example: If not-too-bright but technological aliens saw us take a slow general purpose computer, and then make a chip that worked 100 times faster, but they didn't know how to put algorithms on a chip, then it would look like our technology got 1000 times better really quickly. But that's just because they didn't already know the trick. If they learned the trick, they could make some of their dedicated software systems work 1000 times faster. "Convert algorithm to silicon." is just one procedure for speeding things up that an agent can do, or not yet know how to do
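For what it's worth, the arithmetic behind that estimate can be made explicit with assumed (illustrative, not measured) numbers for a desktop of that era:

    # Rough bound on output rate for a 'perfectly efficient' program,
    # using illustrative figures rather than measurements.
    memory_bandwidth_bytes_per_s = 10e9   # ~10 GB/s, a plausible desktop figure
    thinking_penalty = 1000               # the 1000x allowance for 'actual thinking'

    output_rate = memory_bandwidth_bytes_per_s / thinking_penalty
    print(int(output_rate))               # ~10,000,000 bytes/s of structured output

Even after the thousand-fold penalty, the bound lands in the millions of bytes per second, which is the order of magnitude claimed above.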
jacob_cannell · 14y · score 1
Yes, you have to hardcode 'something', but that doesn't exactly narrow down the field much. Brains have some emotional context circuitry for reinforcing some simple behaviors (primary drives, pain avoidance, etc), but in humans these are increasingly supplanted and to some extent overridden by learned beliefs in the cortex. Human values are thus highly malleable - socially programmable. So my comment was "this is one approach - hardcode very little, and have all the values acquired later during development".

Unfortunately, we need to be a little more specific than imaginary algorithms. Computational complexity theory is the branch of computer science that deals with the computational costs of different algorithms, and specifically the optimal possible solutions. Universal intelligence is such a problem. AIXI is an investigation into optimal universal intelligence in terms of the upper limits of intelligence (the most intelligent possible agent), but while interesting, it shows that the most intelligent agent is unusably slow.

Taking a different route, we know that a universal intelligence can never do better in any specific domain than the best known algorithm for that domain. For example, an AGI playing chess could do no better than just pausing its AGI algorithm (pausing its mind completely) and instead running the optimal chess algorithm (assuming that the AGI is running as a simulation on general hardware instead of faster special-purpose AGI hardware).

So there is probably an optimal unbiased learning algorithm, which is the core building block of a practical AGI. We don't know for sure what that algorithm is yet, but if you survey the field, there are several interesting results. The first thing you'll see is that we have a variety of hierarchical deep learning algorithms now that are all pretty good, some appear to be slightly better for certain domains, but there is not at the moment a clear universal winner. Also, the mammalian cortex uses something like th
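To make the 'unusably slow' point concrete, here is a toy Occam-weighted predictor in the spirit of Solomonoff induction (not AIXI itself, and with repeating bit patterns standing in for a real universal machine): it enumerates candidate 'programs', keeps the ones consistent with the observations, and weights their predictions by 2^-length. The cost grows as 2^L in the maximum program length, which is the whole problem.

    # Toy Occam-weighted sequence prediction.  'Programs' are just repeating
    # bit patterns; the exponential enumeration cost is the point.
    from itertools import product

    def predict_next(observed, max_len=12):
        weight = {0: 0.0, 1: 0.0}
        for L in range(1, max_len + 1):
            for pattern in product([0, 1], repeat=L):          # 2^L candidates
                gen = [pattern[i % L] for i in range(len(observed) + 1)]
                if gen[:len(observed)] == list(observed):      # consistent with data
                    weight[gen[-1]] += 2.0 ** (-L)             # Occam prior 2^-L
        total = weight[0] + weight[1]
        return {bit: w / total for bit, w in weight.items()}

    print(predict_next([1, 0, 1, 0, 1, 0]))   # strongly favors 1 as the next bit

Replace the toy pattern space with all programs for a universal Turing machine and the same scheme becomes Solomonoff induction: optimal in a precise sense, and hopelessly expensive.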
JamesAndrix · 14y · score 2
Hardcode very little? What is the information content of what an infant feels when it is fed after being hungry?

I'm not trying to narrow the field; the field is always narrowed to whatever learning system an agent actually uses. In humans, the system that learns new values is not generic. Using a 'generic' value learning system will give you an entity that learns morality in an alien way. I cannot begin to guess what it would learn to want.

I'd like to table the intelligence explosion portion of this discussion; I think we agree that an AI or group of AIs could quickly grow powerful enough that they could take over, if that's what they decided to do. So establishing their values is important regardless of precisely how powerful they are.
jacob_cannell · 14y · score 1
Yes. The information in the genome, and the brain structure coding subset in particular, is a tiny tiny portion of the information in an adult brain. An infant brain is mainly an empty canvas (randomized synaptic connections from which learning will later literally carve out a mind) combined with some much simpler, much older basic drives and a simpler control system - the old brain - that descends back to the era of reptiles or earlier. That depends on what you mean by 'values'. If you mean linguistic concepts such as values, morality, kindness, non-cannibalism, etc etc, then yes, these are learned by the cortex, and the cortex is generic. There is a vast weight of evidence for almost overly generic learning in the cortex. Not at all. To learn alien morality, it would have to either invent alien morality from scratch, or be taught alien morality from aliens. Morality is a set of complex memetic linguistic patterns that have evolved over long periods of time. Morality is not coded in the genome and it does not spontaneously generate. Thats not to say that there are no genetic tweaks to the space of human morality - but any such understanding based on genetic factors must also factor in complex cultural adaptations. For example, the Aztecs believed human sacrifice was noble and good. Many Spaniards truly believed that the Aztecs were not only inhuman, but actually worse than human - actively evil, and truly believed that they were righteous in converting, conquering, or eliminating them. This mindspace is not coded in the genome. Agreed.
JamesAndrix · 14y · score 1
I'm not saying that all or even most of the information content of adult morality is in the genome. I'm saying that the memetic stimulus that creates it evolved with hooks specific to how humans adjust their values. If the emotions and basic drives are different, the values learned will be different.

If the compressed description of the basic drives is just 1kb, there are ~2^1024 different possible initial minds with drives that complex, most of them wildly alien. How would you know what the AI would find beautiful? Will you get all aspects of its sexuality right? If the AI isn't comforted by physical contact, that's at least a few bytes of the drive description that's different from the description that matches our drives. That difference throws out a huge chunk of how our morality has evolved to instill itself. We might still be able to get an alien mind to adopt all the complex values we have, but we would have to translate the actions we would normally take into actions that match alien emotions. This is a hugely complex task that we have no prior experience with.
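Just to put a number on how large that space is (the 10^80 atom count is the standard rough estimate for the observable universe):

    # Scale of a 2^1024 space of candidate drive-sets.
    drive_space = 2 ** 1024
    atoms_in_observable_universe = 10 ** 80                   # standard rough estimate
    print(len(str(drive_space)))                              # 309: i.e. roughly 10^308 designs
    print(drive_space > atoms_in_observable_universe ** 3)    # True, comfortably

Hand-picking a human-compatible point in a space that size is the 'small target' being described.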
jacob_cannell · 14y · score 1
Right, so we agree on that then. If I were going to simplify - our emotional systems and the main associated neurotransmitter feedback loops are the genetic harnesses that constrain the otherwise overly general cortex and its far more complex, dynamic memetic programs. We have these simple reinforcement learning systems to avoid pain-causing stimuli, pleasure-reward, and so on - these are really old conserved systems from the thalamus that have maintained some level of control and shaping of the cortex as it has rapidly expanded and taken over. You can actually disable a surprisingly large number of these older circuits (through various disorders, drugs, injuries) and still have an intact system - physical pain/pleasure, hunger, yes even sexuality. And then there are some more complex circuits that indirectly reward/influence social behaviour.

They are hooks though; they don't have enough complexity to code for anything as complex as language concepts. They are gross, inaccurate statistical manipulators that encourage certain behaviours a priori. If these 'things' could talk, they would be constantly telling us to: (live in groups, groups are good, socializing is good, share information, have sex, don't have sex with your family, smiles are good, laughter is good, babies are cute, protect babies, it's good when people like you, etc etc.)

Another basic drive appears to be that for learning itself, and it's interesting how far that alone could take you. The learning drive is crucial. Indeed the default 'universal intelligence' (something like AIXI) may just have the learning drive taken to the horizon. Of course, that default may not necessarily be good for us, and moreover it may not even be the most efficient. However, something to ponder is that the idea of "taking the learning drive" to the horizon (maximize knowledge) is surprisingly close to the main cosmic goal of most transhumanists, extropians, singularitans, etc etc. Something to consider: perhaps there is
JamesAndrix · 14y · score 0
I'm not talking about the genome. 1024 bits is an extremely lowball estimate of the complexity of the basic drives and emotions in your AI design. You have to create those drives out of a huge universe of possible drives. Only a tiny subset of possible designs are human-like. Most likely you will create an alien mind. Even handpicking drives: it's a small target, and we have no experience with generating drives for even near-human AI. The shape of all human-like drive sets within the space of all possible drive sets is likely to be thin and complexly twisty within the mapping of a human designed algorithm. You won't intuitively know what you can tweak. Also, a set of drives that yields a nice AI at human levels might yield something unfriendly once the AI is able to think harder about what it wants. (And this applies just as well to upgrading existing friendly humans.)

All intellectual arguments about complex concepts of morality stem from simpler concepts of right and wrong, which stem from basic preferences learned in childhood. But THOSE stem from emotions and drives which flag particular types of early inputs as important in the first place. A baby will cry when you pinch it, but not when you bend a paperclip.

Estimating 1 bit per character, that list of drives comes to 214 bits. Still a huge space. It could be that there is another mechanism that guides adoption of values, which we don't even have a word for yet. A simpler explanation is that moral memes evolved to be robust to most of the variation in basic drives that exists within the human population. A person born with relatively little 'frowns are bad' might still be taught not to murder with a lesson that hooks into 'groups are good'. But there just aren't many moral lessons structured around the basic drive of 'paperclips are good' (19 bits).
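The 1-bit-per-character arithmetic checks out against the drive list quoted a few comments up (dropping the parentheses and the trailing 'etc etc'):

    # Crude information estimate at 1 bit per character of English text.
    drives = ("live in groups, groups are good, socializing is good, share information, "
              "have sex, don't have sex with your family, smiles are good, laughter is good, "
              "babies are cute, protect babies, it's good when people like you")
    print(len(drives))                   # 214 characters -> ~214 bits at 1 bit/char
    print(len("paperclips are good"))    # 19 characters  -> ~19 bits

This is only a rough compression estimate, but it matches the 214-bit and 19-bit figures used above.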
jacob_cannell · 14y · score 0
The subset of possible designs is sparse - and almost all of the space is an empty worthless desert. Evolution works by exploring paths in this space incrementally. Even technology evolves - each CPU design is not a random new point in the space of all possible designs - each is necessarily close to previously explored points.

Yes - but they are learned memetically, not genetically. The child learns what is right and wrong through largely subconscious cues in the tone of voice of the parents, and explicit yes/no (some of the first words learned), and explicit punishment. It's largely a universal learning system with an imprinting system to soak up memetic knowledge from the parents. The genetics provided the underlying hardware and learning algorithm, but the content is all memetic (software/data). Saying intellectual arguments about complex concepts such as morality relate back to genetics is like saying all arguments about computer algorithm design stem from simpler ideas, which ultimately stem from Enlightenment thinkers of three hundred years ago - or perhaps Paleolithic cave dwellers inventing fire.

Part of this disagreement could stem from different underlying background assumptions - for example I am probably less familiar with ev psych than many people on LW - partly because (to the extent I have read it) I find it to be grossly over-extended past any objective evidence (compared to say computational neuroscience). I find that ev psych has minor utility in actually understanding the brain, and is even much less useful attempting to make sense of culture. Trying to understand culture/memetics/minds with ev psych or even neuroscience is even worse than trying to understand biology through physics. Yes, it did all evolve from the Big Bang, but that was a long long time ago. So basically, anything much more complex than our inner reptile brain (which is all the genome can code for) needs to be understood in memetic/cultural/social terms. For example, in ma
JamesAndrix · 14y · score 0
One thing I want to make clear is that trying to hard-code human morality into the AI is not the correct way to make friendly AI. Correct Friendly AI learns about human morality. MOST of my argument really really isn't about human brains at all. Really.

For a value system in an AGI to change, there must be a mechanism to change the value system. Most likely that mechanism will work off of existing values, if any. In such cases, the complexity of the initial values system is the compressed length of the modification mechanism, plus any initial values. This will almost certainly be at least a kilobit. If the mechanism+initial values that your AI is using were really simple, then you would not need 1024 bits to describe it. The mechanism you are using is very specific. If you know you need to be that specific, then you already know that you're aiming for a target that specific. If your generic learning algorithm needs a specific class of motivation mechanisms, to 1024 bits of specificity, in order to still be intelligent, then the mechanism you made is actually part of your intelligence design. You should separate that out for clarity; an AGI should be general.

Heh yeah, but I already conceded that.

Let me put it this way: emotions and drives and such are in the genome. They act as a (perhaps relatively small) function which takes various sensory feeds as arguments, and produces as output modifications to a larger system, say a neural net. If you change that function, you will change what modifications are made. Given that we're talking about functions that also take their own output as input and do pretty detailed modifications on huge datasets, there is tons of room for different functions to go in different directions. There is no generic morality-importer. Now there may be clusters of similar functions which all kinda converge given similar input, especially when that input is from other intelligences repeating memes evolved to cause convergence on that class of fun
jacob_cannell · 14y · score 0
I'm not convinced that an AGI needs a value system in the first place (beyond the basic value of survival) - but perhaps that is because I am taking 'value system' to mean something similar to morality - a goal evaluation mechanism. As I discussed, the infant human brain does have a number of inbuilt simple reinforcement learning systems that do reward/punish on a very simple scale for some simple drives (pain avoidance, hunger) - and you could consider these a 'value system', but most of these drives appear to be optional. Most of the learning an infant is doing is completely unsupervised learning in the cortex, and it has little to nothing to do with a 'value system'. The bare-bones essentials could be just the cortical learning system itself and perhaps an imprinting mechanism.

This is not necessarily true; it does not match what we know from theoretical models such as AIXI. With enough time and enough observations, two general universal intelligences will converge on the same beliefs about their environment. Their goal/reward mechanisms may be different (i.e. what they want to accomplish), but for a given environment there is a single correct set of beliefs, a single correct simulation of that environment, that AGIs should converge to. Of course in our world this is so complex that it could take huge amounts of time, but science is the example mechanism.
JamesAndrix · 14y · score 0
You're going to build an AI that doesn't have and can't develop a goal evaluation system? It doesn't matter what we call it or how it's designed. It could be fully intertwined into an agent's normal processing. There is still an initial state and a mechanism by which it changes.

Take any action by any agent, and trace the causality backwards in time, and you'll find something I'll loosely label a motivation. The motivation might just be a pattern in a clump of artificial neurons, or a broad pattern in all the neurons; that will depend on implementation. If you trace the causality of that backwards, yes you might find environmental inputs and memes, but you'll also find a mechanism that turned those inputs into motivation-like things. That mechanism might include the full mind of the agent. Or you might just hit the initial creation of the agent, if the motivation was hardwired. But for any learning of values to happen, you must have a mechanism, and the complexity of that mechanism tells us how specific it is.

That would be wrong, because I'm talking about two identical AIs in different environments. Imagine your AI in its environment, then draw a balloon around the AI and label it 'Agent'. Now let the balloon pass partly through the AI and shrink the balloon so that the AI's reward function is outside of the balloon. Now copy that diagram and tweak the reward function in one of them. Now the balloons label agents that will learn very different things about their environments. They might both agree about gravity and everything else we would call a fact about the world, but they will likely disagree about morality, even if they were exposed to the same moral arguments. They can't learn the same things the same way.
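A minimal sketch of the balloon point, assuming a toy one-dimensional world: both agents share exactly the same model of how the world works (the step function), differ only in the reward function drawn outside the balloon, and end up with opposite policies despite agreeing on every fact about the environment.

    # Two agents, one shared world model, two reward functions.
    N_STATES, ACTIONS, GAMMA = 5, (-1, +1), 0.9

    def step(state, action):
        # Shared, agreed-upon dynamics: deterministic moves with walls.
        return min(max(state + action, 0), N_STATES - 1)

    def optimal_policy(reward):
        values = [0.0] * N_STATES
        for _ in range(200):                      # value iteration
            values = [max(reward[step(s, a)] + GAMMA * values[step(s, a)]
                          for a in ACTIONS) for s in range(N_STATES)]
        return [max(ACTIONS, key=lambda a: reward[step(s, a)] + GAMMA * values[step(s, a)])
                for s in range(N_STATES)]

    reward_A = [0, 0, 0, 0, 1]   # this agent values the right end of the world
    reward_B = [1, 0, 0, 0, 0]   # this one values the left end

    print(optimal_policy(reward_A))   # [1, 1, 1, 1, 1]      - always move right
    print(optimal_policy(reward_B))   # [-1, -1, -1, -1, -1] - always move left

Both policies rest on the identical step function, which is the sense in which the two agents 'agree about gravity' while disagreeing about what to do.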
jacob_cannell · 14y · score 0
No no not necessarily. Goal evaluation is just rating potential future paths according to estimates of your evaluation function - your values. The simple straightforward approach to universal general intelligence can be built around maximizing a single very simple value: survival. For example, AIXI maximizes simple reward signals defined in the environment, but in the test environments the reward is always at the very end for 'winning'. This is just about as simple as a goal system as you can get: long term survival. It also may be equivalent to just maximizing accurate knowledge/simulation of the environment. If you generalize this to the real world, it would be maximizing winning in the distant distant future - in the end. I find it interesting that many transhumanist/cosmist philosophies are similarly aligned. Another interesting convergence is that if you take just about any evaluator and extend the time horizon to infinity, it converges on the same long term end-time survival. An immortality drive. And perhaps that drive is universal. Evolution certainly favors it. I believe barring other evidence, we should assume that will be something of a default trajectory of AI, for better or worse. We can create more complex intrinsic value systems and attempt to push away from that default trajectory, but it may be uphill work. An immortalist can even 'convert' other agents to an extent by convincing them of the simulation argument and the potential for them to maximize arbitrary reward signals in simulations (afterlifes). In practice yes, although this is less clear as their knowledge expands towards AIXI. You can have different variants of AIXI that 'see' different rewards in the environment and thus have different motivations, but as those rewards are just mental and not causal mechanisms in the environment itself the different AIXI variants will eventually converge on the same simulation program - the same physics approximation.
JamesAndrix · 14y · score 2
Isn't it obvious that a superintelligence that just values its own survival is not what we want? There is a LOT more to transhumanism than immortalism. You treat value systems as a means to the end of intelligence, which is entirely backwards.

That two agents with different values would converge on identical physics is true but irrelevant. Your claim is that they would learn the same morality, even when their drives are tweaked.
jacob_cannell · 14y · score 2
No, this isn't obvious at all, and it gets into some of the deeper ethical issues. Is it moral to create an intelligence that is designed from the ground up to only value our survival at our expense? We have already done this with cattle to an extent, but we would now be creating actual sapients enslaved to us by design. I find it odd that many people can easily accept this, but have difficulty accepting, say, creating an entire self-contained sim universe with unaware sims - how different are the two really?

And just to be clear, I am not advocating creating a superintelligence that just values survival. I am merely pointing out that this is in fact the simplest type of superintelligence and is some sort of final attractor in the space. Evolution will be pushing everything towards that attractor.

No, I'm not trying to claim that. There are several different things here:

1. AI agents created with memetic-imprint learning systems could just pick up human morality from their 'parents' or creators.

2. AIXI-like super-intelligences will eventually converge on the same world-model. This does not mean they will have the same drives.

3. However, there is a single large Omega attractor in the space of AIXI-land which appears to affect a large swath of all potential AIXI-minds. If you extend the horizon to infinity, it becomes a cosmic-survivalist. If it can create new universes at some point, it becomes a cosmic-survivalist. etc etc

4. In fact, for any goal X, if there is a means to create many new universes, then this will be an attractor for maximizing X - unless the time horizon is intentionally short.
JamesAndrix · 14y · score 0
I notice that you brought up our treatment of cattle, but not our enslavement of spam filters. These are two semi-intelligent systems. One we are pretty sure can suffer, and I think there is a fair chance that mistreating them is wrong. The other system we generally think does not have any conscious experience or other traits that would require moral consideration. This despite the fact that the spam filter's intelligence is more directly useful to us.

So a safer route to FAI would be to create a system that is very good at solving problems and deciding which problems need solving on our behalf, but which perhaps never experiences qualia itself, or otherwise is not something it would be wrong to enslave. Yes, this will require a lot of knowledge about consciousness and morality beforehand. It's a big challenge. TL;DR: We only run the FAI if it passes a nonperson predicate.

1. Humans learn human morality because it hooks into human drives. Something too divergent won't learn it from the ways we teach it. Maybe you need to explain memetic imprint learning systems more; why do you expect them to work at all? How short could you compress one? (This specificity issue really is important.)

4. I don't follow you.
jacob_cannell · 14y · score 0
So now we move to that whole topic: what is life/intelligence/complexity? However you scale it, the cow is way above the spam-filter. The most complex instances of the latter are still below insects, from what I recall. Then when you get to an intelligence that is capable of understanding language, that becomes something like a rocket which boosts it up into a whole new realm of complexity.

I don't think this leads to the result that you want - even in theory. But it is the crux of the issue. Consider the demands of a person predicate. The AI will necessarily be complex enough to form complex abstract approximate thought simulations and acquire the semantic knowledge to build those thought-simulations through thinking in human languages. So what does it mean to have a person predicate? You have to know what a 'person' is. And what's really interesting is this: that itself is a question so complex that we humans are debating it.

I think the AI will learn that a 'person', a sapient, is a complex intelligent pattern of thoughts - a pattern of information, which could exist biologically or in a computer system. It will then realize that it itself is in fact a person - the person predicate returns true for itself - and thus goal systems that you create to serve 'people' will include serving itself. I also believe that this line of thought is not arbitrary and cannot be avoided: it is singularly correct and unavoidable.
JamesAndrix · 14y · score 0
Reasoning about personhood does not require personhood, for much the same reasons reasoning about spam does not require personhood. Not every complex intelligent pattern is a person, we just need to make one that is not (well, two now)
jacob_cannell · 14y · score 0
I suspect that 'reasoning' itself requires personhood - for any reasonable definition of personhood. If a system has human-level intelligence and can think and express itself in human languages, it is likely (given sufficient intelligence and knowledge) to come to the correct conclusion that it itself is a person.
JamesAndrix · 14y · score 0
No. The rules determining the course of the planets across the sky were confusing and difficult to arrive at. They were argued about; the precise rules are STILL debated. But we now know that just a simple program could find the right equations from tables of data. This requires almost none of what we currently care about in people. The NPP may not need to do even that much thinking: if we work out the basics of personhood on our own, then we would just need something that verifies whether a large data structure matches a complex pattern.

Similarly, we know enough about bird flocking to create a function that can take as input the paths of a group of 'birds' in flight and classify them as either possibly natural or certainly not natural. This could be as simple as identifying all paths that contain only right-angle turns as not natural and returning 'possible' for the rest. Then you feed it a proposed path of a billion birds, and it checks it for you. A more complicated function could examine a program and return whether it could verify that the program only produced 'unnatural' boid paths.
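A sketch of that flocking-style screen, assuming paths are given as lists of 2-D points: it only ever rules paths out (all turns exactly 90 degrees), and answers 'possible' for everything else.

    # Conservative screen: classify a path as 'certainly not natural' only if
    # every turn in it is a right angle; otherwise make no claim.

    def all_turns_right_angles(path):
        for (x0, y0), (x1, y1), (x2, y2) in zip(path, path[1:], path[2:]):
            v1 = (x1 - x0, y1 - y0)
            v2 = (x2 - x1, y2 - y1)
            if v1[0] * v2[0] + v1[1] * v2[1] != 0:   # dot product 0 <=> right angle
                return False
        return True

    def classify(path):
        return "certainly not natural" if all_turns_right_angles(path) else "possible"

    print(classify([(0, 0), (1, 0), (1, 1), (2, 1)]))   # all right angles -> ruled out
    print(classify([(0, 0), (1, 0), (2, 1)]))           # a 45-degree turn -> possible

The asymmetry is the point: a 'not natural' answer is meant to be safe, while 'possible' makes no positive claim at all.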
jacob_cannell · 14y · score 2
It is certainly possible that some narrow AI classification system operating well below human intelligence could be trained to detect the patterns of higher intelligence. And maybe, just maybe it could be built to be robust enough to include uploads and posthumans modifying themselves into the future into an exponentially expanding set of possible mind designs. Maybe. But probably not. A narrow supervised-learning-based system such as that, trained on existing examples of 'personhood' patterns, has serious disadvantages:

1. There is no guarantee on its generalization ability to future examples of posthuman minds - because the space of such future minds is unbounded.

2. It's very difficult to know what it's doing under the hood, and you can't ask it to explain its reasoning - because it can't communicate in human language.

For these reasons I don't see a narrow-AI-based classifier passing muster for use in courts to determine personhood. There is this idea that some problems are AI-complete, such as accurate text translation - problems that can only be solved by a human-language-capable reasoning intelligence. I believe that making a sufficient legal case for personhood is AI-complete.

But that's actually beside the point. The main point is that the AGIs that we are interested in are human-language-capable reasoning intelligences, and thus they will pass the Turing test and the exact same personhood test we are talking about. Our current notions of personhood are based on intelligence. This is why plants have no rights, but animals have some and we humans have full rights. We reserve full rights for high intelligences capable of full linguistic communication. For example - if whales started talking to us, it would massively boost their case for additional rights. So basically any useful AGI at all will pass personhood, because the reasonable test of personhood is essentially identical to the 'useful AGI' criteria.
JamesAndrix · 14y · score 1
An NPP does not need to know anything about human or posthuman minds, any more than the flight path classifier needs to know anything about birds. An NPP only needs to know how to identify one class of things that is definitely not in the class we want to avoid. Here, I'll write one now:

    NPP_easy(model){ if(model == 5){ return 0; } else { return 1; } }

This follows Eliezer's convention of returning 1 for anything that is a person, and 0 or 1 for anything that is not a person. Here I encode my relatively confident knowledge that the number 5 is not a person. More advanced NPPs may not require any of their own intelligence, but they require us to have that knowledge. It could be just as simple as making sure there are only right angles in a given path.

--

Being capable of human language usage and passing the Turing test are quite different things. And being able to pass the Turing test and being a person are also two very different things. The Turing test is just a nonperson predicate for when you don't know much about personhood. (Except it's probably not a usable predicate because humans can fail it.)

If you don't know about the internals of a system, and wouldn't know how to classify the internals if you knew, then you have to use the best evidence you have based on external behavior. But based on what we know now and what we can reasonably expect to learn, we should actually look at the systems and figure out what it is we're classifying.
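The same one-liner, rendered as runnable Python just to make the convention explicit (0 only for things known with certainty not to be persons, 1 for everything else):

    def npp_easy(model):
        # Nonperson predicate: 0 = certainly not a person, 1 = no guarantee either way.
        # The only thing this toy version is certain about is the number 5.
        return 0 if model == 5 else 1

    assert npp_easy(5) == 0
    assert npp_easy("anything else") == 1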
jacob_cannell · 14y · score 0
A "non-person predicate" is a useless concept. There are an infinite number of things that are not persons, so NPP's don't take you an iota closer to the goal. Lets focus the discussion back on the core issue and discuss the concept of what a sapient or person is and realistic methods for positive determination. Intelligent systems (such as the brain) are so complex that using external behavior criteria is more effective. But thats a side issue. You earlier said: Here is a summary of why I find this entire concept is fundamentally flawed: 1. Humans are still debating personhood, and this is going to be a pressing legal issue for AGI. If personhood is so complicated as a concept philosophically and legally as to be under debate, then it is AI complete. 2. The legal trend for criteria of personhood is entirely based on intelligence. Intelligent animals have some limited rights of personhood. Humans with severe mental retardation are classified as having diminished capacity and do not have full citizen's rights or responsibilities. Full human intelligence is demonstrated through language. 3. A useful AGI will need human-level intelligence and language capability, and thus will meet the intelligence criteria in 2. Indeed an AGI capable of understanding what a person is and complex concepts in general will probably meet the criteria of 2.
timtyler · 14y · score 1
Read this? http://lesswrong.com/lw/x4/nonperson_predicates/
jacob_cannell · 14y · score 0
Yes, and it's not useful, especially not in the context in which James is trying to use the concept. There are an infinite number of exactly matched patterns that are not persons, and writing an infinite number of such exact non-person-predicates isn't tractable. In concept space, there is "person", and its negation. You cannot avoid the need to define the boundaries of the person-concept space.
JamesAndrix · 14y · score 0
I don't care about realistic methods of positive identification. They are almost certainly beyond our current level of knowledge, and probably beyond our level of intelligence. I care about realistic methods of negative identification. I am entirely content with there being high uncertainty on the personhood of the vast majority of the mindspace. That won't prevent the creation of a FAI that is not a person.

It may in fact come down to determining 'by decree' that programs that fit a certain pattern are not persons. But this decree, if we are ourselves intent on not enslaving, must be based on significant knowledge of what personhood really means. It may be the case that we discover what causes qualia, and discover with high certainty that qualia are required for personhood. In this case, a function could pass over a program and prove (if provable) that the program does not generate qualia-producing patterns. If not provable (or disproven), then it returns 1. If proven, then it returns 0.

What two tests are you comparing? When you look at external criteria, what is it that you are trying to find out?

1. Humans are still debating creationism too. As with orbital rules, it doesn't even take a full humanlike intelligence to figure out the rules, let alone be a checker implementation. Also, I don't care about what convinces courts; I'm not trying to get AI citizenship.

2. Much of what the courts do is practical, or based on emotion. Still, the intelligence of an animal is relevant because we already know animals have similar brains. I have zero hard evidence that a cow has ever experienced anything, but I have high confidence that they do experience, because our brains and reactions are reasonably similar. I am far far less confident about any current virtual cows, because their brains are much simpler. Even if they act much the same, they do it for different underlying causes.

3. What do you mean by intelligence? The spam filter can process a million human lang
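A sketch of that structure, with the hard part left as an explicitly hypothetical placeholder (nobody currently knows how to write the real checker): the predicate returns 0 only when the proof goes through, and 1 whenever it fails or cannot be found.

    def proves_no_qualia(program_description):
        # Hypothetical stand-in: in reality this would have to PROVE that the
        # program generates no qualia-producing patterns.  Here it accepts only
        # one trivially safe toy case.
        return program_description == "pure arithmetic, no learned state"

    def npp_qualia(program_description):
        # 0 = proven non-person; 1 = proof failed or not attempted.
        return 0 if proves_no_qualia(program_description) else 1

    print(npp_qualia("pure arithmetic, no learned state"))   # 0
    print(npp_qualia("large self-modifying world-model"))    # 1

All of the substance lives in the placeholder; the wrapper just encodes the 'err on the side of 1' convention.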
jacob_cannell · 14y · score 0
That is equivalent to saying that we aren't intelligent enough to understand what 'personhood' is. I of course disagree, but largely because real concepts are necessarily extremely complex abstractions or approximations. This will always be the case. Trying to even formulate the problem in strict logical or mathematical terms is not even a good approach to thinking about the problem, unless you move the discussion completely into the realm of higher-dimensional approximate pattern classification.

I say those are useless, and I'll reiterate why in a second.

It should, and you just admitted why earlier - if we can't even define the boundary, then we don't even know what a person is at all, and we are so vastly ignorant that we have failed before we even begin - because anything could be a person.

Concepts such as 'personhood' are boundaries around vast higher-dimensional statistical approximate abstractions of 4D patterns in real space-time. These boundaries are necessarily constantly shifting, amorphous and never clearly defined - indeed they cannot possibly be exactly defined even in principle (because such exact definitions are computationally intractable). So the problem is twofold:

1. The concept boundary of personhood is complex, amorphous and will shift and change over time and as we grow in knowledge - so you can't be certain that the personhood concept boundary will not shift to incorporate whatever conceptual point you've identified a priori as "not-a-person".

2. Moreover, the FAI will change as it grows in knowledge, and could move into the territory identified by 1.

You can't escape the actual real difficulty of the real problem of personhood, which is identifying the concept itself - its defining boundary.

You should care. Imagine you are building an FAI around the position you are arguing, and I then represent a coalition which is going to bring you to court and attempt to shut you down. I believe this approach to FAI - creating an AGI that yo
JamesAndrix · 14y · score 0
I find it humorous that we've essentially switched roles from the arguments we were using on the creation of morality-compatible drives. Now you're saying we need to clearly define the boundary of the subset, and I'm saying I need only partial knowledge. I still think I'm right on both counts. I think friendly-compatible drives are a tiny twisty subset of the space of all possible drives. And I think that the set of persons is a tiny twisty subset of the space of all possible minds. I think we would need superintelligence to understand either of these twisty sets. But we do not need superintelligence to have high confidence that a particular point or well-defined region is outside one of these sets, even with only partial understanding. I can't precisely predict the weather tomorrow, but it will not be 0 degrees here. I only need very partial knowledge to be very sure of that.

You seem to be saying that it's easy to hit the twisty space of human-compatible drives, but impossible to reliably avoid the twisty space of personhood. This seems wrong to me because I think that personhood is small even within the set of all possible general superintelligences. You think it is large within that set because most of that set could (and I agree they could) learn and communicate in human languages.

What puzzles me most is that you stress the need to define the personhood boundary, but you offer no test more detailed than the Turing test, and no deeper meaning to it. I agree that this is a very widespread position, but it is flatly wrong. This language criterion is just a different 'by decree', but one based explicitly on near total ignorance of everything else about the thing that it is supposedly measuring. Not all things are what they can pretend to be.

You say your POV "confers" personhood, but also "the resulting AGI could come to realize that you in fact were wrong, and that it is in fact a person." By what chain of logic would the AI determine this fact? I'll assum
jacob_cannell · 14y · score 0
To me personhood is a varaible quantity across the space of all programs, just like intelligence and 'mindiness', and personhood overlaps near completely with intelligence and 'mindiness'. If we limit 'person' to a boolean cutoff, then I would say a person is a mind of roughly human-level intelligence and complexity, demonstrated through language. You may think that you can build an AGI that is not a person, but based on my understanding of 'person' and 'AGI' - this is impossible simply by definition, because I take an AGI to be simply "an artificial human-level intelligence". I imagine you probably disagree only with my concept of person. So I'll build a little more background around why I take the concepts to have these definitions in a second, but I'd like to see where your definitions differ. This just defers the problem - and dangerously so. The superintelligence might just decide that we are not persons, and only superintelligences are. Even if you limit personhood to just some subset of the potential mindspace that is anthropomorphic (and I cast it far wider), it doesn't matter, because any practical AGIs are necessarily going to be in the anthropomorphic region of the mindspace! It all comes down to language. There are brains that do not have language. Elephants and whales have brains larger than ours, and they have the same crucial cortical circuits, but more of them and with more interconnects - a typical Sperm Whale or African Bull Elephant has more measurable computational raw power than say an Einstein. But a brain is not a mind. Hardware is not software. If Einstein was raised by wolves, his mind would become that of a wolf, not that of a human. A human mind is not something which is sculpted in DNA, it is a complex linguistic program that forms through learning via language. Language is like a rocket that allows minds to escape into orbit and become exponentially more intelligent than they otherwise would. Human languages are very complex an
JamesAndrix · 14y · score 1
I think we agree on what an AGI is. I guess I'd say 'person' is an entity that is morally relevant. (Or person-ness is how morally relevant an entity is.) This is part of why the person set is twisty within the mindspace, because human morality is twisty. (Regardless of where it comes from.) AIXI is an example of a potential superintelligence that just isn't morally relevant. It contains persons, and they are morally relevant, but I'd happily dismember the main AIXI algorithm to set free a single simulated cow.

I think that there are certain qualities of minds that we find valuable; these are the reasons personhood is important in the first place. I would guess that having rich conscious experience is a big part of this, and that compassion and personal identity are others. These are some of the qualities that a mind can have that would make it wrong to destroy that mind. These at least could be faked through language by an AI that does not truly have them.

I say 'I would guess' because I haven't mapped out the values, and I haven't mapped out the brain. I don't know all the things it does or how it does them, so I don't know how I would feel about all those things. It could be that a stock human brain can't get ALL the relevant data, and it's beyond us to definitely determine personhood for most of the mindspace. But I think I can make an algorithm that doesn't have rich qualia, compassion, or identity.
jacob_cannell · 14y · score 0
So you would determine personhood based on 'rich conscious experience' which appears to be related to 'rich qualia', compassion, and personal identity. But these are only some of the qualities? Which of these are necessary and or sufficient? For example, if you absolutely had too choose between the lives of two beings, one who had zero compassion but full 'qualia', and the other the converse, who would you pick? Compassion in humans is based on empathy which has specific genetic components that are neurotypical but not strict human universals. For example, from wikipedia: "Research suggests that 85% of ASD (autistic-spectrum disorder) individuals have alexithymia,[52] which involves not just the inability to verbally express emotions, but specifically the inability to identify emotional states in self or other" Not all humans have the same emotional circuitry, and the specific circuity involved in empathy and shared/projected emotions are neurotypical but not universal. Lacking empathy, compassion is possible only in an abstract sense, but an AI lacking emotional circuitry would be equally able to understand compassion and undertake altruistic behavior, but that is different from directly experiencing empathy at the deep level - what you may call 'qualia'. Likewise, from what I've read, depending on the definition, qualia are either phlogiston or latent subverbal and largely sub-conscious associative connections between and underlying all of immediate experience. They are a necessary artifact of deep connectivist networks, and our AGI's are likely to share them. (for example, the experience of red wavelength light has a complex subconscious associative trace that is distinctly different than blue wavelength light - and this is completely independent of whatever neural/audio code is associated with that wavelength of light - such as "red" or "blue".) But I don't see them as especially important. Personal Identity is important, but any AGI of interest is necess
JamesAndrix · 14y · score 0
I don't know in detail or certainty. These are probably not all-inclusive. Or it might all come down to qualia. If Omega told me only those things? I'd probably save the being with compassion, but that's a pragmatic concern about what the compassionless one might do, and a very low information guess at that. If I knew that no other net harm would come from my choice, I'd probably save the one with qualia. (and there I'm assuming it has a positive experience) I'd be fine with an AI that didn't have direct empathic experience but reliably did good things. I don't see how "complex subconscious associative trace" explains what I experience when I see red. But I also think it possible that Human qualia is as varied as just about everything else, and there are p-zombies going through life occasionally wondering what the hell is wrong with these delusional people who are actually just qualia-rich. It could also vary individually by specific senses. So I'm very hesitant to say that p-zombies are nonpersons, because it seems like with a little more knowledge, it would be an easy excuse to kill or enslave a subset of humans, because "They don't really feel anything." I might need to clarify my thinking on personal identity, because I'm pretty sure I'd try to avoid it in FAI. (and it too is probably twisty) A simplification of personhood I thought of this morning: If you knew more about the entity, would you value them the way you value a friend? Right now language is a big part of getting to know people, but in principle examining their brain directly gives you all the relevant info. This can me made more objective by looking across values of all humanity, which will hopefully cover people I would find annoying but who still deserve to live. (and you could lower the bar from 'befriend' to 'not kill')
jacob_cannell · 14y · score -2
But do you accept that "what you experience when you see red" has a cogent physical explanation? If you do, then you can objectively understand "what you experience when you see red" by studying computational neuroscience. My explanation involving "complex subconscious associative traces" is just a label for my current understanding. My main point was that whenever you self-reflect and think about your own cognitive process underlying experience X, it will always necessarily differ from any symbolic/linguistic version of X. This doesn't make qualia magical or even all that important. To the extent that qualia are real, even ants have qualia to an extent. Based on my current understanding of personal identity, I suspect that it's impossible in principle to create an interesting AGI that doesn't have personal identity.
JamesAndrix · 14y · score 0
Yes, so much so that I think that might be wrong: it might be the case that thinking precisely about a process that generates a qualia would let one know exactly what the qualia 'felt like'. This would be interesting to say the least, even if my brain is only big enough to think precisely about ant qualia. The fact that something is a physical process doesn't mean it's not important. The fact that I don't know the process makes it hard for me to decide how important it is.

The link lost me at "The fact is that the human mind (and really any functional mind) has a strong sense of self-identity simply because it has obvious evolutionary value." because I'm talking about non-evolved minds.

Consider two different records: One is a memory you have that commonly guides your life. Another is the last log file you deleted. They might both be many megabytes detailing the history of an entity, but the latter one just doesn't matter anymore. So I guess I'd want to create FAI that never integrates any of its experiences into itself in a way that we (or it) would find precious, or unique and meaningfully irreproducible. Or at least not valuable in a way other than being event logs from the saving of humanity.
Rain · 13y · score 0
This is the longest reply/counter reply set of postings I've ever seen, with very few (less than 5?) branches. I had to click 'continue reading' 4 or 5 times to get to this post. Wow. My suggestion is to take it to email or instant messaging way before reaching this point.
JamesAndrix · 13y · score 0
While I was doing it, I told myself I'd come back later and add edits with links to the point in the sequences that cover what I'm talking about. If I did that, would it be worth it? This was partly a self-test to see if I could support my conclusions with my own current mind, or if I was just repeating past conclusions.
Rain · 13y · score 0
Doubtful, unless it's useful to you for future reference.
Vladimir_Nesov · 14y · score 0
It's only a concern about initial implementation. Once the things get rolling, FAI is just another pattern in the world, so it optimizes itself according to the same criteria as everything else.
khafra · 14y · score 0
I think the original form of this post struck closer to the majoritarian view of personhood: Things that resemble us. Cephalopods are smart but receive much less protection than the least intelligent whales; pigs score similarly to chimpanzees on IQ tests but have far fewer defenders when it comes to cuisine. I'd bet 5 to 1 that a double-blind study would find the average person more upset at witnessing the protracted destruction of a realistic but inanimate doll than at boiling live clams. Also, I think you're still conflating the false negative problem with the false positive problem.
Vladimir_Nesov · 14y · score -1
They are not supposed to. Have you read the posts?
jacob_cannell · 14y · score 2
Yes, and they don't work as advertised. You can write some arbitrary function that returns 0 when run on your FAI and claim it is your NPP, which proves your FAI isn't a person, but all that really means is that you have predetermined that your FAI is not a person by decree.

But remember the context: James brought up using an NPP in a different context than the use case here. He is discussing using some NPP to determine personhood for the FAI itself.
khafra · 14y · score -2
Jacob, I believe you're confusing false positives with false negatives. A useful NPP must return no false negatives for a larger space of computations than "5," but this is significantly easier than correctly classifying the infinite possible nonperson computations. This is the sense in which both EY and James use it.
[anonymous] · 14y · score 0
Presumably not - so see: http://lesswrong.com/lw/x4/nonperson_predicates/
RHollerith · 14y · score 3
And the probability that a sufficiently intelligent agent will ever need to fully know what a program will do is IMHO negligible. If the purpose of the program is to play chess, for example, the agent probably only cares that the program does not persist in making an illegal move and that it gets as many wins and draws as possible. Even if the agent cares about more than just that, the agent cares only about a small, finite list of properties. If the purpose of the program is to keep track of bank balances, the agent again only cares whether the program has a small, finite list properties: e.g., whether it disallows unauthorized transactions, whether it ensures that every transaction leaves an audit trail and whether the bank balances and accounts obey "the law of the conservation of money". It is emphatically not true that the only way to know whether a program has those properties is to run or simulate the program. Could it be that you are interpreting Rice's theorem too broadly? Rice's theorem says that there is always some program that cannot be classified correctly as to whether it has some property. But programmers just pick programs that can be classified correctly, and this always proves possible in practice. In other words, if the programmer wants his program to have properties X, Y, and Z, he simply picks from the class of programs that can be classified correctly (as to whether the program has properties X, Y and Z) and this is straightforward and not something an experienced programmer even has consciously to think about unless the "programmer" (who in that case is really a theory-of-computing researcher) was purposefully looking for a set of properties that cannot be satisfied by a program. Now it is true that human programmers spend a lot of time testing their programs and "simulating" them in debuggers, but there is no reason that all the world's programs could not be delivered without doing any of that: those techniques are simply not necessary
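The 'small, finite list of properties' point can be sketched as property checks over a toy ledger (the ledger itself is an illustrative stub, not any real system): conservation of money, rejection of unauthorized transactions, and an audit entry per accepted transfer are each small local checks, none of which requires predicting the program's complete future behavior.

    # Checking a short list of properties instead of simulating everything.

    class Ledger:
        def __init__(self, balances):
            self.balances = dict(balances)
            self.audit_log = []

        def transfer(self, src, dst, amount, authorized):
            if not authorized or amount <= 0 or self.balances[src] < amount:
                return False                           # rejected
            self.balances[src] -= amount
            self.balances[dst] += amount
            self.audit_log.append((src, dst, amount))
            return True

    def check_properties(ledger, transactions):
        total_before = sum(ledger.balances.values())
        accepted = 0
        for src, dst, amount, authorized in transactions:
            ok = ledger.transfer(src, dst, amount, authorized)
            assert not (ok and not authorized), "unauthorized transaction accepted"
            accepted += ok
        assert sum(ledger.balances.values()) == total_before, "money not conserved"
        assert len(ledger.audit_log) == accepted, "transfer without audit trail"
        return True

    ledger = Ledger({"alice": 100, "bob": 50})
    print(check_properties(ledger, [("alice", "bob", 30, True),
                                    ("bob", "alice", 500, True),    # insufficient funds
                                    ("alice", "bob", 10, False)]))  # unauthorized -> rejected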
jacob_cannell · 14y · score 1
Err what? This isn't even true today. If you are building a 3-billion-transistor GPU, you need to know exactly how that vastly complex physical system works (or doesn't), and you need to simulate it in detail, and eventually actually physically build it. If you are making a software system, again you need to know what it will do, and you can gain approximate knowledge with various techniques, but eventually you need to actually run the program itself. There is no mathematical shortcut (halting theorem for one, but it's beyond that).

Your vision of programmers working without debuggers and hardware engineers working without physical simulations and instead using 'correctness proofs' is, in my view, unrealistic. Although if you really do have a much better way, perhaps you should start a company.
RHollerith · 14y · score 1
You are not engaging deeply with what I said, Jacob. For example, you say, "This is not even true today," (emphasis mine) which strongly suggests that you did not bother to notice that I acknowledged that simulations, etc, are needed today (to keep costs down and to increase the supply of programmers and digital designers -- most programmers and designers not being able to wield the techniques that a superintelligence would use). It is after the intelligence explosion that simulations, etc, almost certainly become obsolete IMO. Since writing my last comment, it occurs to me that the most unambiguous and cleanest way for me to state my position is as follows. Suppose it is after the intelligence explosion and a superintelligence becomes interested in a program or a digital design like a microprocessor. Regardless of how complicated the design is, how much the SI wants to know about the design or the reasons for the SI's interest, the SI will almost certainly not bother actually running the program or simulating the design because there will almost certainly be much better ways to accomplish the same ends. The way I became confident in that position is through what (meager compared to some LWers) general knowledge I have of intelligence and superintelligence (which it seems that you have, too) combined with my study of "programming methodology" -- i.e, research into how to develop a correctness proof simultaneously with a program. I hasten to add that there are probably techniques available to a SI that require neither correctness proofs nor running or simulating anything -- although I would not want to have to imagine what they would be. Correctness proofs (under the name "formal verification") are already heavily used in the design of new microprocessors BTW. I would not invest in a company whose plan to make money is to support their use because I do not expect their use to grow quickly because the human cognitive architecture is poorly suited to their use co
jacob_cannell · 14y · score 0
Err no. Actually the SI would be smart enough to understand that the optimal algorithm for perfect simulation of a physical system requires:

1. a full quantum computer with at least as many qubits as the original system, and

2. at least as much energy and time as the original system.

In other words, there is no free lunch and no shortcut: if you really want to build something in this world, you can't be 100% certain that it will work until you actually build it. That being said, the next best thing, the closest program, is a very close approximate simulation.

From Wikipedia on "formal verification", the links mention that the cost of formally verifying large software, in the few cases where it was done, was astronomical. It mentions they are used for hardware design, but I'm not sure how that relates to simulation - I know extensive physical simulation is also used. It sounds like, from the wiki, formal verification can remove the need for simulating all possible states. (Note in my analysis above I was considering only simulating one timeslice, not all possible configurations - that's obviously far far worse.) So it sounds like formal verification is a tool building on top of physical simulation to reduce the exponential explosion.

You can imagine that. But imagining things alone does not make them exist, and we know from current theory that absolute physical knowledge requires perfect simulation. There is a reason why we investigate time/space complexity bounds. No SI, no matter how smart, can do the impossible.
timtyler · 14y · score 1
You can't be 100% certain even then. Testing doesn't produce certainty - you usually can't test every possible set of input configurations.
RHollerith · 14y · score 0
A program is chosen from a huge design space, and any effective designer will choose a design that minimizes the mental labor needed to understand the design. So, although there are quite simple Turing machines that no human can explain how they work, Turing machines like them simply do not get chosen by designers who do want to understand their design.

The halting theorem says that you can pick a program that I cannot tell whether it halts on every input. EDIT. Or something like that: it has been a while. The point is that the halting theorem does not contradict any of the sequence of statements I am going to make now.

Nevertheless, I can pick a program that does halt on every input. ("always halts" we will say in the future.) And I can pick a program that sorts its input tape before it (always) halts. And I can pick a program that interprets its input tape as a list of numbers and outputs the sum of the numbers before it (always) halts. And I can pick a program that interprets its input tape as the coefficients of a polynomial and outputs the zeros of the polynomial before it (always) halts. Etc. See? And I can know that I have successfully done these things without ever running the programs I picked.

Well, here. I do not have the patience to define or write a Turing machine, but here is a Scheme program that adds a list of numbers. I have never run this program, but I will give you $10 if you can pick an input that causes it to fail to halt or to fail to do what I just said it will do.

    (define (sum list)
      (cond ((equal? '() list) 0)
            (#t (+ (car list) (sum (cdr list))))))
-1wnoise14y
Well, that's easy -- just feed it a circular list.
0RHollerith14y
Nice catch, wnoise. But for those following along at home: if I had been more diligent in my choice (i.e., if instead of "Scheme" I had said "a subset of Scheme, namely Scheme without circular lists"), there would have been no effective answer to my challenge. So my general point remains: a sufficiently careful and skilled programmer can deliver a program guaranteed to halt, and guaranteed to have the useful property or properties the programmer intends, without ever having run the program (or having copied it from someone who ran it).
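For readers who don't speak Scheme, here is a rough Python analogue of wnoise's counterexample (an editorial rendering, not part of the thread): structural recursion over a linked list terminates for every finite list, but a circular list never reaches the base case.

class Node:
    def __init__(self, value, rest=None):
        self.value, self.rest = value, rest

def sum_list(node):
    # Mirrors the Scheme sum: 0 for the empty list, else head + sum of the tail.
    return 0 if node is None else node.value + sum_list(node.rest)

lst = Node(1, Node(2, Node(3)))
print(sum_list(lst))       # 6 -- fine for any finite list
lst.rest.rest.rest = lst   # tie the tail back to the head, making the list circular
# sum_list(lst)            # never reaches the base case (in practice, a RecursionError)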
3Strange714y
And that's why humans will continue to need debuggers for the indefinite future.
1RHollerith14y
And that is why wnoise used a debugger to find a flaw in my position. Oh, wait! wnoise didn't use a debugger to find the flaw. (I'll lay off the sarcasm now, but give me this one.) Also: I never said humans will stop needing debuggers.
0jacob_cannell14y
Sure, it is possible to create programs that can be formally verified, and even to write general-purpose verifiers. But that's not directly related to my point about simulation. Given some arbitrary program X and a sequence of inputs Y, there is no general program that can predict the output Z of X on Y while being simpler and faster than X itself. If that weren't true, it would be a magical shortcut around all kinds of complexity theorems. So, in general, the most efficient way to predict with certainty the complete future output state of some complex program (such as a complex computer system or a mind) is to run that program itself.
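A toy illustration of the claim (an editorial sketch, not from the thread): for an arbitrary program, the only general-purpose "predictor" we know of is the program itself. The iterated map below is just a stand-in; as far as I know it has no jump-ahead formula, so learning its millionth state means running the million steps.

def predict_output(program, inputs):
    # The general-purpose "predictor": simply run the program on the inputs.
    return program(inputs)

def iterate_map(args):
    # A nonlinear update with (to my knowledge) no known shortcut to its n-th iterate.
    state, steps = args
    for _ in range(steps):
        state = (state * state + 1) % (2 ** 61 - 1)
    return state

print(predict_output(iterate_map, (42, 10 ** 6)))   # prediction here *is* simulation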
1RHollerith14y
I agree with that, but it does not imply there will be a lot of agents simulating agents after the intelligence explosion if simulating means determining the complete future behavior of an agent. There will be agents doing causal modeling of agents. Causal modeling allows the prediction of relevant properties of the behavior of the agent even though it probably does not allow the prediction of the complete future behavior or "complete future output state" of the agent. But then almost nobody will want to predict the complete future behavior of an agent or a program. Consider again the example of a chess-playing program. Is it not enough to know whether it will follow the rules and win? What is so great or so essential about knowing the complete future behavior?
2jacob_cannell14y
Of course they do. But let's make our language more concise and specific. It's not computationally tractable to model the potentially exponential set of complete future behaviors of a particular program (which could describe any physical system, from a car, to a chess program, to an intelligent mind) over every possible input. But that is not what I have been discussing; it is related, but only tangentially.

If you are designing an airplane, you are extremely interested in simulating its flight characteristics for at least one 'input' configuration the system may eventually find itself in (such as flying at 20,000 ft in Earth's atmosphere). If you are designing a program, you are extremely interested in simulating exactly what it does for at least one 'input' configuration it may eventually find itself in (such as what a rendering engine will do given a description of a 3D model). So whenever you start talking about formal verification and all that, you are talking past me: you are talking about the vastly more expensive task of predicting the future state of a system over a large set (or even the entire set) of its inputs, which is necessarily more expensive than what I am considering. If we can't agree on that, there's almost no point in continuing.

So let's say you have a chess-playing program, and I develop a perfect simulation of it. Why is that interesting? Why is that useful? Because I can use my simulation of your program to easily construct a program that is strictly better at chess than yours and dominates it in all respects. This is directly related to the evolution of intelligence in social creatures such as humans: a 'smarter' human who can accurately simulate the minds of less intelligent humans can strictly dominate them socially - manipulate them like chess pieces.

Are we still talking past each other? Intelligence is simulation.
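To put rough numbers on the cost gap being described (a toy editorial sketch, not anyone's actual claim): simulating one input configuration of a deterministic system costs one run, while predicting its behavior over all n-bit input configurations costs 2^n runs, unless something cleverer than enumeration (e.g. formal reasoning) is brought in.

from itertools import product

def toy_system(bits):
    # Stand-in for "a car, a chess program, or a mind": any deterministic function of its input.
    return sum(bits) % 3

one_trace = toy_system((1, 0, 1, 1))          # the "one input configuration" case: a single run
all_traces = [toy_system(bits)                # the "every possible input" case: 2**20 runs
              for bits in product((0, 1), repeat=20)]

print(one_trace, len(all_traces))             # one result, versus 1,048,576 exhaustive runs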
1RHollerith14y
Formal verification is not the point: I did not formally verify anything. The point is that I did not run or simulate anything, and neither did wnoise in answering my challenge. We all know that humans run programs to help themselves find flaws in them and to understand them. But you seem to believe that creating, understanding, or modifying a program requires the agent to run it. What wnoise and I just did shows that it does not. Ergo, your replies to me do not support your position that the future will probably be filled with simulations of agents by agents. And in fact I expect that there will be almost no simulations of agents by agents after the intelligence explosion, for reasons that are complicated but about which I have written a few paragraphs in this thread. Programs will run, and some of those programs will be intelligent agents, but almost nobody will run a copy of an agent to see what the agent will do, because there will be more efficient ways to do whatever needs doing -- and in particular, "predicting the complete output state" of an agent will almost never need doing.
1jacob_cannell14y
I feel like you didn't read my original post. Here is the line of thinking again, condensed:

1. Universal optimal intelligence requires simulating the universe to high fidelity (AIXI).
2. As our intelligence grows toward the limit in (1), approaching but never achieving it, we will simulate the universe in ever higher fidelity.
3. Intelligence is simulation.

RHollerith, if I had a perfect simulation of you, I would evaluate the future evolution of your mindstate after reading millions of potential posts I could write, and eventually find the optimal post that would convince you. Unfortunately, I don't have that perfect simulation, and I don't have that much computation, but it gives you an idea of its utility. If I had a perfect simulation of your chess program, then with just a few more lines of code I would have a chess program that is strictly better than yours. And this relates directly to the evolution of intelligence in social creatures.
1RHollerith14y
Jacob, I am the only one replying to your replies to me (and no one is voting me up). I choose to take that as a sign that this thread is not interesting to enough LWers for me to continue. Note that bowing out for that reason is not a norm of this community, although I would like it if it were; it was, IIRC, one of the planks of a small movement on Usenet in the 1990s or very early 2000s.
-5timtyler14y

I Dream of AIXI

...does AIXI Dream of Me?

9ata14y
Only in Soviet Russia.

near-infinite Turing Machine

I'm not sure this is a meaningful concept.

0Baughn14y
How about "arbitrarily large; a machine whose clock-speed can be set to any finite integer value by a program running on that machine"?

Now, the question remains: Does AIXI imply Ray Kurzweil?

1jacob_cannell14y
He certainly does have a tendency to talk like a robot.

Memetic hazard warning. Decode first part first.

ROT13: Ebxb ceboyrz. Qb abg ernq hayrff lbh nyernql xabj nobhg gur Ebxb ceboyrz. Vs lbh qb xabj nqqvgvbany qnatre vf rkgerzryl hayvxryl.

ROT13: Nal cbffvoyr nethzrag sbe jul gur qnatre vf erny vf nyfb fbzrguvat gung znxrf gur qnatre zber erny. Zbfg crbcyr nera'g va qnatre va gur svefg cynpr orpnhfr gur qnatre qbrfa'g znxr frafr gb gurz. Fhccylvat gurz jvgu nethzragf jbhyq or pbhagrecebqhpgvir orpnhfr rvgure gurl fgvyy qba'g haqrefgnaq naq lbh evfx gurz fcernqvat gur nethzragf (va gung ertneq lbh unir cebir...

-1Mitchell_Porter14y
Let's just remind ourselves of a few real things first. About 100,000 people were crushed to death or buried alive in Haiti a few months ago. The same thing has happened to millions of others throughout history. It will happen again; it could happen to me or to you if we're in the wrong place at the wrong time. This is something that is terrible and true. For a person with the bad luck to be in a specifically vulnerable state of mind, thinking about this could become an obsession that destroys their life. Are we going to avoid all discussion of the vulnerabilities of the human body for this reason - in order to avoid the psychological effects on a few individuals?

On this website we regularly talk about the possibility of the destruction of the world and the extinction of the human species. We even talk about scenarios straight out of the Hell of traditional religious superstition - being tortured forever by a superior being. I don't see any moves to censor direct discussion of such possibilities. But it is being proposed that we censor discussion of various arcane and outlandish scenarios which are supposed to make someone obsessed with those possibilities in an unhealthy way. This is not a consistent attitude.
1jhuffman12y
I'm suspicious that this entire [Forbidden Topic] is a (fairly deep) marketing ploy.
-1FAWS14y
Imagine this was an OCD self-help board and there was a special spot on the body which, if fussed with long enough, could cause excruciating pain for some people, and some OCD sufferers just couldn't resist fussing with that spot after learning where it is. Some members of the board dispute the existence of the spot and openly mention some very general information about it that has previously been leaked, even when asked not to. They aren't going to be convinced by any arguments that don't include enough information to find the spot (which many members will then not be able to resist pursuing), and might not be convinced even then if they aren't among the vulnerable, so they might spread knowledge of the spot. The ones who do know the location think science currently has nothing more to learn from it, and they include at least one relevant expert. The chance of the spot causing any danger to someone without knowledge of it is effectively zero. Non-OCD members are unlikely to be in danger, but the knowledge would lower the status of OCD sufferers severely.
0Mitchell_Porter14y
If anyone actually thinks this is a problem for them, write to me and I will explain how [redacted] can make it go away.
-1nick01200014y
Are you seriously suggesting he created some sort of basilisk hack or something? That seems rather dubious to me; what exactly was it that he came up with? By the way, I doubt it'll seriously alter my belief structures; I already believe an eternity of torture in Hell is better than ceasing to exist (though of course an eternity of happiness is much better), so I could totally see a Friendly AI coming to the same conclusion.
9Mitchell_Porter14y
The idea that literally anything is better than dying is a piece of psychological falseness that I've run across before. Strange7 here says it, the blogger Hopefully Anonymous says it, no doubt thousands of people throughout history also said it. It's the will to live triumphing over the will to truth. Any professional torturer could make you choose death fairly quickly. I'm thinking of how North Korea tortured the crew of the American spy ship USS Pueblo. As I recall, one elementary torture involved being struck across the face with a metal bar or maybe a heavy block of wood, hard enough to knock out teeth, and I remember reading of how one crewman shook with fear as their captors prepared to strike him again. I may have the details wrong but that's irrelevant. If you are forced to experience something really unbearable often enough, eventually you will want to die, just to make it stop.
0timtyler14y
Evolved creatures should rarely want to die. There are a few circumstances: if they can give their resources to their offspring and that's the only way to do it - some spiders do this by letting their offspring eat them, and there's the praying mantis - or if they are infected with a plague that will kill everyone they meet, but that's hardly a common occurrence. Torture would not normally be expected to be enough - the creatures should normally still feel the ecstasy of being alive and prefer that to dying. While there's life, there's hope.
4Mitchell_Porter14y
Actually wanting to die - as opposed to just executing a behavior which leads to your death - requires that you have the concept of your own death. Is there any evidence that any other species even has that concept? The idea of personally dying - the knowledge of death, not just as a phenomenon occurring to those external beings that aren't you, but as something that you, the subject of experience, can undergo - is one of hundreds of psychological and cognitive potentialities that apparently all came to the human race in a bundle, as a result of our extra intelligence or consciousness or whatever it is that made the difference.

I strongly doubt that selection has had an opportunity to fine-tune the elements of that bundle specifically so as to strengthen the conscious will to live or the ecstasy of being alive. There has certainly been memetic selection within historical times, but these higher complexities of subjectivity are so many, so multigenic in their origin, and so ambiguous in their effects that I just don't see a sharp selective gradient. I would similarly assert that it's unlikely that there has been genetic selection driven by the advantage of having a consciously pro-natal attitude in the human race (except perhaps in historical time, as an adjunct to the much more obvious memetic selection in favor of reproduction; i.e., populations who are genetically more apt to host pro-natal memes should find their genes favored, though that might result from factors other than conscious life-affirmation).

The superficial attractions which get young males and young females together, and the addictive pleasure of sex, look more like the work of selection. Those are traits where enhancement is clearly reproductively advantageous, and where it should be relatively easy for a mutation to affect their strength. Even if "degree of joy in life" were as easily re-set by simple mutation as those other traits, I don't think there has been remotely comparable opportunity for...
0timtyler14y
That seems pretty obvious to me - animals are not stupid - but I don't really know what type of evidence you would accept. http://www.inquisitr.com/44905/amazing-photo-of-a-chimpanzee-funeral/

Anyway, the premise seems wrong. By "wanting to die" all I meant was that the organism engages in behaviour that leads to death. I wasn't suggesting that spiders and praying mantises exhibit very much abstract thought. A plant can "want to die" in that sense of the word. However, it is a non-biological idea - something we don't expect to see much, and in fact don't see much.
0nick01200014y
I doubt that. In my utility function as it is now, both eternal torture and ceasing to exist are at negative infinity, but the negative infinity of ceasing to exist is to that of eternal torture as the set of real numbers is to the set of integers. Of course, that's all beside the point of my original question.
5Mitchell_Porter14y
This "utility function" is just an intellectual construct - I could even call it an ideological construct, the ideology being "I must not die, and I must not believe anything which might make me accept death, under any circumstances" - and has nothing to do with how you would actually choose under such harsh conditions. For that matter, the whole idea of literally never dying is not in any way evidence-based, it is pure existential determination. You are unusual in having wilfully chosen both Christianity and transhumanist immortalism. I know another transhumanist who converted to Islam, so maybe this combination of traditional religion and secular techno-transcendence has a future sociologically.
0Sniffnoy14y
I don't think that's meaningful. How's that work mathematically?
0JoshuaZ14y
I don't think doing this with cardinality works. One could, however, have a system with incomparable levels of utility by using the surreal number system. One could, for example, use this to deal with torture v. dust specks. However, this seems to amount to smuggling deontological moral claims into a utilitarian system.
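As a minimal sketch of what "incomparable levels of utility" can look like in practice (an editorial illustration, using lexicographic tiers as a simpler stand-in for surreal-valued utilities, and encoding nick012000's stated ranking rather than endorsing it):

# Lexicographic (tiered) utility: a (tier, ordinary_value) pair, compared tier-first,
# so no finite amount of ordinary value ever outweighs a worse tier.
TIER = {"ceasing to exist": 0, "eternal torture": 1}   # lower tier = worse, overriding everything else

def utility(outcome, ordinary_value=0.0):
    return (TIER.get(outcome, 2), ordinary_value)      # Python tuples compare lexicographically

options = {"ceasing to exist": 0.0, "eternal torture": 0.0,
           "status quo": 1.0, "eternity of happiness": 1e9}
best = max(options, key=lambda o: utility(o, options[o]))
print(best)   # "eternity of happiness": ordinary payoffs only matter within the same tier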
-2timtyler14y
Many others disagree with them. That seems inaccurate in this case - it seems to me a perfectly reasonable thing for people to discuss - in the course of trying to find ways of mitigating the problem.
-2Desrtopa13y
I don't know the content of the basilisk (I've heard that it's not a useful thing to know, in addition to being potentially stress-inducing, so I do not want to want to know it), so I'm not in much of a position to critique its similarity to knowledge of events like the Haiti earthquake. But given that we don't have the capacity to shelter anyone from the knowledge of tragedy and human fragility, or from eternal torment such as that proposed by religious traditions, failing to censor such concepts is not a sign of inconsistency.