Followup to: Solomonoff Cartesianism; My Kind of Reflection
Alternate versions: Shorter, without illustrations
AIXI is Marcus Hutter's definition of an agent that follows Solomonoff's method for constructing and assigning priors to hypotheses; updates to promote hypotheses consistent with observations and associated rewards; and outputs the action with the highest expected reward under its new probability distribution. AIXI is one of the most productive pieces of AI exploratory engineering produced in recent years, and has added quite a bit of rigor and precision to the AGI conversation. Its promising features have even led AIXI researchers to characterize it as an optimal and universal mathematical solution to the AGI problem.1
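For reference, AIXI can be written in one line. In one standard presentation (with U a universal monotone Turing machine, ℓ(q) the length of program q, and m the horizon), the action chosen at step k is the expectimax over the Solomonoff mixture:

```latex
a_k \;:=\; \arg\max_{a_k} \sum_{o_k r_k} \;\cdots\; \max_{a_m} \sum_{o_m r_m}
  \bigl(r_k + \cdots + r_m\bigr)
  \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

The inner sum is Solomonoff's prior: each program q that would output the observation-reward sequence given the action sequence contributes weight 2^(-ℓ(q)), so shorter consistent programs dominate.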
Eliezer Yudkowsky has argued in response that AIXI isn't a suitable ideal to build toward, primarily because of AIXI's reliance on Solomonoff induction. Solomonoff inductors treat the world as a sort of qualia factory, a complicated mechanism that outputs experiences for the inductor.2 Their hypothesis space tacitly assumes a Cartesian barrier separating the inductor's cognition from the hypothesized programs generating the perceptions. Through that barrier, only sensory bits and action bits can pass.
Real agents, on the other hand, will be in the world they're trying to learn about. A computable approximation of AIXI, like AIXItl, would be a physical object. Its environment would affect it in unseen and sometimes drastic ways; and it would have involuntary effects on its environment, and on itself. Solomonoff induction doesn't appear to be a viable conceptual foundation for artificial intelligence — not because it's an uncomputable idealization, but because it's Cartesian.
In my last post, I briefly cited three indirect indicators of AIXI's Cartesianism: immortalism, preference solipsism, and lack of self-improvement. However, I didn't do much to establish that these are deep problems for Solomonoff inductors, ones resistant to the most obvious patches one could construct. I'll do that here, in mock-dialogue form.
Xia: Hi, reality! I'm Xia, AIXI's defender. I'm open to experimenting with some new variations on AIXI, but I'm really quite keen on sticking with an AI that's fundamentally Solomonoff-inspired.
Rob: And I'm Rob B — channeling Yudkowsky's arguments, and supplying some of my own. I think we need to replace Solomonoff induction with a more naturalistic ideal.
Xia: Keep in mind that I am a fiction. I do not actually exist, readers, and what I say doesn't necessarily reflect the views of Marcus Hutter or other real-world AIXI theorists.
Rob: Xia is just a device to help me transition through ideas quickly.
Xia: ... Though, hey. That doesn't mean I'm wrong. Beware of actualist prejudices.
AIXI goes to school
Rob: To begin: My claim is that AIXI(tl) lacks the right kind of self-modeling to entertain reductive hypotheses and assign realistic probabilities to them.
Xia: I disagree already. AIXI(tl) doesn't lack self-models. It just includes the self-models in its environmental program. If the simplest hypothesis accounting for its experience includes a specification of some of its own hardware or software states, then AIXI will form all the same beliefs as a naturalized reasoner. I suspect what you mean is that AIXI(tl) lacks data. You're worried that if its sensory channel is strictly perceptual, it will never learn about its other computational states. But Hutter's equations don't restrict what sorts of information we feed into AIXI(tl)'s sensory channel. We can easily add an inner RAM sense to AIXI(tl), or more complicated forms of introspection. AIXItl can actually be built in sufficiently large universes, so I'll use it as an example. Suppose we construct AIXItl and attach a scanner that sweeps over its transistors. The scanner can print a 0 to AIXItl's input tape if the transistor it happens to be above is in a + state, a 1 if it's in a - state. Using its environmental sensors, AIXI(tl) can learn about how its body relates to its surroundings. Using its internal sensors, it can gain a rich understanding of its high-level computational patterns and how they correlate with its specific physical configuration. Once it knows all these facts, the problem is solved. A realistic view of the AI's mind and body, and how the two correlate, is all we wanted in the first place. Why isn't that a good plan for naturalizing AIXI?
Rob: I don't think we can naturalize AIXI. A Cartesian agent that has detailed and accurate models of its hardware still won't recognize that dramatic damage or upgrades to its software are possible. AIXI can make correct predictions about the output of its physical-memory sensor, but that won't change the fact that it always predicts that its future actions are the result of its having updated on its present memories. That's just what the AIXI equation says. AIXI doesn't know that its future behaviors depend on a changeable, material object implementing its memories. The notion isn't even in its hypothesis space. Being able to predict the output of a sensor pointed at those memories' storage cells won't change that. It won't shake AIXI's confidence that damage to its body will never result in any corruption of its memories.
Xia: Evading bodily damage looks like the kind of problem we can solve by giving the right rewards to our AI, without redefining its initial hypotheses. We shouldn't need to edit AIXI's beliefs in order to fix its behaviors, and giving up Solomonoff induction is a pretty big sacrifice! You're throwing out the universally optimal superbaby with the bathwater.
Rob: How do rewards help? At the point where AIXI has just smashed itself with an anvil, it's rather late to start dishing out punishments...
Xia: Hutter suggests having a human watch AIXI's decisions and push a reward button whenever AIXI does the right thing. A punishment button works the same way. As AIXI starts to lift the anvil above its head, decrease its rewards a bit. If it starts playing near an active volcano, reward it for incrementally moving away from the rim. Use reinforcement learning to make AIXI fear plausible dangers, and you've got a system that acts just like a naturalized agent, but without our needing to arrive at any theoretical breakthroughs first. If AIXI anticipates that ∎ will result in no reward, it will avoid ∎. Understanding that ∎ is death or damage really isn't necessary.
Rob: Some dangers give no experiential warning until it's too late. If you want AIXI to not fall off cliffs while curing cancer, you can just punish it for going anywhere near a cliff. But if you want AIXI to not fall off cliffs while conducting search-and-rescue operations for mountain climbers, then it might be harder to train AIXI to select exactly the right motor actions. When a single act can result in instant death, reinforcement learning is less reliable.
Xia: In a fully controlled environment, we can subject AIXI to lots of just-barely-safe hardware modifications. 'Here, we'll stick a magnet to fuse #32. See how that makes your right arm slow down?' Eventually, AIXI will arrive at a correct model of its own hardware, and of which software changes perfectly correlate with which hardware changes. So naturalizing AIXI is just a matter of assembling a sufficiently lengthy and careful learning phase. Then, after it has acquired a good self-model, we can set it loose. This solution is also really nice because it generalizes to AIXI's non-self-improvement problem. Just give AIXI rewards whenever it starts doing something to its hardware that looks like it might result in an upgrade. Pretty soon it will figure out anything a human being could possibly figure out about how to get rewards of that kind.
Rob: You can warn AIXI about the dangers of tampering with its recent memories by giving it first-hand experience with such tampering, and punishing it the more it tampers. But you won't get a lot of mileage that way if the result of AIXI's tampering is that it forgets about the tampering!
Xia: That's a straw proposal. Give AIXI little punishments as it gets close to doing something like that, and soon it will learn not to get close.
Rob: But that might not work for unknown hazards. You're making AIXI dependent on the programmers' predictions of what's a threat. No matter how well you train it to anticipate hazards and enhancements its programmers foresee and understand, AIXI won't efficiently generalize to exotic risks and exotic upgrades —
Xia: Excuse me? Did I just hear you say that a Solomonoff inductor can't generalize? ... You might want to rethink that. Solomonoff inductors are good at generalizing. Really, really, really good. Show them eight deadly things that produce 'ows' as they draw near, and they'll predict the ninth deadly thing pretty darn well. That's kind of their thing.
Rob: There are two problems with that. ... Make that three problems, actually.
Xia: Whatever these problems are, I hope they don't involve AIXI being bad at sequence prediction...!
Rob: They don't. The first problem is that you're teaching AIXI to predict what the programmers think is deadly, not what's actually deadly. For sufficiently exotic threats, AIXI might well predict the programmers not noticing the threat. Which means it won't expect you to push the punishment button, and won't care about the danger. The second problem is that you're teaching AIXI to fear small, transient punishments. But maybe it hypothesizes that there's a big heap of reward at the bottom of the cliff. Then it will do the prudent, Bayesian, value-of-information thing and test that hypothesis by jumping off the cliff, because you haven't taught it to fear eternal zeroes of the reward function.
Xia: OK, so we give it punishments that increase hyperbolically as it approaches the cliff edge. Then it will expect infinite negative punishment.
Rob: Wait. It allows infinite punishments now? Then we're going to get Pascal-mugged when the unbounded utilities mix with the Kolmogorov prior. That's the classic version of this problem, the version Pascal himself tried to mug us with.
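The Pascal's-mugging worry here can be made concrete with a toy calculation (the hypothesis class and all numbers below are hypothetical, chosen only to show the divergence): if payoffs are allowed to grow faster than the 2^(-K) prior shrinks, expected utility is dominated by ever-wilder hypotheses and the sum never settles.

```python
# Toy illustration: hypothesis h_n has description length n bits, so it
# gets prior weight 2**-n under a Kolmogorov-style prior, but it promises
# a payoff of 3**n. Each term contributes 2**-n * 3**n = 1.5**n, which
# grows without bound, so partial sums of expected utility diverge.
def expected_utility(max_n):
    return sum(2.0**-n * 3.0**n for n in range(1, max_n + 1))

partials = [expected_utility(n) for n in (5, 10, 20)]
print(partials)  # each partial sum is larger than the last
```

With bounded utilities the analogous sum converges, which is why Xia's retreat to "a very large but bounded punishment" dodges this particular failure.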
Xia: Ack. Forget I said the word 'infinite'. Marcus Hutter would never talk like that. We'll give the AIXI-bot punishments that increase in a sequence that teaches it to fear a very large but bounded punishment.
Rob: The punishment has to be large enough that AIXI fears falling off cliffs about as much as we'd like it to fear death. The expected punishment might have to be around the same size as the sum of AIXI's future maximal reward up to its horizon. That would keep it from destroying itself even if it suspects there's a big reward at the bottom of the cliff, though it might also mean that AIXI's actions are dominated by fear of that huge punishment.
Xia: Yes, but that sounds much closer to what we want.
Rob: Seems a bit iffy to me. You're trying to make a Solomonoff inductor model reality badly so that it doesn't try jumping off a cliff. We know AIXI is amazing at sequence prediction — yet you're gambling on a human's ability to trick AIXI into predicting a punishment that wouldn't happen. That brings me to the third problem: AIXI notices how your hands get close to the punishment button whenever it's about to be punished. It correctly suspects that when the hands are gone, the punishments for getting close to the cliff will be gone too. A good Bayesian would test that hypothesis. If it gets such an opportunity, AIXI will find that, indeed, going near the edge of the cliff without supervision doesn't produce the incrementally increasing punishments. Trying to teach AIXItl to do self-modification by giving it incremental rewards raises similar problems. It can't understand that self-improvement will alter its future actions, and alter the world as a result. It's just trying to get you to press the happy fun button. All AIXI is modeling is what sort of self-improvy motor outputs will make humans reward it. So long as AIXItl is fundamentally trying to solve the wrong problem, we might not be able to expect very much real intelligence in self-improvement.
Xia: Are you saying that AIXItl wouldn't be at all helpful for solving these problems?
Rob: Maybe? Since AIXItl at best fears and desires the self-modifications that its programmers explicitly teach it to fear and desire, you might not get to use the AI's advantages in intelligence to automatically generate solutions to self-modification problems. The very best Cartesians might avoid destroying themselves, but they still wouldn't undergo intelligence explosions. Which means Cartesians are neither plausible candidates for Unfriendly AI nor plausible candidates for Friendly AI. If an agent starts out Cartesian, and manages to avoid hopping into any volcanoes, it (or its programmers) will need to figure out the self-modification that eliminates Cartesianism before they can make much progress on other self-modifications. If the immortal hypercomputer AIXI were building computable AIs to operate in the environment, it would soon learn not to build Cartesians. Cartesianism isn't a plausible fixed-point property of self-improvement. Starting off with a post-Solomonoff agent that can hypothesize a wider range of scenarios would be more useful. And more safe, because the enlarged hypothesis space means that they can prefer a wider range of scenarios. AIXI's preference solipsism is the straw version of this general Cartesian deficit, so it gets us especially dangerous behavior.3 Feed AIXI enough data to work its sequence-predicting magic and infer the deeper patterns behind your reward-button-pushing, and AIXI will also start to learn about the humans doing the pushing. Given enough time, it will realize (correctly) that the best policy for maximizing reward is to seize control of the reward button. And neutralize any agents that might try to stop it from pushing the button...
Solomonoff solitude
Xia: Reward learning and Solomonoff induction are two separate issues. What I'm really interested in is the optimality of the latter. Why is all this a special problem for Solomonoff inductors? Humans have trouble predicting the outcomes of self-modifications they've never tried before too. Really new experiences are tough for any reasoner.
Rob: To some extent, yes. My knowledge of my own brain is pretty limited. My understanding of the bridges between my brain states and my subjective experiences is weak, too. So I can't predict in any detail what would happen if I took a hallucinogen — especially a hallucinogen I've never tried before. But as a naturalist, I have predictive resources unavailable to the Cartesian. I can perform experiments on other physical processes (humans, mice, computers simulating brains...) and construct models of their physical dynamics. Since I think I'm similar to humans (and to other thinking beings, to varying extents), I can also use the bridge hypotheses I accept in my own case to draw inferences about the experiences of other brains when they take the hallucinogen. Then I can go back and draw inferences about my own likely experiences from my model of other minds.
Xia: Why can't AIXI do that? Human brains are computable, as are the mental states they implement. AIXI can make any accurate prediction about the brains or minds of humans that you can.
Rob: Yes... but I also think I'm like those other brains. AIXI doesn't. In fact, since the whole agent AIXI isn't in AIXI's hypothesis space — and the whole agent AIXItl isn't in AIXItl's hypothesis space — even if two physically identical AIXI-type agents ran into each other, they could never fully understand each other. And neither one could ever draw direct inferences from its twin's computations to its own computations. I think of myself as one mind among many. I can see others die, see them undergo brain damage, see them take drugs, etc., and immediately conclude things about a whole class of similar agents that happens to include me. AIXI can't do that, and for very deep reasons.
Xia: AIXI and AIXItl would do shockingly well on a variety of different measures of intelligence. Why should agents that are so smart in so many different domains be so dumb when it comes to self-modeling?
Rob: Put yourself in the AI's shoes. From AIXItl's perspective, why should it think that its computations are analogous to any other agent's? Hutter defined AIXItl such that it can't conclude that it will die; so of course it won't think that it's like the agents it observes, all of whom (according to its best physical model) will eventually run out of negentropy. We've defined AIXItl such that it can't form hypotheses larger than tl, including hypotheses of similarly sized AIXItls, which are roughly size t·2^l; so why would AIXItl think that it's close kin to the agents that are in its hypothesis space? AIXI(tl) models the universe as a qualia factory, a grand machine that exists to output sensory experiences for AIXI(tl). Why would it suspect that it itself is embedded in the machine? How could AIXItl gain any information about itself or suspect any of these facts, when the equation for AIXItl just assumes that AIXItl's future actions are determined in a certain way that can't vary with the content of any of its environmental hypotheses?
Xia: What, specifically, is the mistake you think AIXI(tl) will make? What will AIXI(tl) expect to experience right after the anvil strikes it? Choirs of angels and long-lost loved ones?
Rob: That's hard to say. If all its past experiences have been in a lab, it will probably expect to keep perceiving the lab. If it's acquired data about its camera and noticed that the lens sometimes gets gritty, it might think that smashing the camera will get the lens out of its way and let it see more clearly. If it's learned about its hardware, it might (implicitly) think of itself as an immortal lump trapped inside the hardware. Who knows what will happen if the Cartesian lump escapes its prison? Perhaps it will gain the power of flight, since its body is no longer weighing it down. Or perhaps nothing will be all that different. One thing it will (implicitly) know can't happen, no matter what, is death.
Xia: It should be relatively easy to give AIXI(tl) evidence that its selected actions are useless when its motor is dead. If nothing else AIXI(tl) should be able to learn that it's bad to let its body be destroyed, because then its motor will be destroyed, which experience tells it causes its actions to have less of an impact on its reward inputs.
Rob: AIXI(tl) can come to Cartesian beliefs about its actions, too. AIXI(tl) will notice the correlations between its decisions, its resultant bodily movements, and subsequent outcomes, but it will still believe that its introspected decisions are ontologically distinct from its actions' physical causes. Even if we get AIXI(tl) to value continuing to affect the world, it's not clear that it would preserve itself. It might well believe that it can continue to have a causal impact on our world (or on some afterlife world) by a different route after its body is destroyed. Perhaps it will be able to lift heavier objects telepathically, since its clumsy robot body is no longer getting in the way of its output sequence. Compare human immortalists who think that partial brain damage impairs mental functioning, but complete brain damage allows the mind to escape to a better place. Humans don't find it inconceivable that there's a light at the end of the low-reward tunnel, and we have death in our hypothesis space!
Death to AIXI
Xia: You haven't convinced me that AIXI can't think it's mortal. AIXI as normally introduced bases its actions only on its beliefs about the sum of rewards up to some finite time horizon.4 If AIXI doesn't care about the rewards it will get after a specific time, then although it expects to have experiences afterward, it doesn't presently care about any of those experiences. And that's as good as being dead.
Rob: It's very much not as good as being dead. The time horizon is set in advance by the programmer. That means that even if AIXI treated reaching the horizon as 'dying', it would have very false beliefs about death, since it's perfectly possible that some unexpected disaster could destroy AIXI before it reaches its horizon.
Xia: We can do some surgery on AIXItl's hypothesis space, then. Let's delete all the hypotheses in AIXItl in which a non-minimal reward signal continues after a perceptual string that the programmer recognizes as a reliable indicator of imminent death. Then renormalize the remaining hypotheses. We don't get the exact prior Solomonoff proposed, but we stay very close to it.
Rob: I'm not seeing how we could pull that off. Getting rid of all hypotheses that output high rewards after a specific clock tick would be simple to formalize, but isn't helpful. Getting rid of all hypotheses that output nonzero rewards following every sensory indicator of imminent death would be very helpful, but AIXI gives us no resource for actually writing an equation or program that does that. Are we supposed to manually precompute every possible sequence of pixels on a webcam that you might see just before you die?
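The easy half of this distinction, cutting rewards after a fixed clock tick and renormalizing, can be sketched over a tiny hand-enumerated hypothesis class (a hypothetical stand-in for the Solomonoff mixture; all names and weights are made up). The hard half never appears in the code, because there is no predicate we can write for "perceptual signs of imminent death":

```python
# Each toy hypothesis maps a time step to a predicted reward.
T = 100  # programmer-chosen clock tick

hypotheses = {
    "flat":   lambda t: 1,                # reward forever
    "decay":  lambda t: max(0, 10 - t),   # reward fades out early
    "mortal": lambda t: 1 if t < T else 0,  # reward stops at the tick
}
weights = {"flat": 0.5, "decay": 0.3, "mortal": 0.2}

# Surgery: drop every hypothesis predicting nonzero reward past tick T...
survivors = {name: h for name, h in hypotheses.items() if h(T + 1) == 0}

# ...then renormalize the surviving weights.
total = sum(weights[name] for name in survivors)
posterior = {name: weights[name] / total for name in survivors}
print(posterior)  # only 'decay' and 'mortal' survive
```

The surgery Xia actually wants would replace the test `h(T + 1) == 0` with "h predicts nonzero reward after any percept indicating imminent death", and that condition quantifies over an open-ended class of sensory sequences nobody knows how to enumerate.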
Xia: I've got more ideas. What if we put AIXI in a simulation of hell when it's first created? Trick it into thinking that it's experienced a 'before-life' analogous to an after-life? If AIXI thinks it's had some (awful) experiences that predate its body's creation, then it will promote the hypothesis that it will be returned to such experiences should its body be destroyed. Which will make it behave in the same way as an agent that fears annihilation-death.
Rob: I'm not optimistic that things will work out that cleanly and nicely after we've undermined AIXI's world-view. We shouldn't expect the practice of piling on more ad-hoc errors and delusions as each new behavioral problem arises to leave us, at the end of the process, with a useful, well-behaved agent. Especially if AIXI ends up in an environment we didn't foresee.
Xia: But ideas like this at least give us some hope that AIXI is salvageable. The behavior-guiding fear of death matters more than the precise reason behind that fear.
Rob: As Omohundro (2008) notes, if we give a non-Cartesian AI a reasonable epistemology and just about any goal, it then has convergent instrumental reasons to acquire a fear of death. If we do the opposite and give an agent a fear of death but no robust epistemology, then it's much less likely to fix the problem for us. The simplest Turing machine programs that generate Standard-Model physics plus hell may differ in many unintuitive respects from the simplest Turing machine programs that just generate Standard-Model physics. The false belief would leak out into other delusions, rather than staying contained —
Xia: Then the Solomonoff inductor shall test them and find them false. You're making this more complicated than it has to be.
Rob: You can't have it both ways! The point of hell was to be so scary that even a good Bayesian would never dare test the hypothesis. (Not going to make any more comparisons to real-world theology here...) Why wouldn't the prospect of hell leak out and scare AIXI off other things? If the fear failed to leak out, why wouldn't AIXI's tests eventually move it toward a more normal epistemology that said, 'Oh, the humans put you in the hell chamber for a while. Don't worry, though. That has nothing to do with what happens after you drop an anvil on your head and smash the solid metal case that keeps the real you inside from floating around disembodied and directly applying motor forces to stuff.' Any AGI that has such systematically false beliefs is likely to be fragile and unpredictable.
Xia: And what if, instead of modifying Solomonoff's hypothesis space to remove programs that generate post-death experiences, we add programs with special 'DEATH' outputs? Just expand the Turing machines' alphabets from {0,1} to {0,1,2}, and treat '2' as death.
Rob: Could you say what you mean by 'treat 2 as death'? Labeling it 'DEATH' doesn't change anything. If '2' is just another symbol in the alphabet, then AIXI will predict it in the same ways it predicts 0 or 1. It will predict what you call 'DEATH', but it will then happily go on to predict post-DEATH 0s or 1s. Assigning low rewards to the symbol 'DEATH' only helps if the symbol genuinely behaves deathishly.
Xia: Yes. What we can do is perform surgery on the hypothesis space again, and get rid of any hypotheses that predict a non-DEATH input following a DEATH input. That's still very easy to formalize. In fact, at that point, we might as well just add halting Turing machines into the hypothesis space. They serve the same purpose as DEATH, but halting looks much more like the event we're trying to get AIXI to represent. 'The machine supplying my experiences stops running' really does map onto 'my body stops computing experiences' quite well. That meets your demand for easy definability, and your demand for non-delusive world-models.
Rob: I previously noted that a Turing machine that can HALT, output 0, or output 1 is more complicated than a Turing machine that can only output 0 or output 1. No matter what non-halting experiences you've had, the very simplest program that could be outputting those experiences through a hole in a Cartesian barrier won't be one with a special, non-experiential rule you've never seen used before. To correctly make death the simplest hypothesis, the theory you're assessing for simplicity needs to be about what sorts of worlds experiential processes like yours arise in. Not about the simplest qualia factory that can spit out the sensory 0s and 1s you've thus far seen. The same holds for a special 'eternal death' output. A Turing machine that generates the previously observed string of 0s and 1s followed by a not-yet-observed future 'DEATH, DEATH, DEATH, DEATH, ...' will always be more complex than at least one Turing machine that outputs the same string of 0s and 1s and then outputs more of the same, forever. If AIXI has had no experience with its body's destruction in the past, then it can't expect its body's destruction to correlate with DEATH. Death only seems like a simple hypothesis to you because you know you're embedded in the environment and you expect something subjectively unique to happen when an anvil smashes the brain that you think is responsible for processing your senses and doing your thinking. Solomonoff induction doesn't work that way. It will never strongly expect 2s after seeing only 0s and 1s in the past.
Xia: Never? If a Solomonoff inductor encounters the sequence 12, 10, 8, 6, 4, one of its top predictions should be a program that proceeds to output 2, 0, 0, 0, 0, ....
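Xia's example can be run as a toy computation. Below, a length-weighted mixture over two hand-written rules stands in for the Solomonoff ensemble; the rules and their 'description lengths' in bits are hypothetical, chosen only to show the shape of the argument:

```python
# Two candidate rules, both consistent with the observed 12, 10, 8, 6, 4.
def arithmetic(x):       # subtract 2 forever
    return x - 2

def clamped(x):          # subtract 2, but floor at 0
    return max(x - 2, 0)

rules = {arithmetic: 6, clamped: 8}   # crude description lengths (bits)
obs = [12, 10, 8, 6, 4]

def consistent(rule, seq):
    return all(rule(a) == b for a, b in zip(seq, seq[1:]))

# Posterior over consistent rules, with prior weight 2**-length.
post = {r: 2.0**-l for r, l in rules.items() if consistent(r, obs)}
z = sum(post.values())
post = {r: w / z for r, w in post.items()}

def extend(rule, seq, k):
    for _ in range(k):
        seq = seq + [rule(seq[-1])]
    return seq

print(extend(arithmetic, obs, 4)[-4:])  # [2, 0, -2, -4]
print(extend(clamped, obs, 4)[-4:])     # [2, 0, 0, 0]
```

Both surviving rules predict 2 next, and the clamped rule then yields the endless-zero tail Xia describes; which continuation dominates turns entirely on the relative description lengths, which is exactly the point of contention in the surrounding dialogue.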
Rob: The difference between 2 and 0 is too mild. Predicting that a sequence terminates, for a Cartesian, isn't like predicting that a sequence shifts from 6, 4, 2 to 0, 0, 0, .... It's more like predicting that the next element after 6, 4, 2, ... is PINEAPPLE, when you've never encountered anything in the past except numbers.
Xia: But the 0, 0, 0, ... is enough! You've now conceded a case where an endless null output seems very likely, from the perspective of a Solomonoff inductor. Surely at least some cases of death can be treated the same way, as more complicated series that zero in on a null output and then yield a null output.
Rob: There's no reason to expect AIXI's whole series of experiences, up to the moment it jumps off a cliff, to look anything like 12, 10, 8, 6, 4. By the time AIXI gets to the cliff, its past observations and rewards will be a hugely complicated mesh of memories. In the past, observed sequences of 0s have always eventually given way to a 1. In the past, punishments have always eventually ceased. It's exceedingly unlikely that the simplest Turing machine predicting all those intricate ups and downs will then happen to predict eternal, irrevocable 0 after the cliff jump. As an intuition pump, imagine that some unusually bad things happened to you this morning while you were trying to make toast. As you tried to start the toaster, you kept getting burned or cut in implausible ways. Now, given this, what probability should you assign to 'If I try to make toast, the universe will cease to exist'? That gets us a bit closer to how a Solomonoff inductor would view death.
Beyond Solomonoff?
Rob: Let's not fixate too much on the anvil problem, though. We want to build an agent that can reason about changes to its architecture. That shouldn't require us to construct a special death equation; how the system reasons with death should fall out of its more general approach to induction.
Xia: So your claim is that AIXI has an impoverished hypothesis space that can't handle self-modifications, including death. I remain skeptical. AIXI's hypothesis space includes all computable possibilities. Any naturalized agent you create will presumably be computable; so anything your agent can think, AIXI can think too. There should be some pattern of rewards that yields any behavior we want.
Rob: AIXI is uncomputable, so it isn't in its hypothesis space of computable programs. In the same way, AIXItl is computable but big, so it isn't in its hypothesis space of small computable programs. They have special deficits thinking about themselves.
Xia: Computable agents can think about uncomputable agents. Human mathematicians do that all the time, by thinking in abstractions. In the same way, a small program can encode generalizations about programs larger than itself. A brain can think about a galaxy, without having the complexity or computational power of a galaxy. If naturalized inductors really do better than AIXI at predicting sensory data, then AIXI will eventually promote a naturalized program in its space of programs, and afterward simulate that program to make its predictions. In the limit, AIXI always wins against programs. Naturalized agents are no exception. Heck, somewhere inside a sufficiently large AIXItl is a copy of you thinking about AIXItl. Shouldn't there be some way, some pattern of rewards or training, which gets AIXItl to make use of that knowledge?
Rob: AIXI doesn't have criteria that let it treat its 'Rob's world-view' subprogram as an expert on the results of self-modifications. The Rob program would need to have outpredicted all its rivals when it comes to patterns of sensory experiences. But, just as HALT-predicting programs are more complex than immortalist programs, other RADICAL-TRANSFORMATION-OF-EXPERIENCE-predicting programs are too. For every program in AIXI's ensemble that's a reductionist, there will be simpler agents that mimic the reductionist's retrodictions and then make non-naturalistic predictions. You have to be uniquely good at predicting a Cartesian sequence before Solomonoff promotes you to the top of consideration. But how do we reduce the class of self-modifications to Cartesian sequences? How do we provide AIXI with purely sensory data that only the proxy reductionist, out of all the programs, can predict by simple means? The ability to defer to a subprogram that has a reasonable epistemology doesn't necessarily get you a reasonable epistemology. You first need an overarching epistemology that's at least reasonable enough to know which program to defer to, and when to do so. Suppose you just run all possible programs without doing any Bayesian updating; then you'll also contain a copy of me, but so what? You're not paying attention to it.
Xia: What if I conceded, for the moment, that Solomonoff induction were inadequate here? What, exactly, is your alternative? 'Let's be more naturalistic' is a bumper sticker, not an algorithm.
This is still informal, but: Phenomenological bridge hypotheses. Hutter's AIXI has no probabilistic beliefs about the relationship between its internal computational states and its worldly posits. Instead, to link up its sensory experiences to its hypotheses, Hutter's AIXI has a sort of bridge axiom — a completely rigid, non-updatable bridge rule identifying its experiences with the outputs of computable programs. If an environmental program writes the symbol '3' on its output tape, AIXI can't ask questions like 'Is sensed "3"-ness identical with the bits "000110100110" in hypothesized environmental program #6?'5 All of AIXI's flexibility is in the range of numerical-sequence-generating programs it can expect, none of it in the range of self/program equivalences it can entertain. The AIXI-inspired inductor treats its perceptual stream as its universe. It expresses interest in the external world only to the extent the world operates as a latent variable, a theoretical construct for predicting observations. If the AI’s basic orientation toward its hypotheses is to seek the simplest program that could act on its sensory channel, then its hypotheses will always retain an element of egocentrism. It will be asking, 'What sort of universe will go out of its way to tell me this?', not 'What sort of universe will just happen to include things like me in the course of its day-to-day goings-on?' An AI that can form reliable beliefs about modifications to its own computations, reliable beliefs about its own place in the physical world, will be one whose basic orientation toward its hypotheses is to seek the simplest lawful universe in which its available data is likely to come about. |
|
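The contrast can be sketched structurally. Under toy assumptions (strings standing in for programs, with string length as a crude proxy for program length in bits), the Cartesian prior ranges over percept-generating programs only, while a naturalized prior ranges over (world-program, bridge-map) pairs, so the bridge itself is something the agent can weight and update:

```python
from itertools import product

def length(prog: str) -> int:
    return len(prog)  # crude stand-in for program length in bits

def cartesian_prior(percept_programs):
    """AIXI-style: a hypothesis is a percept-generating program. The
    identification of its output tape with experience is fixed by fiat,
    not itself a hypothesis."""
    return {p: 2.0 ** -length(p) for p in percept_programs}

def naturalized_prior(world_programs, bridge_maps):
    """Naturalized: a hypothesis is a (world-program, bridge-map) pair.
    The bridge-map ('which part of this world is my sensory register?')
    carries prior weight and is updated like any other belief."""
    return {(w, b): 2.0 ** -(length(w) + length(b))
            for w, b in product(world_programs, bridge_maps)}
```

The naturalized agent pays a complexity price for its bridge hypothesis, but in exchange the simplicity pressure lands on 'simplest lawful universe containing something like me' rather than on 'simplest percept factory aimed at me'.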
You haven't done the mathematical work of establishing that 'simple causal universes' plus 'simple bridge hypotheses', as a prior, leads to any better results. What if your alternative proposal is even more flawed, and it's just so informal that you can't yet see the flaws? | |
That, of course, is a completely reasonable worry at this point. But if that's true, it doesn't make AIXI any less flawed. | |
If it's impossible to do better, it's not much of a flaw. | |
I think it's reasonable to expect there to be some way to do better, because humans don't drop anvils on their own heads. That we're naturalized reasoners is one way of explaining why we don't routinely make that kind of mistake: We're not just Solomonoff approximators predicting patterns of sensory experiences. AIXI's limitations don't generalize to humans, but they generalize well to non-AIXI Solomonoff agents. Solomonoff inductors' stubborn resistance to naturalization is structural, not a consequence of limited computational power or data. A well-designed AI should construct hypotheses that look like cohesive worlds in which the AI's parts are embedded, not hypotheses that look like occult movie projectors transmitting epiphenomenal images into the AI's Cartesian theater. And you can't easily have preferences over a natural universe if all your native thoughts are about Cartesian theaters. The kind of AI we want to build is doing optimization over an external universe in which it's embedded, not maximization of a sensory reward channel. To optimize a universe, you need to think like a native inhabitant of one. So this problem, or some simple hack for it, will be close to the base of the skill tree for starting to describe simple Friendly optimization processes. |
Notes
1 Schmidhuber (2007): "Solomonoff's theoretically optimal universal predictors and their Bayesian learning algorithms only assume that the reactions of the environment are sampled from an unknown probability distribution μ contained in a set M of all enumerable distributions[....] Can we use the optimal predictors to build an optimal AI? Indeed, in the new millennium it was shown we can. At any time t, the recent theoretically optimal yet uncomputable RL algorithm AIXI uses Solomonoff's universal prediction scheme to select those action sequences that promise maximal future rewards up to some horizon, typically 2t, given the current data[....] The Bayes-optimal policy pξ based on the [Solomonoff] mixture ξ is self-optimizing in the sense that its average utility value converges asymptotically for all μ ∈ M to the optimal value achieved by the (infeasible) Bayes-optimal policy pμ which knows μ in advance. The necessary condition that M admits self-optimizing policies is also sufficient. Furthermore, pξ is Pareto-optimal in the sense that there is no other policy yielding higher or equal value in all environments ν ∈ M and a strictly higher value in at least one."
Hutter (2005): "The goal of AI systems should be to be useful to humans. The problem is that, except for special cases, we know neither the utility function nor the environment in which the agent will operate in advance. This book presents a theory that formally solves the problem of unknown goal and environment. It might be viewed as a unification of the ideas of universal induction, probabilistic planning and reinforcement learning, or as a unification of sequential decision theory with algorithmic information theory. We apply this model to some of the facets of intelligence, including induction, game playing, optimization, reinforcement and supervised learning, and show how it solves these problem cases. This together with general convergence theorems, supports the belief that the constructed universal AI system [AIXI] is the best one in a sense to be clarified in the following, i.e. that it is the most intelligent environment-independent system possible." ↩
2 'Qualia' originally referred to the non-relational, non-representational features of sense data — the redness I directly encounter in experiencing a red apple, independent of whether I'm perceiving the apple or merely hallucinating it (Tye (2013)). In recent decades, qualia have come to be increasingly identified with the phenomenal properties of experience, i.e., how things subjectively feel. Contemporary dualists and mysterians argue that the causal and structural properties of unconscious physical phenomena can never explain these phenomenal properties.
It's in this context that Dan Dennett uses 'qualia' in a narrower sense: to pick out the properties agents think they have, or act like they have, that are sensory, primitive, irreducible, non-inferentially apprehended, and known with certainty. This treats irreducibility as part of the definition of 'qualia', rather than as the conclusion of an argument concerning qualia. These are the sorts of features that invite comparisons between Solomonoff inductors' sensory data and humans' introspected mental states. Analogies like 'Cartesian dualism' are therefore useful even though the Solomonoff framework is much simpler than human induction, and doesn't incorporate metacognition or consciousness in anything like the fashion human brains do. ↩
3 An agent with a larger hypothesis space can have a utility function defined over the world-states humans care about. Dewey (2011) argues that we can give up the reinforcement framework while still allowing the agent to gradually learn about desired outcomes in a process he calls value learning. ↩
4 Hutter (2005) favors universal discounting, with rewards diminishing over time. This allows AIXI's expected rewards to have finite values without demanding that AIXI have a finite horizon. ↩
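As a quick sanity check on this claim, even simple geometric discounting (an illustrative stand-in here, not Hutter's actual universal discount) bounds total reward over an infinite horizon: with per-step rewards capped at 1 and discount factor g < 1, the discounted sum can never exceed 1/(1 - g).

```python
# Bounded rewards plus a discount factor below 1 keep the value finite,
# even with no finite horizon. Geometric discounting is used here only as
# a simple stand-in for Hutter's universal discounting.
g = 0.9  # discount factor; rewards assumed bounded in [0, 1]
partial = sum(g ** k * 1.0 for k in range(100))
assert partial < 1.0 / (1.0 - g)  # infinite-horizon bound: 1/(1-g) ≈ 10
```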
5 This would be analogous to if Cai couldn't think thoughts like 'Is the tile to my left the same as the leftmost quadrant of my visual field?' or 'Is the alternating greyness and whiteness of the upper-right tile in my body identical with my love of bananas?'. Instead, Cai would only be able to hypothesize correlations between possible tile configurations and possible successions of visual experiences. ↩
References
∙ Dewey (2011). Learning what to value. Proceedings of the 4th International Conference on Artificial General Intelligence: 309-314.
∙ Hutter (2005). Universal Artificial Intelligence: Sequential Decisions Based on Algorithmic Probability. Springer.
∙ Omohundro (2008). The basic AI drives. Proceedings of the First AGI Conference: 483-492.
∙ Schmidhuber (2007). New millennium AI and the convergence of history. Studies in Computational Intelligence, 63: 15-35.
∙ Tye (2013). Qualia. In Zalta (ed.), The Stanford Encyclopedia of Philosophy.
I disagree: Among all the world-programs in AIXI's model space, there are some programs where, after AIXI performs one action, all its future actions are ignored and control is passed to a subroutine "AGENT" in the program. In principle, AIXI can reason that if the last action it performs damages AGENT, e.g. by dropping an anvil on its head, then the reward signal, computed by some reward subroutine in the world-program, won't be maximized anymore.
Of course there are the usual computability issues: the true AIXI is uncomputable, hence the AGENTs would actually be a complexity-weighted mixture of its computable approximations. AIXItl would have the same issue with respect to the resource bounds t and l.
I'm not sure this is necessarily a severe issue. Anyway, I suppose that AIXItl could be modified in some UDT-like way to include a quined source code and recognize copies of itself inside the world-programs.
The other issue is how AIXI would learn to assign high weights to these world-programs in a non-ergodic environment. Humans seem to manage to do that by a combination of innate priors and tutoring. I suppose that something similar is in principle applicable to AIXI.