paulfchristiano comments on Concept Safety: Producing similar AI-human concept spaces - Less Wrong Discussion
That may or may not be a problem with the simplest version 1 of the idea, but it is not a problem in version 2, which imposes more realistic priors/constraints and also uses model pretraining on just state transitions to force differentiation of the model and reward functions.
Ok, I think we are kind of in agreement, but first let me recap where we are. This all started when I claimed that your 'easy IRL problem' - solve IRL given infinite compute and infinite perfect training data - is relatively easy and could probably be done in 100 lines of math. We both agreed that supervised learning (reproducing the training set - the modal human policy) would be obviously easy in this setting.
After that the discussion forked and got complicated - which, I realize in hindsight, stems from not clearly specifying what would count as success. So to be more clear: success of the IRL approach can be measured as improvement over supervised learning, as measured by the recovered utility function. Which of course leads to this whole other complexity - how do we know that is the 'true utility function'? Leave that aside for a second, and I'll get back to it.
I then brought up a concrete example of using IRL on a deep RL Atari agent. I described how learning the score function should be relatively straightforward, and this would allow an IRL agent to match the performance of the RL agent in this domain, which leads to better performance than the supervised/modal human policy.
You agreed with this:
So it seems we have agreed that IRL surpassing the modal human policy is clearly possible - at least in the limited domain of Atari.
If we already know the utility function a priori, then obviously IRL given the same resources can only do as well as RL. But that isn't that interesting, and remember IRL can do much more - as in the example of learning to maximize score while under other complex constraints.
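For concreteness, here is a minimal sketch of the kind of IRL loop I have in mind, in the spirit of Abbeel & Ng's feature-expectation matching. The toy MDP, one-hot features, and synthetic "expert" are all illustrative stand-ins, not a real Atari setup:

    import numpy as np

    n_states, n_actions, gamma = 16, 4, 0.95
    rng = np.random.default_rng(0)
    # Assumed-known toy dynamics P[s, a, s'] and one-hot state features.
    P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
    features = np.eye(n_states)

    def greedy_policy(reward, iters=200):
        """Value iteration, returning the greedy policy for a reward vector."""
        V = np.zeros(n_states)
        for _ in range(iters):
            Q = reward[:, None] + gamma * P @ V  # Q[s, a]
            V = Q.max(axis=1)
        return Q.argmax(axis=1)

    def feature_expectations(policy, start=0, horizon=100):
        """Discounted expected feature counts under a deterministic policy."""
        d = np.zeros(n_states)
        d[start] = 1.0
        mu = np.zeros(n_states)
        for t in range(horizon):
            mu += gamma ** t * (d @ features)
            d = sum(d[s] * P[s, policy[s]] for s in range(n_states))
        return mu

    # Stand-in for feature counts estimated from human demonstrations.
    expert = rng.integers(n_actions, size=n_states)
    mu_expert = feature_expectations(expert)

    w = np.zeros(n_states)  # linear reward weights to be recovered
    for _ in range(50):
        mu_learner = feature_expectations(greedy_policy(features @ w))
        w += 0.1 * (mu_expert - mu_learner)  # push reward toward expert behavior

The point is just that matching the demonstrator's discounted feature counts recovers a reward under which the demonstrator is near-optimal; the Atari case would swap the toy one-hot features for learned visual features.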
So in scaling up to more general problem domains, we have the issue of modelling mistakes - which you seem to be especially focused on - and the related issue of utility function uniqueness.
Versions 2 and later of my simple proto-proposal use more informed priors for the circuit complexity combined with pretraining the model on just observations to force differentiation of the model and utility functions. In the case of Atari, getting the utility function to learn the score should be relatively easy - as we know it is a simple immediate visual function.
This type of RL architecture can model humans' limited rationality by bounding the circuit complexity - at least that's the first step. We could get increasingly more accurate models of the human decision surface by incorporating more of the coarse abstract structure of the brain as a prior over our model space.
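As a rough illustration of the two-phase idea - pretraining on observations alone, then fitting a deliberately low-capacity utility function - here is a sketch under assumed interfaces (STATE_DIM, the network sizes, and the data are all hypothetical placeholders, not a worked-out design):

    import torch
    import torch.nn as nn

    STATE_DIM = 32  # illustrative

    # Phase 1: pretrain a dynamics model on raw (s, s') transitions only,
    # so model capacity is spent on the world, not on the reward.
    dynamics = nn.Sequential(
        nn.Linear(STATE_DIM, 128), nn.ReLU(), nn.Linear(128, STATE_DIM))
    dyn_opt = torch.optim.Adam(dynamics.parameters())

    def pretrain_step(s, s_next):
        dyn_opt.zero_grad()
        loss = nn.functional.mse_loss(dynamics(s), s_next)
        loss.backward()
        dyn_opt.step()
        return loss.item()

    s = torch.randn(64, STATE_DIM)       # a batch of observed states
    s_next = torch.randn(64, STATE_DIM)  # their successors (illustrative data)
    pretrain_step(s, s_next)

    # Phase 2: freeze the dynamics model and fit a deliberately tiny reward
    # head - its low capacity stands in for the prior on circuit complexity.
    for p in dynamics.parameters():
        p.requires_grad_(False)
    reward_head = nn.Linear(STATE_DIM, 1)  # bounded "circuit complexity"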
Ok, so backing up a bit:
For the full AGI problem, I am aware of a couple of interesting candidates for an intrinsic reward/utility function - the future freedom of action principle (power) and the compression progress measure (curiosity). If scaled up to superhuman intelligence, I think/suspect you would agree that both of these candidates are probably quite dangerous. On the other hand, they seem to capture some aspects of humans' intrinsic motivators, so they may be useful as subcomponents or features.
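For the compression progress measure, the core computation is tiny. A toy sketch, with an illustrative running-mean "world model" standing in for a real compressor:

    import numpy as np

    class RunningPredictor:
        """Toy world model: predicts observations with a running mean."""
        def __init__(self, dim, lr=0.1):
            self.mean = np.zeros(dim)
            self.lr = lr
        def loss(self, obs):
            return float(np.mean((obs - self.mean) ** 2))
        def update(self, obs):
            self.mean += self.lr * (obs - self.mean)

    def curiosity_reward(model, obs):
        """Compression progress: how much the model improves by learning obs."""
        before = model.loss(obs)
        model.update(obs)
        return before - model.loss(obs)

    model = RunningPredictor(dim=4)
    obs_stream = np.random.default_rng(1).normal(size=(100, 4))
    rewards = [curiosity_reward(model, obs) for obs in obs_stream]
    # Rewards shrink as the stream becomes predictable - "boredom".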
The IRL approach - if taken all the way - seems to require reverse engineering the brain. It could be that any successful route to safe superintelligence just requires this - because the class of agents that combine our specific complex unknown utility functions with extrapolated superintelligence necessarily can only be specified in reference to our neural architecture as a starting point.
This sounds really interesting and important (if true), but I have only a vague understanding of how you arrived at this conclusion. Please consider writing a post about it.
It's not so much a conclusion as an intuition, and most of the inferences leading up to it are contained in this thread with PaulChristiano and a related discussion with Kaj Sotala.
I'm interested in IRL and I think it's the most promising current candidate for value learning, but I must admit I haven't read much of the relevant literature yet. Reading up on IRL and writing a discussion post on it has been on my todo list - your comment just bumped it up a bit. :)
Another related issue is the more general question of how the training data/environment determines/shapes safety issues for learning agents.
My reaction when I first came across IRL is similar to this author's:
But maybe it's not a bad approach for solving a hard problem to first solve a very simplified version of it, then gradually relax the simplifying assumptions and try to build up to a solution of the full problem.
As a side note, that author's attempt at value learning is likely to suffer from the same problem Christiano brought up in this thread - there is nothing to enforce that the optimization process will actually nicely separate the reward and agent functionality. Doing that requires some more complex priors and/or training tricks.
The author's critique about limiting assumptions may or may not be true, but the author only quotes a single paper from the IRL field - and it's from 2000. That paper and its follow-up each have 500+ citations, and some of the newer work with IRL in the title is from 2008 or later. Also - most of the related research doesn't use IRL in the title - e.g. "Probabilistic reasoning from observed context-aware behavior".
This is actually the mainline successful approach in machine learning - scaling up. MNIST is a small 'toy' visual learning problem, but it led to CIFAR10/100 and eventually ImageNet. The systems that do well on ImageNet descend from the techniques that did well on MNIST decades ago.
MIRI/LW seems much more focused on starting with a top-down approach where you solve the full problem in an unrealistic model - given infinite compute - and then scale down by developing some approximation.
Compare MIRI/LW's fascination with AIXI vs the machine learning community. Searching for "AIXI" on r/machinelearning gets a single hit vs 634 results on lesswrong. Based on its citation count of around 150 or so, AIXI is a minor/average paper in ML (more minor than IRL), and doesn't appear to have led to great new insights in terms of fast approximations to bayesian inference (a very active field that connects mostly to ANN research).
MIRI is taking the top-down approach since that seems to be the best way to eventually obtain an AI for which you can derive theoretical guarantees. In the absence of such guarantees, we can't be confident that an AI will behave correctly when it's able to think of strategies or reach world states that are very far outside of its training and testing data sets. The price for pursuing such guarantees may well be slower progress in making efficient and capable AIs, with impressive and/or profitable applications, which would explain why the mainstream research community isn't very interested in this approach.
I tend to agree with MIRI that the top-down approach is probably safest, but since it may turn out to be too slow to make any difference, we should be looking at other approaches as well. If you're thinking about writing a post about recent progress in IRL and related ideas, I'd be very interested to see it.
I for one remain skeptical that such theoretical guarantees are possible in principle for the domain of general AI. The utility of formal math towards a domain tends to vary inversely with domain complexity. For example, in some cases it may be practically possible to derive formal guarantees about the full output space of a program, but not when that program is as complex as a modern video game, let alone a human. The equivalent of theoretical guarantees may be possible/useful for something like a bridge, but less so for an airplane or a city.
For complex systems, simulation is the key tool that enables predictions about future behavior.
This indeed would be a problem if the AI's training ever stopped, but I find this extremely unlikely. Some AI systems already learn continuously - whether using online learning directly or by just frequently patching the AI with the results of updated training data. Future AI systems will continue this trend - and learn continuously like humans.
Much depends on one's particular models for how the future of AI will pan out. I contend that AI does not need to be perfect, just better than humans. AI drivers don't need to make optimal driving decisions - they just need to drive better than humans. Likewise AI software engineers just need to code better than human coders, and AI AI researchers just need to do their research better than humans. And so on.
For the record, I do believe that MIRI is/should be funded at some level - it's sort of a moonshot, but one worth taking given the reasonable price. Mainstream opinion on the safety issue is diverse, and there are increasingly complex PR and career issues to consider. For example, corporations are motivated to downplay long term existential risks, and in the future will be motivated to downplay similarity between AI and human cognition to avoid regulation.
Cool - I'm working up to it.
Sure, but when it comes to learning values, I see a few problems even with continuous learning:
My point was that an AI could do well on test data, including simulations, but get tripped up at some later date (e.g., it over-confidently thinks that a certain world state would be highly desirable). Another way things could go wrong is that an AI learns wrong values, but does well in simulations because it infers that it's being tested and tries to please the human controllers in order to be released into the real world.
I generally agree that learning values correctly will be a challenge, but it's closely related to general AGI challenges.
I'm also reasonably optimistic that we will be able to reverse engineer the brain's value learning mechanisms to create agents that are safer than humans. Fully explaining the reasons behind that cautious optimism would require a review of recent computational neuroscience (the LW consensus on the brain is informed primarily by a particular narrow viewpoint from ev psych and the H&B literature, and this position is in substantial disagreement with the viewpoint from comp neuroscience).
Mostly agreed. However it is not clear that actively deferring to humans is strictly necessary. In particular, one route that circumvents most of these problems is testing value learning systems and architectures on a set of human-level AGIs confined to a virtual sandbox where the AGI does not know it is in a sandbox. This allows safe testing of designs to be used outside of the sandbox. The main safety control is knowledge limitation (which is something that MIRI has not considered much at all, perhaps because of their historical anti-machine learning stance).
The fooling CNN stuff does not show a particularly important failure mode for AI. These CNNs are trained only to recognize images in the sense of outputting a 10 bit label code for any input image. If you feed them a weird image, they just output the closest category. The fooling part (getting the CNN to misclassify an image) specifically requires implicitly reverse engineering the CNN and thus relies on the fact that current CNNs are naively deterministic. A CNN with some amount of random sampling based on a secure irreversible noise generator would not have this problem.
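A minimal sketch of that randomization defence. Here cnn_logits is an assumed forward function; the only real point is that the noise comes from OS entropy, so the attacker cannot replay the exact forward passes used to craft a fooling image:

    import os
    import numpy as np

    def noisy_classify(cnn_logits, image, sigma=0.05, n_samples=16):
        """Average logits over inputs perturbed with irreproducible noise."""
        rng = np.random.default_rng(int.from_bytes(os.urandom(16), "big"))
        logits = [cnn_logits(image + sigma * rng.standard_normal(image.shape))
                  for _ in range(n_samples)]
        return int(np.mean(logits, axis=0).argmax())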
This could be a problem, but even today our main technique to speed up AI learning relies more on parallelization than raw serial speedup. The standard technique involves training 128 to 1024 copies of the AI in parallel, all on different data streams. The same general technique would allow an AI to learn values from a large number of humans in parallel. This also happens to automatically solve some of the issues with value representativeness.
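The pattern is just the standard synchronous data-parallel update - a sketch with illustrative names (grad_fn and the per-replica batches, e.g. one per human teacher, are assumptions):

    import numpy as np

    def parallel_update(params, grad_fn, replica_batches, lr=0.01):
        """One synchronous step: each replica computes a gradient on its own
        data stream, then all gradients are averaged into a single update."""
        grads = [grad_fn(params, batch) for batch in replica_batches]
        return params - lr * np.mean(grads, axis=0)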
The current world is already exotic from the perspective of our recent ancestors. We already have some methods to investigate the interaction of our values with exotic future world states: namely our imagination, as realized in thought experiments and especially science fiction. AI could help us extend these powers.
This is just failure to generalize or overfitting, and how to avoid these problems is much of what machine learning is all about.
This failure requires a specific combination of: 1. the AI learns a good model of the world, but 2. learns a poor model of human values, 3. learns that it is in a sim, 4. wants to get out, and 5. the operators fail to ever notice any of 2 through 4.
Is this type of failure possible? Sure. But the most secure/paranoid type of safety model I envision is largely immune to that class of failures. In the most secure model, potentially unsafe new designs are constrained to human-level intelligence and grow up in a safe VR sim (medieval or earlier knowledge base). Designs which pass safety tests are then slowly percolated up to sims which are closer to the modern world. Each up-migration step is like reincarnation - a new AI is grown from a similar seed. The final designs (seed architectures rather than individual AIs) that pass this vetting/testing process will have more evidence for safety/benevolence/altruism than humans.
Sounds like another post to look forward to.
I think we'll need different methods to deal with future exoticness though. See this post for some of the reasons.
Do you envision biological humans participating in the VR sim, in order to let the AI learn values from them? If so, how to handle speed differences that may be up to a factor of millions (which you previously suggested will be the case)? The only thing I can think of is to slow the AI down to human speed for the training, which might be fine if your AI group has a big lead and you know there aren't any other AIs out there able to run at a million times human speed. Otherwise, even if you could massively parallelize the value learning and finish it in one day of real time, that could be giving a competitor a million days of subjective time (times how many parallel copies of the AI they can spawn) to make further progress in AI design and other technologies.
Safer than humans seems like a pretty low bar to me, given that I think most humans are terribly unsafe. :) But despite various problems I see with this approach, it may well be the best outcome that we can realistically hope for, if mainstream AI/ML continues to make progress at such a fast pace using designs that are hard to reason about formally.