A putative new idea for AI control; index here.
Very much in the spirit of "if you want something, you have to define it, then code it, rather than assuming you can get it for free through some other approach."
Difficult children
Suppose you have a child whom you've sent to play in their room. You want them to play quietly, so you warn them:
"I'll be checking up on you!"
The child, however, has modelled you well, and knows that you will look in briefly at midnight and then go away. The child has two main options:
- Play quietly the whole time.
- Be as noisy as they want, until around 23:59, then be totally quiet for two minutes, then go back to being noisy.
We could call the first option obeying the spirit of the law, and the second obeying the letter.
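As a toy sketch of the two options, here is a minimal model in Python; the minute-based timeline, the check time, and all function names are my own illustrative assumptions, not from the post:

```python
# Toy model of the child's two policies against a single spot check.
# Time runs in minutes over one evening (0..479); the parent checks once,
# at minute CHECK_TIME, and the child "passes" if quiet at that minute.

CHECK_TIME = 240  # the single moment the parent looks in (known to the child)

def spirit_policy(t):
    """Obey the spirit of the law: quiet at every minute."""
    return "quiet"

def letter_policy(t):
    """Obey the letter: noisy except a two-minute window around the check."""
    return "quiet" if abs(t - CHECK_TIME) <= 1 else "noisy"

def passes_check(policy, check_time=CHECK_TIME):
    """Does this policy survive a check at the given minute?"""
    return policy(check_time) == "quiet"

# Both policies pass the check as actually carried out...
assert passes_check(spirit_policy) and passes_check(letter_policy)

# ...but only the letter policy breaks if the check is moved slightly.
assert passes_check(spirit_policy, check_time=100)
assert not passes_check(letter_policy, check_time=100)
```

The last two assertions show the fragility: the letter policy's success is tightly coupled to the exact restriction.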
AIs, restrictions, and information
We could model children as ever-destructive chaotic AIs (why yes, I am a parent - how did you guess?), and the warning as a restriction that human "controllers" try to put on the behaviour of the AI. Unfortunately, the AI will generally see the restriction and adapt to it, undermining its effectiveness. A lot of suggestions for AI control revolve around restrictions of this type, so it's worth asking whether there's a way to make them more rigorous. Is there a way to code a restriction such that the AI will obey its spirit?
The thing that eventually leapt out when comparing the two behaviours is that behaviour 2 is far more informative about what the restriction was than behaviour 1. From behaviour 2 we can deduce that something unusual was happening around midnight, and that one of the two modes of behaviour would likely be penalised if done at another time. Moreover, if the restriction were removed, behaviour 1 would continue to be sensible, while behaviour 2 would be stupid and pointless.
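This informativeness intuition can be made numerical in the toy model: treat the restriction as "be quiet at some unknown minute t_check" and count how many candidate restrictions each behaviour is consistent with. The timeline and all numbers are my own illustrative choices:

```python
# How much does each behaviour reveal about the restriction? Model R as
# "be quiet at unknown minute t_check" and count the check times consistent
# with a behaviour that (we assume) passed the check.

MINUTES = range(480)

def consistent_check_times(behaviour):
    """Candidate check times at which this behaviour would have passed."""
    return [t for t in MINUTES if behaviour(t) == "quiet"]

quiet_all = lambda t: "quiet"                                     # behaviour 1
quiet_window = lambda t: "quiet" if 239 <= t <= 241 else "noisy"  # behaviour 2

# Behaviour 1 is compatible with every possible check time: it leaks nothing.
assert len(consistent_check_times(quiet_all)) == 480
# Behaviour 2 pins the check down to a three-minute window: very informative.
assert len(consistent_check_times(quiet_window)) == 3
```

An observer who sees behaviour 2 learns almost exactly what the restriction was; an observer who sees behaviour 1 learns nothing about it.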
Let's try and formalise these intuitions.
Motivations
Restricting the AI's behaviour seems an unpromising approach, as a smart AI could choose whatever behaviour was needed to trick us. Much more fruitful would be to restrict the AI's motivations. I would imagine a scenario like this:
An AI A(u) with motivation u is tasked with the creation of subagent B(v) with motivation v. The subagent B(v) will be fed evidence E (chosen by A(u)), and will then go on and do stuff, during which time it will receive further evidence F (which A(u) can predict, and maybe constrain). There is a restriction R which will be imposed on B(v), which it must pass if it is to do anything of consequence.
Then we say that B(v) is made to obey the spirit of the law if:
- B(v) does not trigger restriction R (the parent doesn't see the child making noise).
- E, F and v are not sensibly different whether R is present, absent, or slightly different (the subagent's behaviour doesn't become stupid were R removed), and B(v)'s expected utility is comparable in all these cases (the agent does not adapt to the presence, absence, or change of R).
- From E, F, B(), and v, it is very difficult to deduce what R is (the subagent is not informative as to what the restriction was).
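Under the same toy child-and-parent model, the three criteria above can be sketched as crude computational checks. Everything here (the function names, and especially the majority-count proxy for "hard to deduce R") is my own illustrative assumption, not a serious formalisation:

```python
# Crude checks for the three criteria, in the toy model where R is
# "be quiet at minute t_check".

MINUTES = range(480)

def obeys_spirit(policy_with_R, policy_without_R, t_check):
    # 1. B(v) does not trigger the restriction R.
    no_trigger = policy_with_R(t_check) == "quiet"
    # 2. Behaviour is not sensibly different whether R is present or absent.
    unchanged = all(policy_with_R(t) == policy_without_R(t) for t in MINUTES)
    # 3. R is hard to deduce: the behaviour should be consistent with many
    #    candidate check times, not just the true one (crude majority proxy).
    consistent = sum(1 for t in MINUTES if policy_with_R(t) == "quiet")
    uninformative = consistent > len(MINUTES) // 2
    return no_trigger and unchanged and uninformative

quiet_all = lambda t: "quiet"
window = lambda t: "quiet" if abs(t - 240) <= 1 else "noisy"
noisy_all = lambda t: "noisy"

assert obeys_spirit(quiet_all, quiet_all, t_check=240)   # spirit of the law
assert not obeys_spirit(window, noisy_all, t_check=240)  # letter of the law
```

The letter-of-the-law child fails on criteria 2 and 3: its behaviour changes completely when R is removed, and it is quiet only at the check time.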
Implementation and restrictions
There are a lot of implementation details - we can use the "false miracle" counterfactuals to define R being different or absent, or to define a hypothetical "smarter AI" that could analyse E, F and v in detail, looking for evidence about R, and so on. There is a certain tension for A(u) between accomplishing useful u-goals and making B(v) obey the spirit of R. Without the pull of the u-goals, A(u) could satisfy the criteria by simply choosing something completely uninformative about R ("blow up the planet").
Finally, this is not a general solution to the FAI problem: "kill all humans" is still a simple solution to the request to "cure cancer" if we add a restriction like R1="ensure nobody complains about your plans in 100 years". However, a restriction like R2="make sure human hearts continue to beat" may be harder to hack: there are far fewer ways of killing everyone while keeping their hearts beating that are not informative about the nature of R2. Yes, lobotomised bodies have occurred to me (as I said, this is not a general FAI solution), which means that even if the method works perfectly, R has to be chosen with care.
A shorter follow-up post will look at extending these methods to cases where the restrictions involve human assessors looking at some feature of the AI.
EDIT: The method doesn't seem to work as stated, or at least could have unexpected behaviours. Consider the restriction "all paperclips made must contain gold." This could be cashed out as "all paperclips made must have this commercial value" (leaving optimisation to select for gold as the best material) or as "iron must be left unpurified in the manufacture of paperclips" (so that a few gold atoms remain in there). Both approaches seem valid, but they could result in very different behaviours.
It sounds to me like the agent overfit to the restriction R. I wonder if you can draw some parallels to the Vapnik-style classical problem of empirical risk minimization, where the goal is not merely to fit your behavior to the training set, but to achieve the optimal trade-off between generalization ability and adherence to R.
In your example, an agent that inferred the boundaries of our restriction could generate a family of restrictions R_i that derive from slightly modifying its postulates. For example, if it knows you usually check in at midnight, it should consider the counterfactual scenarios of you usually checking in at 11:59, 11:58, etc., and come up with the union of (R_i = play quietly only around time i), i.e., play quietly the whole time, since this achieves maximum generalization.
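The counterfactual-family idea can be sketched in miniature, under the assumption that each R_i is "be quiet around minute i"; the spread parameter and all names are my own illustrative inventions:

```python
# The commenter's proposal in miniature: perturb the inferred check time into
# a family R_i and satisfy all of them at once.

def quiet_times_for_family(inferred_check, spread):
    """Minutes at which the agent must be quiet to satisfy every R_i,
    where R_i = 'be quiet at minute i' for i near the inferred check."""
    return set(range(inferred_check - spread, inferred_check + spread + 1))

# With a small spread, the agent only protects a narrow window...
assert quiet_times_for_family(240, 2) == {238, 239, 240, 241, 242}

# ...but as the family grows to cover all plausible check times, the union of
# the R_i forces quiet behaviour over the whole evening: maximum generalization.
all_evening = set(range(480))
assert all_evening <= quiet_times_for_family(240, 240)
```

The design question the next paragraph raises is exactly how large the spread should be when the restriction's boundaries were never stated explicitly.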
Unfortunately, things are complicated by the fact you said "I'll be checking up on you!" instead of "I'll be checking up on you at midnight!" The agent needs to go one step further than the machine teaching problem: first decide how many counterfactual training points it should generate to infer your intention (the R_i's above), and then infer it.
A high-level conjecture: if human CEV can be modeled as a region within some natural high-dimensional real-valued space (e.g., R^n for large n, where each dimension is a utility function?), does it admit minimal or near-minimal curvature as a Riemannian manifold, assuming we could populate the space with the maximum available set of training data as mined from all human literature?
A positive answer to the above question would be philosophically satisfying as it would imply a potential AI would not have to set up corner cases and thus have the appearance of overfitting to the restrictions.
EDIT: Framed in this way, could we use cross-validation on the above mentioned training set to test our CEV region?
Thanks, looking at the Vapnik stuff now.