I would like to propose an idea for aligning AI.
First, some motivation. Suppose you are a programmer having a really hard time implementing one function in a program you're developing. Most of the code is fine, but you can't figure out how to implement this one function correctly. Still, you need to run the program. So you do the following: you set a breakpoint inside the troublesome function, so that whenever execution reaches it, the program halts. When it does, you work out a reasonable return value v on your own. Finally, you type "return v" into your debugger, making the function return v, and then you resume execution.
As long as you can come up with reasonable return values on your own, I'd bet the above would make the program work pretty well. And why not? Everything outside that function is implemented well, and you are manually ensuring that the hard-to-implement function also returns reasonable values. So every function ends up doing what it's supposed to do.
My basic idea is to do this, but with the AI's utility function.
Now, you don't need to literally put a breakpoint in the AI's utility function and have the developers type into a debugger. Instead, inside the AI's utility function, you can have the AI pause execution, send a developer or other designated individual a message containing a description of a possible world, and then wait for a response. Once someone responds, the AI uses the returned value as the value of its utility function. That is, you could do something like:
def utility(outcome):
    # Send a human-readable description of the outcome to the AI's controllers.
    message_ai_controllers(make_readable(outcome))
    # Block until one of them replies with a rating.
    response = wait_for_controller_response()
    # Use their reply as the value of the utility function.
    return parse_utility(response)
(Error-handling code could be added if the returned utility is invalid.)
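As one hedged sketch of that error handling, suppose controllers reply with a plain-text number in an agreed range (the textual protocol and the `lo`/`hi` bounds here are my own assumptions, not part of the proposal). Then `parse_utility` could reject malformed replies, signalling the caller to simply re-send the query:

```python
def parse_utility(response, lo=0.0, hi=1.0):
    """Parse a controller's reply into a utility value.

    Returns None when the reply is not a number in [lo, hi],
    signalling the caller to re-send the query (or escalate).
    """
    try:
        value = float(response.strip())
    except (ValueError, AttributeError):
        return None  # missing or not parseable as a number
    if not lo <= value <= hi:
        return None  # outside the agreed range
    return value
```

For example, `parse_utility("0.7")` yields `0.7`, while `parse_utility("dunno")` and `parse_utility("7")` both yield `None`.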
Using the above utility function would, in theory at least, be equivalent to actually setting a breakpoint in the code and then manually returning the right value with a debugger.
You might imagine this AI would be incredibly inefficient, given how slowly people would answer its queries. With the right optimization algorithm, though, I'm not sure this would be much of a problem. The AI would have an extremely slow utility function, but I see no reason to think it's impossible to build an optimization algorithm that performs well even on extremely slow objective functions.
I'll provide one potential approach to making such an algorithm. The optimization algorithm would, from the known values of its objective function, learn fast approximations to it. The AI could then use these fast functions to come up with a plan that scores well on them. Finally, if necessary, the AI can query its (slow) objective function on the expected results of this plan, and afterwards update its fast approximations with what it has learned. The optimization algorithm could be designed so that whenever the AI is particularly unsure whether something would be desirable according to the objective function, it consults the actual (slow) objective function. It could also be programmed to do the same for any outcome with high impact or strategic significance.
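To make that loop concrete, here is a toy sketch of the surrogate-model idea. Everything in it is illustrative: the nearest-neighbour surrogate, the distance-based uncertainty estimate, and `slow_objective` standing in for the human-answered utility function are all my own simplifications, not part of the proposal.

```python
import random

def slow_objective(x):
    # Stand-in for the slow, human-answered utility function.
    return -(x - 3.0) ** 2

class NearestNeighborSurrogate:
    """Crude fast approximation: predict the value observed at the
    nearest previously-queried point, and use the distance to that
    point as a rough uncertainty estimate."""
    def __init__(self):
        self.known = {}  # x -> observed objective value

    def predict(self, x):
        nearest = min(self.known, key=lambda k: abs(k - x))
        return self.known[nearest], abs(nearest - x)

    def update(self, x, value):
        self.known[x] = value

def optimize(n_rounds=30, uncertainty_threshold=0.5, seed=0):
    rng = random.Random(seed)
    surrogate = NearestNeighborSurrogate()
    surrogate.update(0.0, slow_objective(0.0))  # one seed query
    n_slow_queries = 1
    for _ in range(n_rounds):
        # Propose many candidate plans cheaply and rank them with
        # the fast approximation, keeping the most promising one.
        candidates = [rng.uniform(-10, 10) for _ in range(50)]
        x = max(candidates, key=lambda c: surrogate.predict(c)[0])
        _, uncertainty = surrogate.predict(x)
        # Only pay for a slow query when the surrogate is unsure,
        # then fold the answer back into the approximation.
        if uncertainty > uncertainty_threshold:
            surrogate.update(x, slow_objective(x))
            n_slow_queries += 1
    # Best plan among those actually rated by the slow objective.
    best_x = max(surrogate.known, key=surrogate.known.get)
    return best_x, n_slow_queries
```

With 30 rounds the loop scores 1,500 candidate plans using the fast approximation, but sends at most 31 queries to the slow objective.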
My technique is intended to provide both outer alignment and corrigibility. By directly asking people about the desirability of outcomes, the AI would, if I'm reasoning correctly, be outer-aligned. And if the AI uses learned fast approximations to its utility function, the system also provides a degree of hard-coded corrigibility: the optimization algorithm is hard-coded to query the slow utility function at certain points and to update its fast models accordingly, which allows errors in the fast approximations to be corrected.
This is a good point and one that I, foolishly, hadn't considered.
However, it seems to me that there is a way around this: simply give the query-answerers the option to refuse to evaluate the utility of a described possible future. If they refuse, the AI's utility function returns no value for that future.
To see how this helps, note that if a description of a possible future world is too large for the human to understand, the human can refuse to assign it a utility.
Similarly, if the description doesn't specify the future in enough detail for the person to clearly tell whether the described outcome would be good, the person can also refuse to return a value.
For example, suppose you are making an AI designed to make paperclips, and the AI asks a person for the utility of the possible future described by "The AI makes a ton of paperclips". The person could refuse to answer, because the description is insufficient to pin down the quality of the outcome; for instance, it doesn't say whether or not Earth got destroyed.
Instead, a possible future would only be rated as high utility if its description says something like, "The AI makes a ton of paperclips, and the world isn't destroyed, and the AI doesn't take over the world, and no creatures get tortured anywhere in our Hubble sphere, and creatures in the universe are generally satisfied".
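As a toy sketch of the refusal mechanism (the sentinel value, the controller interface, and the crude keyword check are all hypothetical, purely to illustrate the control flow):

```python
REFUSE = object()  # sentinel: the controller declined to rate this future

def rate_future(description, controller):
    """Ask a controller to rate a described future.

    The controller may refuse (returning None), e.g. because the
    description is too long to understand or too vague to pin down
    how good the outcome is. The optimizer must then treat the
    future as unrated rather than assume any utility for it."""
    response = controller(description)
    if response is None:
        return REFUSE
    return float(response)

def cautious_controller(description):
    # Toy refusal policy: decline any description that doesn't
    # explicitly rule out the obvious catastrophe.
    if "world isn't destroyed" not in description:
        return None
    return 0.9
```

Under this policy, `rate_future("The AI makes a ton of paperclips", cautious_controller)` comes back as `REFUSE`, while the fuller description above would receive a rating.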
Does this make sense?
I, of course, could always be missing something.
(Sorry for the late response)