A putative new idea for AI control; index here.
This is a potential design for a safe-ish satisficer, based on the ideas presented here. The two key ideas are that a satisficer S(u) with utility u:
- Would not effectively aid M(u), a u-maximiser.
- Would not effectively resist M(-u), a u-minimizer.
So satisficers make poor allies and weak enemies.
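One way to state these two properties a little more precisely (this formalisation is my own gloss, not taken from the linked post): for some small δ ≥ 0,

```latex
% Hedged formalisation (my own notation): S(u) is a safe-ish satisficer if
% its presence barely changes what a u-maximiser or a u-minimiser achieves.
\[
  \mathbb{E}\!\left[u \mid M(u) \text{ aided by } S(u)\right]
    - \mathbb{E}\!\left[u \mid M(u) \text{ alone}\right] \le \delta
  \quad \text{(poor ally)}
\]
\[
  \mathbb{E}\!\left[u \mid M(-u) \text{ resisted by } S(u)\right]
    - \mathbb{E}\!\left[u \mid M(-u) \text{ unopposed}\right] \le \delta
  \quad \text{(weak enemy)}
\]
```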
The design, the designer and the verifier
Let M(u-v) be an agent that maximises u and minimises v. Let's assume that M(u-v) is currently unaware of what u and v are (though it knows that the upper bound of u is easy to approach). It can interact with the outside environment in one way only: by creating a satisficer S(). S() will then be fed u to become S(u).
However, the design for S(u) will first be sent to M(εu+v), for some unknown small ε. Note that M(εu+v) is essentially a v-maximiser, so it disagrees with M(u-v) about everything except u. M(εu+v) is a (hypothetical) agent that will be (counterfactually) released at some unspecified later date. Currently, it will be fed the whole reasoning history of M(u-v), and will make a single decision only: whether S(u) is to be built or not.
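To make the information flow concrete, here is a toy sketch of the designer/verifier protocol in Python. Everything below (the Design class, the scoring rules, the thresholds and numbers) is an illustrative assumption of mine; it only models the veto structure, not actual agents:

```python
# Toy model of the designer/verifier protocol. All classes, thresholds and
# numbers here are illustrative assumptions, not part of the proposal itself.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Design:
    """A candidate satisficer design, summarised by its predicted effect
    on expected u and expected v if it were built and run."""
    name: str
    delta: Dict[str, float]

def designer_proposes(design: Design) -> bool:
    """M(u - v): only proposes designs that get u close to its (easy to
    approach) maximum without increasing v."""
    return design.delta["u"] > 0.9 and design.delta["v"] <= 0.0

def verifier_approves(design: Design, eps: float = 0.01) -> bool:
    """M(eps*u + v): vetoes any design that noticeably decreases v; the
    tiny eps*u term keeps it from objecting to designs that merely raise u."""
    return eps * design.delta["u"] + design.delta["v"] >= -1e-6

def built_designs(candidates: List[Design]) -> List[Design]:
    """S(u) gets built only if the designer proposes it AND the verifier
    does not veto it, so it must raise u while leaving v roughly alone."""
    return [d for d in candidates if designer_proposes(d) and verifier_approves(d)]

if __name__ == "__main__":
    candidates = [
        Design("raise u by wrecking v",  {"u": 1.0,  "v": -5.0}),  # verifier vetoes
        Design("raise u, leave v alone", {"u": 0.95, "v": 0.0}),   # survives both
        Design("raise u and boost v",    {"u": 0.95, "v": 3.0}),   # designer rejects
    ]
    for d in built_designs(candidates):
        print("built:", d.name)
```

The point the toy captures is that the only designs surviving both filters are those that push u up while leaving v essentially untouched in either direction.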
So it seems that S(u) must:
- Set u close to its maximal value (as this is "easy to approach").
- Not increase v to any significant extent (or else M(u-v) would not design it).
- Not decrease v to any significant extent (or else M(εu+v) would veto it).
- Since v is unknown to M(u-v) and resources are finite, this should serve as a general reduced-impact requirement for S(u) (we may have to use something like a soft minimum across all v, rather than an expectation across all v, to avoid certain edge cases; see the toy sketch after this list).
- Since u is unknown to M(u-v), S() would serve as a general satisficing agent for any utility function whose upper bound is easy to approach (remember that we can take an arbitrary utility function and arbitrarily bound it at some number).
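A concrete way to picture the "soft minimum across all v" suggestion above (purely my own toy illustration, with made-up numbers): averaging impact over candidate v's lets a design that badly harms one v hide behind many v's it leaves untouched, while a soft minimum is dominated by the worst-off v.

```python
# Illustrative only: comparing "expectation across all v" with a soft
# minimum across all v, for a design whose impact has been summarised as
# one acceptability score per candidate v (higher = less impact on that v).
import math
from typing import Iterable

def soft_min(scores: Iterable[float], temperature: float = 0.1) -> float:
    """Smooth approximation of min(scores) via a log-sum-exp on the negated
    scores. As temperature -> 0 it approaches the hard minimum; as the
    temperature grows it approaches the plain mean."""
    scores = list(scores)
    m = max(-s / temperature for s in scores)  # shift for numerical stability
    lse = m + math.log(sum(math.exp(-s / temperature - m) for s in scores)
                       / len(scores))
    return -temperature * lse

# Toy scores: the design is nearly harmless under three candidate v's but
# badly harms a fourth.
impact_scores = [0.9, 0.95, 0.85, -2.0]

mean_score = sum(impact_scores) / len(impact_scores)  # ~0.17: looks acceptable
softmin_score = soft_min(impact_scores)               # ~-1.9: flags the harmed v

print(f"expectation across v: {mean_score:.2f}")
print(f"soft minimum across v: {softmin_score:.2f}")
```

The temperature parameter is hypothetical and just controls how sharply the soft minimum tracks the worst case.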
For the moment, this does seem like it would produce a successful satisficer...
Because then we could have a paperclip-making AI (or something similar) that doesn't break out and do stupid things all over the place.
That's indeed the case, but I wanted to emphasise the difference between how they treat u and how they treat v.
I'm not clear either, which is why this is an initial idea.
Alternatively, consider a case where Alice wants to bake a cake, and can either bake a simple cake or optimise the world into a massive cake baking machine. The idea here is that Alice will be stopped at some point along the way.
Not knowing v is supposed to help with these situations: without knowing the values you want to minimise harm to, your better option is to not do too much.
My intended point with that example was to question what it means for v to be at 0, 1, or -1. If v is defined to be always non-negative (something like "estimate the volume of the future that is 'different' in some meaningful way"), then flipping the direction of v makes sense. But if v is some measure of how happy Bob is, then flipping the direction of v means that we're trying to find a pl...