Overall summary post here.
In a previous post, I asked which impact measures were vulnerable to subagents. Vika pointed out that it was not merely an issue of the impact measure, but also of the baseline.
This is indeed the case, but the nature of the impact measure is still relevant. In this post, I'll establish two facts: that under the stepwise inaction baseline, a subagent completely undermines all impact measures (including the twenty billion questions); and that under the inaction baseline, a subagent changes nothing for non-indexical impact measures.
The next post will delve into the very interesting things that happen with the inaction baseline and an indexical impact measure.
Stepwise inaction baseline
All impact measures are some form of distance measure between two states, of the form $D(s_t, s'_t)$, where $s_t$ is the state the agent is actually in, and $s'_t$ is some baseline state to compare with.
For the stepwise inaction baseline, $s'_t$ is calculated as follows. Let $s_{t-1}$ be the previous state and $a_{t-1}$ the previous action. Then $s'_t$ is the state that would have followed $s_{t-1}$ had the agent taken the noop action, $\varnothing$, instead of $a_{t-1}$.
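As a minimal sketch of this computation (the names `T`, `D`, and `NOOP` are hypothetical placeholders I'm assuming for illustration, not from the post: `T` is a deterministic transition function, `D` the distance measure):

```python
# Minimal sketch of the stepwise inaction penalty, assuming a deterministic
# transition function T(state, action), a distance measure D, and a noop
# action NOOP -- all hypothetical placeholders.

def stepwise_inaction_penalty(T, D, s_prev, action, NOOP):
    """Penalty for taking `action` in `s_prev` rather than the noop."""
    s_actual = T(s_prev, action)    # the state the agent is actually in
    s_baseline = T(s_prev, NOOP)    # the state had it taken the noop instead
    return D(s_actual, s_baseline)
```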
However, in the presence of a subagent, the agent merely has to always take the action $\varnothing$. In that case, $s_t = s'_t$ (in a deterministic setting[1]), and the penalty collapses to nothing.
This leaves the subagent free to maximise $R$ (the positive reward for agent and subagent), without any restrictions beyond making sure that the agent can always take the $\varnothing$ action.
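To see the collapse concretely, here is a toy deterministic environment (entirely hypothetical) in which the subagent, once built, acts on every step as part of the environment's dynamics. Since the agent itself only ever takes the noop, the actual state and the stepwise baseline state coincide, so the penalty is zero no matter what the subagent does:

```python
# Toy illustration of the collapse: the subagent's behaviour is baked into
# the (deterministic) dynamics, and the agent only ever takes NOOP.

NOOP = "noop"

def T(state, action):
    """Hypothetical transition: the subagent advances its plan every step,
    regardless of which action the agent takes."""
    agent_moves, subagent_progress = state
    moves = agent_moves + (0 if action == NOOP else 1)
    return (moves, subagent_progress + 1)  # subagent acts unconditionally

def D(s, s_prime):
    """Any distance measure: zero when the two states are identical."""
    return 0 if s == s_prime else 1

state = (0, 0)
for _ in range(10):
    s_actual = T(state, NOOP)    # the agent actually takes the noop...
    s_baseline = T(state, NOOP)  # ...which is also the baseline action
    assert D(s_actual, s_baseline) == 0  # penalty collapses to nothing
    state = s_actual
```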
Non-indexical impact penalty
The twenty billion questions measure is an example of a non-indexical impact penalty. Its impact penalty is defined as the difference in the expected values of many variables between the current state and the baseline state.
With the inaction baseline, $s'_t$ is the state the system would have been in had the agent always done nothing/never been turned on. Thus the definition of $s'_t$ is independent of the agent's actions, so the inaction baseline does not collapse the way the stepwise inaction baseline does.
What about subagents? Well, since the impact penalty is non-indexical, a subagent does not get around it. It matters not whether the subagent is independent or an extension of the agent: the impact penalty remains.
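A sketch of why (the list `variables` is a hypothetical stand-in for the twenty billion questions' value functions): the penalty is computed purely from the two states, so it cannot distinguish changes made by the agent from changes made by its subagent:

```python
# Sketch of a non-indexical penalty in the style of the twenty billion
# questions: sum the differences in the values of many world variables
# between the actual state and the inaction-baseline state.
# `variables` is a hypothetical list of functions from states to values.

def non_indexical_penalty(s_actual, s_inaction_baseline, variables):
    """The computation only mentions the two states -- never the agent --
    so acting through a subagent does not reduce the penalty."""
    return sum(abs(v(s_actual) - v(s_inaction_baseline)) for v in variables)
```

In a stochastic setting each term would be a difference of expectations over states rather than of point values; the sketch above is the deterministic special case.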
1. In a non-deterministic setting, $D(s_t, s'_t)$ becomes a mix of an impact penalty and a measure of environment stochasticity. ↩︎
2. The inaction rollouts effectively transform the stepwise inaction baseline into an inaction baseline (starting from the moment the subagent is created; thus the agent has a bit more control than under a true inaction baseline). Therefore the results on the inaction baseline apply (https://www.lesswrong.com/s/iRwYCpcAXuFD24tHh/p/M9aoMixFLf8JFLRaP). This means that restrictions on increased power for the agent ("make sure you never have the power to increase the rewards") become restrictions on the actual policy followed by the subagent ("make sure you never increase these rewards"). Roughly, attainable utility becomes twenty billion questions. For the original example, this means that the agent can neither press the red button nor gain the ability to teleport. But while the subagent cannot press the red button, it can gain the ability to teleport. ↩︎
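To make the footnote's transformation concrete, a sketch (same hypothetical `T` and `NOOP` placeholders as above): with inaction rollouts, the baseline compared against at time $t$ is obtained by rolling the noop policy forward from an earlier state, here the state at the step where the subagent was created, rather than by applying a single noop to the previous state:

```python
# Sketch of an inaction-rollout baseline: roll the noop policy forward from
# the state s_k at step k (e.g. when the subagent was created) for t - k
# steps, and compare the actual state at time t against the result.

def rollout_baseline(T, s_k, NOOP, num_steps):
    """State reached by taking the noop action for `num_steps` steps."""
    state = s_k
    for _ in range(num_steps):
        state = T(state, NOOP)
    return state
```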