title: Training Garrabrant inductors to predict counterfactuals
author: 'Tsvi Benson-Tilsen'
...
The ideas in this post are due to Scott, me, and possibly others. Thanks to Nisan Stiennon for working through the details of an earlier version of this post with me.
Github pdf: https://github.com/tsvibt/public-pdfs/blob/master/decision-theory/training-counterfactuals/main.pdf
We will use the notation and definitions given in https://github.com/tsvibt/public-pdfs/blob/master/decision-theory/notation/main.pdf. Let $\overline{P}$ be a universal Garrabrant inductor and let $\overline{U} : \mathbb{N}^+ \to \mathrm{Expr}(2^\omega \to \mathbb{R})$ be a sequence of utility function machines. We will define an agent schema $(\mathrm{AU}_n)$.
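To fix intuitions, here is a minimal sketch, in Python, of the kinds of objects involved; the names are assumptions for illustration, not definitions from the notation document. A belief state assigns probabilities to length-$n$ bitstrings (finite approximations of worlds in $2^\omega$), and a utility function machine evaluates such bitstrings.

```python
# Illustrative type aliases only; these names are hypothetical, not from the post.
from typing import Callable, Dict

Bitstring = str                         # a length-n prefix of a world in 2^omega, e.g. "0110"
BeliefState = Dict[Bitstring, float]    # sigma -> probability; stands in for an element of Delta(2^omega)
Utility = Callable[[Bitstring], float]  # U_n, evaluated on length-n prefixes
Action = str                            # an element of Act
```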
We give a schema where each agent selects a single action with no observations. Roughly, $\mathrm{AU}_n$ learns how to get what it wants by computing what the $\mathrm{AU}_i$ with $i < n$ did, and also what various traders predicted would happen, given each action that the $\mathrm{AU}_i$ could have taken. The traders are rewarded for predicting what (counterfactually) would be the case in terms of bitstrings, and then their predictions are used to evaluate expected utilities of actions currently under consideration. This requires modifying our UGI and the traders involved to take a possible action as input, so that we get a prediction (a “counterfactual distribution over worlds”) for each action.
More precisely, define
$$\mathrm{AU}_n := \text{let } \hat{P}_n := \mathrm{Counterfactuals}(n) \text{ return } \operatorname{argmax}_{a \in \mathrm{Act}} \hat{\mathbb{E}}_n[a](U_n)$$
where $\hat{\mathbb{E}}_n[a](U_n) := \sum_{\sigma \in 2^n} \hat{P}_n[a](\sigma) \cdot U_n(\sigma)$. Here $\hat{P}_n$ is a dictionary of belief states, one for each action, defined by the function $\mathrm{Counterfactuals} : \mathbb{N}^+ \to (\mathrm{Act} \to \Delta(2^\omega))$ using recursion as follows:
input: $n \in \mathbb{N}^+$
output: a dictionary of belief states $P : \mathrm{Act} \to \Delta(2^\omega)$
initialize: $\mathrm{hist}_{n-1} \leftarrow$ array of belief states of length $n-1$
for $i \leq n-1$:
    $\hat{P}_i \leftarrow \mathrm{Counterfactuals}(i)$
    $a_i \leftarrow \operatorname{argmax}_{a \in \mathrm{Act}} \sum_{\sigma \in 2^i} \hat{P}_i[a](\sigma) \cdot U_i(\sigma)$
    $\mathrm{hist}_{n-1}[i] \leftarrow \hat{P}_i[a_i]$
for $(a : \mathrm{Act})$:
    $P[a] \leftarrow \mathrm{MarketMaker}(\mathrm{hist}_{n-1}, \mathrm{TradingFirm}'(a, a_{\leq n-1}, \mathrm{hist}_{n-1}))$
return $P$
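A rough, runnable Python sketch of this recursion follows. MarketMaker, TradingFirm′, and the utility $U_n$ are replaced by clearly marked stubs; the names, signatures, and stub behaviors are assumptions for illustration, not the actual logical-induction machinery.

```python
# Sketch of Counterfactuals and AU_n, assuming placeholder stubs for the
# logical-induction components (market_maker, trading_firm_prime, U).
from functools import lru_cache
from itertools import product

ACTIONS = ("left", "right")  # placeholder Act

def U(n, sigma):
    # Placeholder utility function machine U_n: reward 1-bits (purely illustrative).
    return sigma.count("1") / max(n, 1)

def trading_firm_prime(action, past_actions, hist):
    # Stub for TradingFirm': would aggregate the traders' strategies on day n,
    # given the proposed action and the history of actions actually taken.
    return None

def market_maker(hist, trades):
    # Stub for MarketMaker: returns a uniform belief state over 2^n,
    # where n = len(hist) + 1 is the current day.
    n = len(hist) + 1
    sigmas = ["".join(bits) for bits in product("01", repeat=n)]
    return {sigma: 1.0 / len(sigmas) for sigma in sigmas}

def expected_utility(belief, n):
    # \hat{E}_n[a](U_n) = sum over sigma in 2^n of P[a](sigma) * U_n(sigma)
    return sum(p * U(n, sigma) for sigma, p in belief.items())

@lru_cache(maxsize=None)
def counterfactuals(n):
    """Return the dictionary P : Act -> belief state over 2^n."""
    hist, past_actions = [], []
    for i in range(1, n):                       # replay days 1 .. n-1
        P_i = counterfactuals(i)
        a_i = max(ACTIONS, key=lambda a: expected_utility(P_i[a], i))
        past_actions.append(a_i)
        hist.append(P_i[a_i])                   # hist_{n-1}[i] <- P_i[a_i]
    return {a: market_maker(hist, trading_firm_prime(a, tuple(past_actions), hist))
            for a in ACTIONS}

def AU(n):
    # The agent on day n: argmax over actions of counterfactual expected utility.
    P_hat = counterfactuals(n)
    return max(ACTIONS, key=lambda a: expected_utility(P_hat[a], n))
```

The memoized recursion mirrors the replay of days $1$ through $n-1$ in the pseudocode: each past day's dictionary is recomputed, and only the belief state for the action actually taken enters $\mathrm{hist}_{n-1}$.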
Here, we use a modified form of traders and of the $\mathrm{TradingFirm}'$ function from the LIA algorithm given in the logical induction paper. In detail, let traders have the type $\mathbb{N}^+ \times \mathrm{Act} \to \text{trading strategy}$. On day $n$, traders are passed a possible action $a \in \mathrm{Act}$, which we interpret as “an action that $\mathrm{AU}_n$ might take”. Then each trader returns a trading strategy, and those trading strategies are used as usual to construct a belief state $P[a]$. We pass to $\mathrm{TradingFirm}'$ the full history $a_{\leq n-1}$ of the actions taken by the previous $\mathrm{AU}_i$, since $\mathrm{TradingFirm}'$ calls the $\mathrm{Budgeter}$ function; that function requires computing the traders' previous trading strategies, which requires passing the $a_i$ as arguments.
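As a sketch of this modified trader interface (again with hypothetical names and a placeholder trading-strategy type), a trader is now a function of both the day and a candidate action:

```python
# Illustrative only: the TradingStrategy type and example_trader are assumptions,
# not the interfaces from the logical induction paper.
from typing import Callable, Dict

TradingStrategy = Callable[[Dict[str, float]], Dict[str, float]]  # prices -> holdings

def example_trader(n: int, action: str) -> TradingStrategy:
    """A trader on day n, told which action AU_n is contemplating."""
    def strategy(prices: Dict[str, float]) -> Dict[str, float]:
        # Purely illustrative: trade nothing; a real trader would buy or sell
        # sentences based on n, the prices, and the proposed action.
        return {}
    return strategy
```

When $\mathrm{TradingFirm}'$ budgets a trader, it replays past days with the actual actions, i.e. it evaluates the trader at $(i, a_i)$, which is why the history $a_{\leq n-1}$ must be passed in.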
Thus, traders are evaluated based on the predictions they made about logic when given the actual action $a_n$ as input. In particular, the sequence $(P_n[a_n])$ is a UGI over the class of efficient traders given access to the actual actions taken by the agent $\mathrm{AU}_n$.
This scheme probably suffers from spurious counterfactuals, but feels like a natural baseline proposal.