Tyrrell_McAllister comments on UDT agents as deontologists - Less Wrong
How important is it that the list {P_i} be finite? If P_i is one of the programs in our initial list that we're uncertain about, couldn't there be infinitely many alternative programs P_i1, P_i2, . . . behind whatever we know about P_i?
I was thinking that incomplete knowledge about the P_i could be captured (within the formalism) with the mathematical intuition function. (Though it would then make less sense to call it a specifically mathematical intuition.)
I've added a description of UDT1.1 to my pdf.
In principle, it doesn't matter: you can represent a countable list of programs as a single program that takes an extra parameter (though then you'll need to be more careful about the notion of "execution histories"). More generally, you can just include all possible programs in the list, and express how much you care about the specific programs through the way the mathematical intuition ranks their probability and the way the utility function ranks their possible semantics.
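A toy sketch of this move (all names and behaviors are illustrative, not part of the original formalism): the whole countable family {P_i} becomes one program that dispatches on an extra index parameter, and the agent's uncertainty about "which P_i is really there" moves into how the mathematical intuition weights that index.

```python
# Hedged sketch: folding a countable family of world programs P_1, P_2, ...
# into a single program with an extra index parameter. The per-index
# behaviors below are toy stand-ins, not real world programs.

def world(i, step):
    """One program standing in for the whole list {P_i}:
    dispatch on the extra parameter i to recover the i-th program."""
    if i == 0:
        return ("P_0", step * 2)       # toy stand-in for P_0's behavior
    # in general, i indexes an enumeration of all candidate programs
    return ("P_%d" % i, step + i)      # toy stand-in for P_i's behavior
```

The agent's uncertainty over the P_i is then expressed by how the mathematical intuition weights values of i, rather than by the structure of the list itself.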
On execution histories: note that a program is a nice finite inductive definition of how that program behaves, while it's unclear what an "execution history" is, since it's an infinite object and so needs to be somehow finitely described. Also, if, as in the example above, the world program takes parameters (e.g. a universal machine that takes a Goedel number of a world program as a parameter), you'll have different executions depending on the parameter. But if you see a program as a set of axioms for a logical theory defining the program's behavior, then execution histories can just be different sets of axioms defining the program's behavior in a different way. These different sets of axioms could describe the same theory or different theories, and can include specific facts about what happens during program execution on such-and-such parameters. Equivalence of such theories will depend on what you assume about the agent (i.e. if you add different assumptions about the agent to the theories, you get different theories, and so different equivalences), which is what mathematical intuition is trying to estimate.
It's not accurate to describe strategies as mappings f: X -> Y. A strategy can be interactive: it takes an input, produces an output, and then the environment can prepare another input depending on that output, and so on. Think of normalization in lambda calculus. So the agent's strategy is specified by a program, but generally speaking this program is untyped.
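A minimal sketch of the distinction, with a toy agent and a toy environment (both rules are made up for illustration): because each next input is chosen only after the environment sees the agent's previous output, the interaction is a dialogue, not a single application of a map f: X -> Y.

```python
# Illustrative sketch: an interactive strategy unfolding against an
# environment that reacts to each output before producing the next input.

def agent_strategy(observation, memory):
    """One step of the agent: the output depends on the whole interaction
    so far (toy rule: parity of all observations seen)."""
    memory = memory + [observation]
    return sum(memory) % 2, memory

def environment_next_input(last_output):
    """The environment chooses the next input only after seeing the
    agent's last output (toy adversarial rule)."""
    return 1 if last_output == 0 else 0

def run_interaction(first_input, steps):
    memory, x = [], first_input
    transcript = []
    for _ in range(steps):
        y, memory = agent_strategy(x, memory)
        transcript.append((x, y))
        x = environment_next_input(y)
    return transcript
```

A single GLUT from first inputs to outputs can't record this back-and-forth; the transcript is built up one exchange at a time.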
Let's assume that there is a single world program, as described here. Then, if A is the agent's program known to the agent, B is one possible strategy for that program, given in form of a program, X is the world program known to the agent, and Y is one of the possible world execution histories of X given that A behaves like B, again given in form of a program, then mathematical intuition M(B,Y) returns the probability that the statement (A~B => X~Y) is true, where A~B stands for "A behaves like B", and similarly for X and Y. (This taps into the ambient control analysis of decision theory.)
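This definition of M(B, Y) could be caricatured as follows (a toy stand-in, not the actual proposal: genuine logical uncertainty is not a known prior over worlds): M is the total weight of the "worlds" in which the material implication (A~B => X~Y) holds.

```python
# Toy caricature of M(B, Y) as the probability of the implication
# A~B => X~Y under a made-up prior over four epistemically possible worlds.

worlds = [
    # each world fixes whether A behaves like B and whether X behaves like Y
    {"A~B": True,  "X~Y": True,  "prior": 0.4},
    {"A~B": True,  "X~Y": False, "prior": 0.1},
    {"A~B": False, "X~Y": True,  "prior": 0.2},
    {"A~B": False, "X~Y": False, "prior": 0.3},
]

def implication_probability(worlds):
    """Weight of worlds where the material implication A~B => X~Y holds,
    i.e. where A~B is false or X~Y is true."""
    return sum(w["prior"] for w in worlds if (not w["A~B"]) or w["X~Y"])
```

Note the characteristic feature of material implication: worlds where A does not behave like B contribute their full weight, which is why the counterfactual structure has to come from the proof-search process and not from the implication alone.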
I'm following this paragraph from Wei Dai's post on UDT1.1:
So, "input/output mappings" is Wei Dai's language. Does he not mean mappings between the set of possible inputs and the set of possible outputs?
It seems to me that this could be captured by the right function f: X -> Y. The set I of input-output mappings could be a big collection of GLUTs. Why wouldn't that suffice for Wei Dai's purposes?
ETA: And it feels weird typing out "Wei Dai" in full all the time. But the name looks like it might be Asian to me, so I don't know which part is the surname and which is the given name.
I've been wondering why people keep using my full name around here. Yes, the name is Chinese, but since I live in the US I follow the given-name-first convention. Feel free to call me "Wei".
No, you can't represent an interactive strategy by a single input-to-output mapping. That post made a step in the right direction, but stopped short of victory :-). But I must admit, I forgot about that detail in the second post, so you've correctly rendered Wei's algorithm, although using untyped strategies would further improve on it.
Why not?
BTW, in UDT1.1 (as well as UDT1), "input" consists of the agent's entire memory of the past as well as its current perceptions. Thought I'd mention that in case there's a misunderstanding there.
... okay, this question allowed me to make a bit of progress. Taking as a starting point the setting of this comment (that we are estimating the probability of (A~B => X~Y) being true, where A and X are respectively agent's and environment's programs, B and Y programs representing agent's strategy and outcome for environment), and the observations made here and here, we get a scheme for local decision-making.
Instead of trying to decide the whole strategy, we can just decide the local action. The agent's program, together with the "input" consisting of observations and memories, makes up the description of where the agent is in the environment, and thus where its control will be applied. The action that the agent considers can then be local, just something the agent does at this very moment, and the alternatives for this action are alternative statements about the agent: instead of considering a statement A~B for the agent's program A and various whole strategies B, we consider just predicates like action1(A) and action2(A), which assert that A chooses action 1 or action 2 in this particular situation, and which don't assert anything else about its behavior in other situations or on other counterfactuals. Taking into account other actions that the agent might have to make in the past or in the future happens automatically, because the agent works with a complete description of the environment, even if under severe logical uncertainty. Thus, decision-making happens "one bit at a time", and the agent's strategy mostly exists in the environment, not under the agent's direct control in any way, but still controlled in the same sense that everything in the environment is.
Thus, in the simplest case of a binary local decision, mathematical intuition would only take as explicit argument a single bit, which indicates what assertion is being made about [agent's program together with memory and observations], and that is all. No maps, no untyped strategies.
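Under these assumptions, a local binary decision could be sketched like this (the interface and all numbers are illustrative, not part of the original formalism): the intuition takes the single action bit and an outcome, and the agent compares the two resulting expected utilities.

```python
# Hedged sketch of "one bit at a time" decision-making: the agent compares
# just two statements about itself, action0(A) and action1(A), at its
# current location in the environment.

def expected_utility(action_bit, intuition, utility, outcomes):
    """intuition(action_bit, outcome) estimates the probability that
    asserting this action of the agent makes the given outcome hold."""
    return sum(intuition(action_bit, w) * utility(w) for w in outcomes)

def decide_locally(intuition, utility, outcomes):
    """Return whichever of the two local actions has higher expected utility."""
    u0 = expected_utility(0, intuition, utility, outcomes)
    u1 = expected_utility(1, intuition, utility, outcomes)
    return 0 if u0 >= u1 else 1

# Toy numbers: asserting action 1 makes the good outcome much more likely.
toy_intuition = lambda b, w: {(0, "good"): 0.2, (0, "bad"): 0.8,
                              (1, "good"): 0.9, (1, "bad"): 0.1}[(b, w)]
toy_utility = lambda w: 1.0 if w == "good" else 0.0
```

The rest of the agent's strategy never appears as an explicit object here; it lives inside whatever the intuition believes about the environment.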
This solution was unavailable to me when I thought about explicit control, because there the agent has to coordinate with itself, relying on what it can in fact decide in other situations and not on what it should optimally decide; but it's a natural step in the setting of ambient control, because the incorrect counterfactuals are completely banished from consideration, and the environment describes what the agent will actually do on other occasions.
Going back to the post "Explicit Optimization of Global Strategy", the agent doesn't need to figure out the global strategy! Each of the agent's copies is allowed to make its decision locally, while observing the other copy as part of the environment (in fact, it's the same problem as the "general coordination problem" I described on the DT list, back when I was clueless about this approach).
Well, that was my approach in UDT1, but then I found a problem that UDT1 apparently can't solve, so I switched to optimizing over the global strategy (and named that UDT1.1).
Can you re-read "Explicit Optimization of Global Strategy" and let me know what you think about it now? What I called "logical correlation" (using Eliezer's terminology) seems to be what you call "ambient control". The point of that post was that it seems an insufficiently powerful tool for even two agents with the same preferences to solve the general coordination problem amongst themselves, if they only explicitly optimize the local decision and depend on "logical correlation"/"ambient control" to implicitly optimize the global strategy.
If you think there is some way to get around that problem, I'm eager to hear it.
So far as I can see, your mistake was assuming "symmetry", and dropping probabilities. There is no symmetry: only one of the possibilities is what will actually happen, and the other (which I'm back to believing since the last post on the DT list) is inconsistent, though you are unlikely to be able to actually prove any such inconsistency. You can't say that since (S(1)=A => S(2)=B), therefore (S(1)=B => S(2)=A). One of the counterfactuals is inconsistent, so if S(1) is in fact A, then S(1)=B implies anything. But what you are dealing with are probabilities of these statements (which possibly means proof search schemes trying to prove these statements while making a certain number of elementary assumptions, a number that plays the role of the length of programs in the universal probability distribution). These probabilities will paint a picture of what you expect the other copy to do, depending on what you do, and this doesn't at all have to be symmetric.
If there is to be no symmetry between "S(1)=A => S(2)=B" and "S(1)=B => S(2)=A", then something in the algorithm has to treat the two cases differently. In UDT1 there is no such thing to break the symmetry, as far as I can tell, so it would treat them symmetrically and fail on the problem one way or another. Probabilities don't seem to help since I don't see why UDT1 would assign them different probabilities.
If you have an idea how the symmetry might be broken, can you explain it in more detail?
I think that Vladimir is right if he is saying that UDT1 can handle the problem in your Explicit Optimization of Global Strategy post.
With your forbearance, I'll set up the problem in the notation of my write-up of UDT1.
There is only one world-program P in this problem. The world-program runs the UDT1 algorithm twice, feeding it input "1" on one run, and feeding it input "2" on the other run. I'll call these respective runs "Run1" and "Run2".
The set of inputs for the UDT1 algorithm is X = {1, 2}.
The set of outputs for the UDT1 algorithm is Y = {A, B}.
There are four possible execution histories for P:
E, in which Run1 outputs A, Run2 outputs A, and each gets $0.
F, in which Run1 outputs A, Run2 outputs B, and each gets $10.
G, in which Run1 outputs B, Run2 outputs A, and each gets $10.
H, in which Run1 outputs B, Run2 outputs B, and each gets $0.
The utility function U for the UDT1 algorithm is defined as follows:
U(E) = 0.
U(F) = 20.
U(G) = 20.
U(H) = 0.
Now we want to choose a mathematical intuition function M so that Run1 and Run2 don't give the same output. This mathematical intuition function does have to satisfy a couple of constraints:
For each choice of input x and output y, the function M(x, y, –) must be a normalized probability distribution on {E, F, G, H}.
The mathematical intuition needs to meet certain minimal standards to deserve its name. For example, we need to have M(1, B, E) = 0. The algorithm should know that P isn't going to execute according to E if the algorithm returns B on input 1.
But these constraints still leave us with enough freedom in how we set up the mathematical intuition. In particular, we can set
M(1, A, F) = 1, and all other values of M(1, A, –) equal to zero;
M(1, B, H) = 1, and all other values of M(1, B, –) equal to zero;
M(2, A, E) = 1, and all other values of M(2, A, –) equal to zero;
M(2, B, F) = 1, and all other values of M(2, B, –) equal to zero.
Thus, in Run1, the algorithm computes that, if it outputs A, then execution history F would transpire, so the agent would get utility U(F) = 20. But if Run1 were to output B, then H would transpire, yielding utility U(H) = 0. Therefore, Run1 outputs A.
Similarly, Run2 computes that its outputting A would result in E, with utility 0, while outputting B would result in F, with utility 20. Therefore, Run2 outputs B.
Hence, execution history F transpires, and the algorithm reaps $20.
ETA: And, as a bonus, this mathematical intuition really makes sense. For, suppose that we held everything equal, except that we do some surgery so that Run1 outputs B. Since everything else is equal, Run2 is still going to output B. And that really would put us in history H, just as Run1 predicted when it evaluated M(1, B, H) = 1.
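This worked example can be transcribed directly into code; the only stand-in is representing the mathematical intuition M as a lookup table holding the distributions chosen above.

```python
# Transcription of the worked example: the UDT1 algorithm run with input 1
# and input 2, using the stated utility function U and intuition M.

HISTORIES = ["E", "F", "G", "H"]
U = {"E": 0, "F": 20, "G": 20, "H": 0}

# M[(input, output)] is a probability distribution over execution histories.
M = {
    (1, "A"): {"E": 0, "F": 1, "G": 0, "H": 0},
    (1, "B"): {"E": 0, "F": 0, "G": 0, "H": 1},
    (2, "A"): {"E": 1, "F": 0, "G": 0, "H": 0},
    (2, "B"): {"E": 0, "F": 1, "G": 0, "H": 0},
}

def udt1(x):
    """Pick the output with the highest expected utility on input x."""
    def eu(y):
        return sum(M[(x, y)][h] * U[h] for h in HISTORIES)
    return max(["A", "B"], key=eu)
```

Run1 (input 1) outputs A expecting history F, Run2 (input 2) outputs B also expecting F, so F transpires and the two runs collect $10 each.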
The symmetry is broken by "1" being different from "2". The probabilities express logical uncertainty, and so essentially depend on what happens to be provable given finite resources and epistemic state of the agent, for which implementation detail matters. The asymmetry is thus hidden in mathematical intuition, and is not visible in the parts of UDT explicitly described.
...but on the other hand, you don't need the "input" at all, if decision-making is about figuring out the strategy. You can just have a strategy that produces the output, with no explicit input. The history of input can remain implicit in the agent's program, which is available anyway.
Good; that was my understanding.
Yes, that works too. On second thought, extracting the output in this exact manner, while pushing everything else into the "input", allows one to pose a problem specifically about the output in this particular situation, so as to optimize the activity of figuring out this output, rather than the whole strategy, of which right now you only need this aspect and no more.
Edit: Though, you don't need "input" to hold the rest of the strategy.
I was having trouble understanding what strategy couldn't be captured by a function X -> Y. After all, what could possibly determine the output of an algorithm other than its source code and whatever input it remembers getting on that particular run? Just to be clear, do you now agree that every strategy is captured by some function f: X -> Y mapping inputs to outputs?
One potential problem is that there are infinitely many input-output mappings. The agent can't assume a bound on the memory it will have, so it can't assume a bound on the lengths of the inputs that it will someday need to plug into an input-output mapping f.
Unlike the case where there are potentially infinitely many programs P1, P2, . . ., it's not clear to me that it's enough to wrap up an infinite set I of input-output mappings into some finite program that generates them. This is because the UDT1.1 agent needs to compute a sum for every element of I. So, if the set I is infinite, the number of sums to be computed will be infinite. Having a finite description of I won't help here, at least not with a brute-force UDT1.1 algorithm.
Any infinite thing in any given problem statement is already presented to you with a finite description. All you have to do is transform that finite description of an infinite object so as to get a finite description of a solution of your problem posed about the infinite object.
Right. I agree.
But, to make Wei's formal description of UDT1.1 work, there is a difference between
dealing with a finite description of an infinite execution history Ei and
dealing with a finite description of an infinite set I of input-output maps.
The difference is this: The execution histories only get fed into the utility function U and the mathematical intuition function (which I denote by M). These two functions are taken to be black boxes in Wei's description of UDT1.1. His purpose is not to explain how these functions work, so he isn't responsible for explaining how they deal with finite descriptions of infinite things. Therefore, the potential infinitude of the execution histories is not a problem for what he was trying to do.
In contrast, the part of the algorithm that he describes explicitly does require computing an expected utility for every input-output map and then selecting the input-output map that yielded the largest expected utility. Thus, if I is infinite, the brute-force version of UDT1.1 requires the agent to find a maximum from among infinitely many expected utilities. That means that the brute-force version just doesn't work in this case. Merely saying that you have a finite description of I is not enough to say in general how you are finding the maximum from among infinitely many expected utilities. In fact, it seems possible that there may be no maximum.
Actually, in both UDT1 and UDT1.1, there is a similar issue with the possibility of having infinitely many possible execution-history sequences <E1, E2, . . .>. In both versions of UDT, you have to perform a sum over all such sequences. Even if you have a finite description of the set E of such sequences, a complete description of UDT still needs to explain how you are performing the sum over the infinitely many elements of the set. In particular, it's not obvious that this sum is always well-defined.
...but the action could be a natural number, no? It's entirely OK if there is no maximum; the available computational resources then limit how good a strategy the agent manages to implement ("Define as big a natural number as you can!"). The "algorithm" is descriptive: it's really a definition of optimality of a decision, not a specification of how this decision is to be computed. You can sometimes optimize infinities away, and you can almost always find a finite approximation that gets better with more resources and ingenuity.
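A sketch of this point, with a toy utility function (made up for illustration): when the actions are natural numbers and the utility has no maximum, a resource budget determines how good an action the agent finds, and more resources never make the answer worse.

```python
# Hedged sketch: resource-bounded approximation of an optimum that does not
# exist. The utility function is a toy stand-in with no maximum.

def best_action_within_budget(utility, budget):
    """Search actions 0, 1, ..., budget-1 and keep the best found so far;
    a larger budget can only improve (or preserve) the result."""
    best_a, best_u = 0, utility(0)
    for a in range(1, budget):
        u = utility(a)
        if u > best_u:
            best_a, best_u = a, u
    return best_a, best_u

# A utility with no maximum: u(a) = a / (a + 1) increases forever toward 1.
toy_u = lambda a: a / (a + 1)
```

With budget 10 the agent settles for action 9; with budget 100, action 99; no budget reaches a true optimum, because there isn't one.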