Tyrrell_McAllister comments on UDT agents as deontologists - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
To the agent's builders.
ETA: I make that clear later in the post, but I'll add it to the intro paragraph.
I'm not sure what you mean. What I'm describing as coded into the agent "from birth" is Wei Dai's function P, which takes an output string Y as its argument (using subscript notation in his post).
ETA: Sorry, that is not right. To be more careful, I mean the "mathematical intuition" that takes in an input X and returns such a function P. But P isn't controlled by the agent's decisions.
ETA2: Gah. I misremembered how Wei Dai used his notation. And when I went back to the post to answer your question, I skimmed too quickly and misread.
So, final answer, when I say that "the agent always cares about all possible worlds according to how probable those worlds seemed to the agent's builders when they wrote the agent's source code", I'm talking about the "preference vector" that Wei Dai denotes by "<E1, E2, . . . >" and which he says "defines its preferences on how those programs should run."
I took him to be thinking of these entries Ei as corresponding to probabilities because of his post What Are Probabilities, Anyway?, where he suggests that "probabilities represent how much I care about each world".
ETA3: Nope, this was another misreading on my part. Wei Dai does not say that <E1, E2, . . . > is a vector of preferences, or anything like that. He says that it is an input to a utility function U, and that utility function is what "defines [the agent's] preferences on how those programs should run". So, what I gather very tentatively at this point is that the probability of each possible world is baked into the utility function U.
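On this tentative reading, a utility function with the builders' world-probabilities baked in might look like the following sketch. (All names here — `pr`, `u`, `make_U`, and the string encoding of histories — are hypothetical illustrations, not Wei Dai's notation.)

```python
# Hypothetical sketch: a UDT-style utility function U over a vector of
# execution histories <E1, E2, ...>, with the builders' credence in each
# world program P_i baked into U as a fixed weight pr[i].

def make_U(pr, u):
    """pr[i]: fixed weight ("probability") of world program P_i;
    u(i, E): how much the agent values P_i undergoing history E."""
    def U(histories):
        # histories[i] is the execution history E_i of world program P_i
        return sum(pr[i] * u(i, E) for i, E in enumerate(histories))
    return U

# Toy usage: two possible worlds, histories encoded as strings,
# with a toy valuation u that just counts history length.
U = make_U(pr=[0.9, 0.1], u=lambda i, E: len(E))
print(U(["abc", "de"]))  # approximately 2.9 (0.9*3 + 0.1*2)
```

The point of the sketch is only that, on this reading, the weights pr[i] are constants of U rather than something the agent updates.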
Do you see that these E's are not intended to be interpreted as probabilities here, and so the "probabilities of possible worlds are fixed at the start" remark at the beginning of your post is wrong?
Yes.
I realize that my post applies only to the kind of UDT agent that Wei Dai talks about when he discusses what probabilities of possible worlds are. See the added footnote.
It's still a misinterpretation of Wei Dai's discussion of probability. What you described is not UDT, and not even a decision theory: say, what is U(<E1,E2,...>) for? It's not the utility of the agent's decision. When Wei Dai discusses probability in the post you linked, he still means it in the same sense as it is used in decision theories, but makes informal remarks about what those values, say, P_Y(...), seem to denote. From the beginning of the post:
Weights assigned to world-histories, not worlds. Totally different. (Although Wei Dai doesn't seem to consistently follow the distinction in terminology himself, it begins to matter when you try to express things formally.)
Edit: this comment is wrong, see correction here.
I have added a link (pdf) to a complete description of what a UDT algorithm is. I am confident that there are no "misinterpretations" there, but I would be grateful if you pointed out any that you perceive.
I believe it is an accurate description of UDT as presented in the original post, although incomplete knowledge about the P_i can be accommodated without changing the formalism by including in the list {P_i} all the alternatives (completely described this time) that are compatible with the available knowledge about the corresponding world programs (which is the usual reading of "possible world"). Also note that in this post Wei Dai corrected the format of the decisions from individual input/output instances to global strategy selection.
How important is it that the list {P_i} be finite? If P_i is one of the programs in our initial list that we're uncertain about, couldn't there be infinitely many alternative programs P_i1, P_i2, . . . behind whatever we know about P_i?
I was thinking that incomplete knowledge about the P_i could be captured (within the formalism) with the mathematical intuition function. (Though it would then make less sense to call it a specifically mathematical intuition.)
I've added a description of UDT1.1 to my pdf.
In principle, it doesn't matter, because you can represent a countable list of programs as a single program that takes an extra parameter (but then you'll need to be more careful about the notion of "execution histories"), and more generally you can just include all possible programs in the list and express how much you care about specific programs through the way the mathematical intuition ranks their probability and the way the utility function ranks their possible semantics.
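The folding of a countable list into a single parametrized program can be sketched like this (a toy stand-in; the index argument plays the role of the Goedel number fed to a universal machine, and the programs themselves are hypothetical):

```python
# Sketch: a countable family of world programs {P_i} folded into a
# single program P that takes an extra index parameter, as a toy
# stand-in for a universal machine taking a Goedel number.

def P1():
    return "history of world 1"

def P2():
    return "history of world 2"

programs = [P1, P2]  # the original list {P_i}

def P(i):
    # One program; which world it simulates depends on the parameter.
    return programs[i]()

print(P(0))  # same result as calling P1() directly
```

As the comment notes, the price of this move is that "execution histories" now vary with the parameter, so they need more careful handling.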
On execution histories: note that a program is a nice finite inductive definition of how that program behaves, while it's unclear what an "execution history" is, since it's an infinite object and so it needs to be somehow finitely described. Also, if, as in the example above, you have the world program taking parameters (e.g. a universal machine that takes a Goedel number of a world program as a parameter), you'll have different executions depending on the parameter. But if you see a program as a set of axioms for a logical theory defining the program's behavior, then execution histories can just be different sets of axioms defining the program's behavior in a different way. These different sets of axioms could describe the same theories or different theories, and can include specific facts about what happens during program execution on such-and-such parameters. Equivalence of such theories will depend on what you assume about the agent (i.e. if you add different assumptions about the agent to the theories, you get different theories, and so different equivalences), which is what mathematical intuition is trying to estimate.
It's not accurate to describe strategies as mappings f: X->Y. A strategy can be interactive: it takes input, produces an output, and then the environment can prepare another input depending on this output, and so on. Think normalization in lambda calculus. So, the agent's strategy is specified by a program, but generally speaking this program is untyped.
Let's assume that there is a single world program, as described here. Then, if A is the agent's program known to the agent, B is one possible strategy for that program, given in form of a program, X is the world program known to the agent, and Y is one of the possible world execution histories of X given that A behaves like B, again given in form of a program, then mathematical intuition M(B,Y) returns the probability that the statement (A~B => X~Y) is true, where A~B stands for "A behaves like B", and similarly for X and Y. (This taps into the ambient control analysis of decision theory.)
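The role of M(B, Y) in choosing a strategy might be rendered as the following toy sketch, under the single-world-program assumption above. (Everything here — the strategy labels, the history labels, and the particular numbers in M and U — is an illustrative assumption, not anything from the posts.)

```python
# Toy sketch: choosing the strategy B that maximizes the expected
# utility  sum_Y M(B, Y) * U(Y),  where M(B, Y) estimates the
# probability that (A~B => X~Y) is true.

def choose_strategy(strategies, histories, M, U):
    def expected_utility(B):
        return sum(M(B, Y) * U(Y) for Y in histories)
    return max(strategies, key=expected_utility)

# Hypothetical numbers: two candidate strategies, two world histories.
M = lambda B, Y: {("coop", "good"): 0.8, ("coop", "bad"): 0.2,
                  ("defect", "good"): 0.3, ("defect", "bad"): 0.7}[(B, Y)]
U = lambda Y: 1.0 if Y == "good" else 0.0

print(choose_strategy(["coop", "defect"], ["good", "bad"], M, U))  # "coop"
```

With these made-up numbers, "coop" makes the good history likelier (0.8 vs 0.3), so it wins the argmax.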
I'm following this paragraph from Wei Dai's post on UDT1.1:
So, "input/output mappings" is Wei Dai's language. Does he not mean mappings between the set of possible inputs and the set of possible outputs?
It seems to me that this could be captured by the right function f: X -> Y. The set I of input-output mappings could be a big collection of GLUTs. Why wouldn't that suffice for Wei Dai's purposes?
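A GLUT in this sense is just an exhaustive lookup table pairing every possible input with a fixed output; a minimal illustration (the inputs, outputs, and tables here are all hypothetical):

```python
# A "giant lookup table" (GLUT) realizing a mapping f: X -> Y:
# every possible input is listed alongside its fixed output.
glut_1 = {"input_a": "out_1", "input_b": "out_2"}
glut_2 = {"input_a": "out_2", "input_b": "out_1"}

# The set I of input/output mappings is then just a collection of GLUTs.
I = [glut_1, glut_2]

def apply_mapping(f, x):
    return f[x]

print(apply_mapping(I[0], "input_b"))  # "out_2"
```

(The reply above argues this still falls short for interactive strategies, where the next input depends on the previous output.)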
ETA: And it feels weird typing out "Wei Dai" in full all the time. But the name looks like it might be Asian to me, so I don't know which part is the surname and which is the given name.
I've been wondering why people keep using my full name around here. Yes, the name is Chinese, but since I live in the US I follow the given-name-first convention. Feel free to call me "Wei".
No, you can't represent an interactive strategy by a single input to output mapping. That post made a step in the right direction, but stopped short of victory :-). But I must admit, I forgot about that detail in the second post, so you've correctly rendered Wei's algorithm, although using untyped strategies would further improve on that.
I gave an accurate definition of Wei Dai's utility function U. As you note, I did not say what U is for, because I was not giving a complete recapitulation of UDT. In particular, I did not imply that U(<E1,E2,...>) is the utility of the agent's decision.
(I understand that U(<E1,E2,...>) is the utility that the agent assigns to having program Pi undergo execution history Ei for all i. I understand that, here, Ei is a complete history of what the program Pi does. However, note that this does include the agent's chosen action if Pi calls the agent as a subroutine. But none of this was relevant to the point that I was making, which was to point out that my post only applies to UDT agents that use a particular kind of function U.)
It's looking to me like I'm following one of Wei Dai's uses of the word "probability", and you're following another. You think that Wei Dai should abandon the use of his that I'm following. I am not seeing that this dispute is more than semantics at this point. That wasn't the case earlier, by the way, where I really did misunderstand where the probabilities of possible worlds show up in Wei Dai's formalism. I now maintain that these probabilities are the values I denoted by pr(Pi) when U has the form I describe in the footnote. Wei Dai is welcome to correct me if I'm wrong.
I agree with this description now. I apologize for this instance and a couple of others; I stayed up too late last night, and a negative impression of your post from the other mistakes primed me to see mistakes where everything is correct.
It was a little confusing, because the probabilities here have nothing to do with the probabilities supplied by the mathematical intuition, while the probabilities of the mathematical intuition are still in play. In UDT, different world programs correspond to observational and indexical uncertainty, while different execution strategies correspond to logical uncertainty about a specific world program. Only where there is essentially no indexical uncertainty does it make sense to introduce probabilities of possible worlds, factorizing the probabilities otherwise supplied by the mathematical intuition together with those describing logical uncertainty.
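The factorization being described might be sketched as follows: with no indexical uncertainty, the mathematical-intuition probability over (world, history) pairs splits into a fixed world weight times a per-world logical-uncertainty term. (The function names, weights, and history labels are all illustrative assumptions.)

```python
# Sketch of the factorization: M(B, (i, Y)) = pr[i] * M_i(B, Y), where
# pr[i] is a fixed "probability of possible world" P_i and M_i captures
# the remaining logical uncertainty about world program P_i.

def factored_M(pr, M_per_world):
    def M(B, world_history):
        i, Y = world_history
        return pr[i] * M_per_world[i](B, Y)
    return M

# Toy usage: two worlds, trivial per-world intuitions that are certain
# of one history each.
M = factored_M(pr=[0.6, 0.4],
               M_per_world=[lambda B, Y: 1.0 if Y == "h0" else 0.0,
                            lambda B, Y: 1.0 if Y == "h1" else 0.0])
print(M("any", (0, "h0")))  # 0.6
```

When indexical uncertainty is present, the comment's point is that no such clean split into pr[i] and M_i is available.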
Thanks for the apology. I accept responsibility for priming you with my other mistakes.
I hadn't thought about the connection to indexical uncertainty. That is food for thought.
Very very wrong. The world program P (or what it does, anyway) is the only thing that's actually controlled in this control problem statement (more generally, a list <P1, P2, P3, ...> of programs, which could equivalently be represented by one program parametrized by an integer).
Edit: I misinterpreted the way Tyrrell used "P", correction here.
Here is the relevant portion of Wei Dai's post:
If I am reading him correctly, he uses the letter "P" in two different ways. In one use, he writes Pi, where i is an integer, to denote a program. In the other use, he writes P_Y, where Y is an output vector, to denote a probability distribution.
I was referring to the second use.
Okay, the characterization of P_Y seems right. For my reaction I blame the prior.
Returning to the original argument,
P_Y is not a description of probabilities of possible worlds conceived by the agent's builders; it's something produced by the "mathematical intuition module" for a given output Y (or strategy Y, if you incorporate the later patch to UDT).
You are right here. Like you, I misremembered Wei Dai's notation. See my last (I hope) edit to that comment.
I would appreciate it if you edited your comment where you say that I was "very very wrong" to say that P isn't controlled by the agent's decisions.
It's easier to have a linear discussion rather than trying to patch everything by re-editing it from the start (just saying, you are doing this for the third time to that poor top-level comment). You got something wrong, then I got something wrong, the errors were corrected as the discussion developed; moving on. The history doesn't need to be corrected. (I insert corrections to comments this way, without breaking the sequence.)
Thank you for the edit.
The second question (edited in later) is more pressing: you can't postulate fixed probabilities of possible worlds; how the agent controls these probabilities is essential.
See my edit to my reply.