Vladimir_Nesov comments on UDT agents as deontologists - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (109)
To the agent's builders.
ETA: I make that clear later in the post, but I'll add it to the intro paragraph.
I'm not sure what you mean. What I'm describing as coded into the agent "from birth" is Wei Dai's function P, which takes an output string Y as its argument (using subscript notation in his post).
ETA: Sorry, that is not right. To be more careful, I mean the "mathematical intuition" that takes in an input X and returns such a function P. But P isn't controlled by the agent's decisions.
ETA2: Gah. I misremembered how Wei Dai used his notation. And when I went back to the post to answer your question, I skimmed too quickly and misread.
So, final answer, when I say that "the agent always cares about all possible worlds according to how probable those worlds seemed to the agent's builders when they wrote the agent's source code", I'm talking about the "preference vector" that Wei Dai denotes by "<E1, E2, . . . >" and which he says "defines its preferences on how those programs should run."
I took him to be thinking of these entries Ei as corresponding to probabilities because of his post What Are Probabilities, Anyway?, where he suggests that "probabilities represent how much I care about each world".
ETA3: Nope, this was another misreading on my part. Wei Dai does not say that <E1, E2, . . . > is a vector of preferences, or anything like that. He says that it is an input to a utility function U, and that utility function is what "defines [the agent's] preferences on how those programs should run". So, what I gather very tentatively at this point is that the probability of each possible world is baked into the utility function U.
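If I've understood the structure right, the decision rule being described (a utility function U over vectors of execution histories, fed by the "mathematical intuition" P) can be sketched very roughly. Everything below is invented for illustration — the function names, the toy probability table, and the two-history example are my assumptions, not Wei Dai's actual formalism:

```python
# Rough sketch of the UDT-style decision rule under discussion.
# `math_intuition` plays the role of Wei Dai's P: for a candidate output Y,
# it assigns a probability to each execution-history vector <E1, E2, ...>.
# `utility` plays the role of U, which defines preferences over those vectors.

def udt_choose(outputs, histories, math_intuition, utility):
    """Pick the output Y maximizing sum over E of P(E | S outputs Y) * U(E)."""
    def expected_utility(y):
        return sum(math_intuition(y, e) * utility(e) for e in histories)
    return max(outputs, key=expected_utility)

# Toy example (all numbers invented): two outputs, two history vectors.
histories = ["E_good", "E_bad"]

def math_intuition(y, e):
    # Probability that history vector e obtains if the agent outputs y.
    table = {("a", "E_good"): 0.9, ("a", "E_bad"): 0.1,
             ("b", "E_good"): 0.2, ("b", "E_bad"): 0.8}
    return table[(y, e)]

def utility(e):
    return {"E_good": 1.0, "E_bad": 0.0}[e]

print(udt_choose(["a", "b"], histories, math_intuition, utility))  # prints a
```

On this reading, the point about probabilities being "baked in" would correspond to the weights inside `math_intuition` (or, on the later reading, inside `utility`) being fixed at coding time rather than updated by the agent.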
The second question (edited in later) is more pressing: you can't postulate fixed probabilities of possible worlds; how the agent controls these probabilities is essential.
See my edit to my reply.