cousin_it comments on Subjective anticipation as a decision process - Less Wrong

Post author: Stuart_Armstrong 08 February 2011 11:07AM


Comments (23)


Comment author: cousin_it 08 February 2011 04:16:32PM

With a UDT utility function, you still have a separate object representing the probability distribution over possible worlds; it's not part of the utility function.

No, in the formalism of Wei's original post it's all one giant object, which is not necessarily decomposable in the way you suggest. But this is probably splitting hairs.

Tentatively agree with your last paragraph, but need to understand more.

Comment author: Vladimir_Nesov 08 February 2011 04:39:09PM

Nah, in the formalism of Wei's original post it's all one giant object.

It doesn't read this way to me. From the post:

More generally, we can always represent your preferences as a utility function on vectors of the form <E1, E2, E3, …> where E1 is an execution history of P1, E2 is an execution history of P2, and so on. [...]

When it receives an input X, it looks inside the programs P1, P2, P3, ..., and uses its "mathematical intuition" to form a probability distribution P_Y over the set of vectors <E1, E2, E3, …> for each choice of output string Y. Finally, it outputs a string Y* that maximizes the expected utility Sum P_Y(<E1, E2, E3, …>) U(<E1, E2, E3, …>).

U is still a utility function without probabilities, and the probabilities come from "mathematical intuition", which is separate from utility assignment. That is what I said:

you still have a separate object representing the probability distribution over possible worlds; it's not part of the utility function
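The decision rule quoted above can be sketched in code. This is a minimal illustration of my own, not Wei Dai's actual formalism; every name in it (`udt_choose`, `math_intuition`, the toy histories) is a hypothetical stand-in. The structural point at issue is visible in the signature: the utility function U over execution-history vectors and the "mathematical intuition" distribution P_Y are passed in as separate objects.

```python
# Hedged sketch of the quoted rule: output Y* maximizes
# Sum over history vectors h of P_Y(h) * U(h).

def udt_choose(outputs, histories, math_intuition, U):
    """Return the output Y* maximizing expected utility.

    outputs        -- candidate output strings Y
    histories      -- execution-history vectors <E1, E2, ...>
    math_intuition -- math_intuition(Y, h): probability of history
                      vector h given output Y (the separate prior)
    U              -- U(h): utility of h (contains no probabilities)
    """
    def expected_utility(Y):
        return sum(math_intuition(Y, h) * U(h) for h in histories)
    return max(outputs, key=expected_utility)

# Toy instance: one unit of "chocolate" per history; intuition says the
# realized history mirrors the chosen output, so outputting 1 wins.
histories = [(1,), (0,)]
intuition = lambda Y, h: 1.0 if h[0] == Y else 0.0
best = udt_choose([0, 1], histories, intuition, lambda h: h[0])
# best == 1
```

Whether the two arguments are "really" one giant object or two separate ones is exactly what the thread is disputing.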

Comment author: cousin_it 08 February 2011 07:44:21PM

Wha? The probability distribution given by mathematical intuition isn't part of the problem statement; it's part of the solution. We already know how to infer it from the utility function in simple cases, and the idea is that it should be inferable in principle.

When I read your comments, I often don't understand what you understand and what you don't. For the benefit of onlookers I'll try to explain the idea again anyway.

A utility function defined on vectors of execution histories may be a weighted sum of utility functions on individual execution histories, or it may be something more complex. For example, you may care about the total amount of chocolate you get in world-programs P1 and P2 combined. This corresponds to a "prior probability distribution" of 50/50 between the two possible worlds, if you look at the situation through indexical-uncertainty goggles instead of UDT goggles. Alternatively, you may care about the product of the amounts of chocolate you get in P1 and P2, which isn't so easy to interpret as indexical uncertainty.
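The chocolate example can be made concrete with a small sketch. The numbers and function names here are my own toy illustration, not from the thread; the point is that the additive utility's weights normalize into a probability distribution, while the multiplicative one offers no such weights.

```python
# Toy example: h = (chocolate in P1's history, chocolate in P2's history).

def total_chocolate(h):
    # Additive utility: equivalent (up to scaling) to a 50/50 "prior"
    # over the two worlds, since c1 + c2 == 2 * (0.5*c1 + 0.5*c2).
    c1, c2 = h
    return c1 + c2

def product_chocolate(h):
    # Multiplicative utility: there are no per-world weights to
    # normalize into a probability distribution, hence no easy
    # indexical-uncertainty reading.
    c1, c2 = h
    return c1 * c2

# Reading off the implied prior from the additive case: the equal
# weights (1, 1) normalize to (0.5, 0.5).
weights = (1.0, 1.0)
prior = tuple(w / sum(weights) for w in weights)
# prior == (0.5, 0.5)
```

Unequal additive weights would normalize to an unequal "prior" the same way, which is the sense in which the distribution is inferable from the utility function in simple cases.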

Comment author: Vladimir_Nesov 08 February 2011 10:59:41PM

When you expect almost complete logical transparency, mathematical intuition won't specify anything more than the logical axioms. But where you expect logical uncertainty, the probabilities given by mathematical intuition play a role analogous to that of a prior distribution: the utilities associated with specific execution histories are taken through another expectation according to those probabilities. I agree that, to the extent mathematical intuition plays no role in decision-making, UDT utilities are analogous to expected utility; but in fact it does play that role, and it's more natural to draw the analogy between the informal notion of possible worlds and execution histories than between possible worlds and world-programs. See also this comment.