Vladimir_Nesov comments on Bayesian Utility: Representing Preference by Probability Measures

Post author: Vladimir_Nesov 27 July 2009 02:28PM 10 points

Comment author: Vladimir_Nesov 27 July 2009 10:26:28PM 1 point

Any action can be identified with a set of outcomes consistent with the action. See my reply to JGWeissman.
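
A minimal sketch of this identification in Python, assuming an illustrative outcome space, prior, and utility function (none of these numbers are from the post):

```python
# An "action" is identified with the event (set of outcomes) consistent
# with it; actions are then compared by conditional expected utility.
# Prior and utility values below are made up for illustration.

prior = {"rain": 0.3, "sun": 0.5, "snow": 0.2}    # p(x)
utility = {"rain": 1.0, "sun": 4.0, "snow": 0.0}  # u(x)

def expected_utility(event):
    """E[u | A] = (sum over x in A of p(x) u(x)) / p(A), for an event A."""
    p_event = sum(prior[x] for x in event)
    return sum(prior[x] * utility[x] for x in event) / p_event

# Two hypothetical actions, each just the set of outcomes consistent with it.
action_1 = {"rain", "sun"}
action_2 = {"rain", "snow"}

print(expected_utility(action_1))  # ≈ 2.875
print(expected_utility(action_2))  # ≈ 0.6
```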

Is the example after the mixing step unclear? In what way?

Comment author: cousin_it 27 July 2009 10:33:20PM 2 points

Yes, that's true, but it makes your conclusion a bit misleading, because not all sets of outcomes correspond to possible actions. It can easily happen that any preference ordering on actions is rationalizable by tweaking the utility under a given prior.

The math in the example is clear enough; I just don't understand the motivation for it. If you reduce everything to a preference relation on subsets of a sigma-algebra, it's trivially true that you can tweak it with any monotonic function, not just by mixing p and q with alpha and beta. So what?
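
To make this point concrete, here is a small numerical check, assuming made-up measures p and q and made-up weights alpha and beta (none of this is data from the post): mixing the pair (p, q) is a fractional-linear, hence monotone, transform of the ratio q(A)/p(A), so it induces the same ordering on events as the ratio itself, and so does any other increasing function, such as log.

```python
# Sketch: the preference ordering on events given by q(A)/p(A) survives
# both mixing (p, q) with weights alpha > beta and any other increasing
# transform. Measures and weights are made up for illustration.
import math
from itertools import combinations

outcomes = ["a", "b", "c"]
p = {"a": 0.5, "b": 0.3, "c": 0.2}  # illustrative "prior" measure
q = {"a": 0.1, "b": 0.2, "c": 0.7}  # illustrative second measure

def ratio(event, num, den):
    """Measure of an event under num divided by its measure under den."""
    return sum(num[x] for x in event) / sum(den[x] for x in event)

def mix(w, m1, m2):
    """Pointwise mixture w*m1 + (1 - w)*m2 of two measures."""
    return {x: w * m1[x] + (1 - w) * m2[x] for x in m1}

alpha, beta = 0.7, 0.2  # alpha > beta keeps the induced transform increasing
p2, q2 = mix(alpha, p, q), mix(beta, p, q)

# All non-empty events (subsets of the outcome space).
events = [frozenset(s) for k in (1, 2, 3) for s in combinations(outcomes, k)]

by_ratio = sorted(events, key=lambda A: ratio(A, q, p))
by_mixed = sorted(events, key=lambda A: ratio(A, q2, p2))
by_log = sorted(events, key=lambda A: math.log(ratio(A, q, p)))

assert by_ratio == by_mixed == by_log  # the ordering on events is unchanged
```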

Comment author: Vladimir_Nesov 27 July 2009 10:47:54PM 0 points

It can also happen that the prior is the right one, but that isn't guaranteed. This is a red flag, a possible flaw, something to investigate.

The question of which events are "possible actions" is a many-faceted one, and solving this problem "by definition" doesn't work. For example, if you can pick the best strategy, it doesn't matter what the preference order says for any event other than the best strategy, even for "possible actions" that won't actually happen.

Strictly speaking, I don't even trust (any) expected utility (and so Bayesian math) to represent preference. Any solution also has to work in a discrete deterministic setting.

Comment author: cousin_it 28 July 2009 07:45:26AM 1 point

It seems to me that you're changing the subject, or maybe making inferential jumps that are too long for me.

The information needed to determine which events are possible actions is absent from your model. You can't calculate it within your setting, only postulate it.

If the overarching goal of this post was finding ways to represent human preference (did you imply that? I can't tell), then I don't understand how it brings us closer to that goal.

Comment author: Vladimir_Nesov 28 July 2009 11:38:18AM 2 points

Hofstadter's Law of Inferential Distance: what you are saying is always harder to understand than you expect, even when you take into account Hofstadter's Law of Inferential Distance.

Of course this post is only a small side-note, and it says nothing about which events mean what. Human preference is still a preference, so even without the details, a discussion of preference-in-general has some implications for human preference, which the last paragraph of the post alluded to with regard to picking priors for Bayesian math.

Comment author: JGWeissman 27 July 2009 10:42:31PM 0 points

Expected utility is usually written for actions, but it can be written as in the post as well; the two formulations are formally equivalent.

However, the ratios of the conditional probabilities of those outcomes, given that you take a certain action, will not always equal the ratios of the unconditional probabilities, as in your formula.
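
A small numeric illustration of this point, assuming a made-up joint prior over (action, outcome) pairs (the scenario and numbers are not from the thread):

```python
# Sketch: P(x | a) / P(y | a) need not equal P(x) / P(y).
# The joint prior over (action, outcome) pairs is made up for illustration.
joint = {
    ("play_well", "win"):  0.45, ("play_well", "lose"):  0.05,
    ("play_badly", "win"): 0.05, ("play_badly", "lose"): 0.45,
}

def p_outcome(outcome):
    """Unconditional (marginal) probability of an outcome."""
    return sum(v for (a, o), v in joint.items() if o == outcome)

def p_outcome_given(outcome, action):
    """Conditional probability of an outcome given an action."""
    p_action = sum(v for (a, o), v in joint.items() if a == action)
    return joint[(action, outcome)] / p_action

# Unconditionally, win and lose are equally likely...
print(p_outcome("win") / p_outcome("lose"))                    # 1.0
# ...but conditional on the action "play_well" the ratio is 9 to 1.
print(p_outcome_given("win", "play_well")
      / p_outcome_given("lose", "play_well"))                  # ≈ 9.0
```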