Vladimir_Nesov comments on Bayesian Utility: Representing Preference by Probability Measures - Less Wrong

27 July 2009 02:28PM



Comment author: 27 July 2009 07:04:53PM 1 point [-]

As I said in the first phrase, this is but a "simple transformation of standard expected utility formula that I found conceptually interesting". I don't quite understand the second part of your comment (starting from "Since the probability...").

Comment author: 27 July 2009 07:32:20PM *  0 points [-]

I agree that it is an interesting transformation, but I think your conclusion ("No simple morality, no simple probability.") does not follow.

Comment author: 27 July 2009 07:39:35PM 2 points [-]

That argument says that if you pick a prior, you can't "patch" it to become an arbitrary preference by finding a fitting utility function. It's not particularly related to the shouldness/probability representation, and it isn't well-understood, but it's easy to demonstrate by example in this setting, and I think it's an interesting point as well, possibly worth exploring.

Comment author: 27 July 2009 09:56:50PM *  0 points [-]

The new version of the post still loses me at about the point where mixing comes in. (What's your motivation for introducing mixing at all?) I would've been happier if it went on about geometry instead of those huge inferential leaps at the end.

And JGWeissman is right, expected utility is a property of actions not outcomes which seems to make the whole post invalid unless you fix it somehow.

Comment author: 27 July 2009 10:26:28PM 1 point [-]

Any action can be identified with a set of outcomes consistent with the action. See my reply to JGWeissman.

Is the example after mixing unclear? In what way?
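The identification mentioned above can be sketched concretely. This is a minimal illustration with made-up outcomes, probabilities, and utilities (none of them from the post): an action is treated as the event containing the outcomes consistent with it, and its expected utility is computed by conditioning the prior on that event.

```python
# Illustrative sketch: identify an action with the set of outcomes
# consistent with it, and compute expected utility by conditioning.
# All values here are hypothetical.

outcomes = {"o1": 0.1, "o2": 0.3, "o3": 0.6}    # prior P(o)
utility  = {"o1": 10.0, "o2": 2.0, "o3": -1.0}  # U(o)

def expected_utility(event):
    """E[U | event] = sum_{o in event} U(o) P(o) / P(event)."""
    p_event = sum(outcomes[o] for o in event)
    return sum(utility[o] * outcomes[o] for o in event) / p_event

action_a = {"o1", "o2"}  # the event "outcomes consistent with action A"
print(expected_utility(action_a))  # (10*0.1 + 2*0.3) / (0.1 + 0.3) = 4.0
```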

Comment author: 27 July 2009 10:33:20PM *  2 points [-]

Yes, that's true, but it makes your conclusion a bit misleading, because not all sets of outcomes correspond to possible actions. It can easily happen that any preference ordering on actions is rationalizable by tweaking utility under a given prior.

The math in the example is clear enough, I just don't understand the motivation for it. If you reduce everything to a preference relation on subsets of a sigma algebra, it's trivially true that you can tweak it with any monotonic function, not just mixing p and q with alpha and beta. So what.
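The monotonic-function point can be checked with a toy example. This sketch uses invented "shouldness" values for three events (nothing here comes from the post): if a number V(E) represents a preference ordering on events, then f(V(E)) for any strictly increasing f represents the same ordering, of which mixing two measures with fixed weights is one special case.

```python
# Hypothetical illustration: any strictly increasing transform of a
# representing value preserves the induced preference ordering on events.
import math

V = {"E1": 0.2, "E2": 0.5, "E3": 0.9}  # made-up representing values

def same_order(u, v):
    """True iff u and v induce the same strict ordering on the events."""
    events = list(u)
    return all((u[a] < u[b]) == (v[a] < v[b])
               for a in events for b in events)

f = lambda x: math.exp(3 * x) - 1  # an arbitrary increasing function
V2 = {e: f(x) for e, x in V.items()}

print(same_order(V, V2))  # True: the ordering is unchanged
```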

Comment author: 27 July 2009 10:47:54PM 0 points [-]

It can also happen that the prior happens to be the right one, but it isn't guaranteed. This is a red flag, a possible flaw, something to investigate.

The question of which events are "possible actions" is a many-faceted one, and solving this problem "by definition" doesn't work. For example, if you can pick the best strategy, it doesn't matter what the preference order says for all events except the best strategy, even what it says for "possible actions" which won't actually happen.

Strictly speaking, I don't even trust (any) expected utility (and so Bayesian math) to represent preference. Any solution has to also work in a discrete deterministic setting.

Comment author: 28 July 2009 07:45:26AM *  1 point [-]

It seems to me that you're changing the subject, or maybe making inferential jumps that are too long for me.

The information to determine which events are possible actions is absent from your model. You can't calculate it within your setting, only postulate.

If the overarching goal of this post was finding ways to represent human preference (did you imply that? I can't tell), then I don't understand how it brings us closer to that goal.

Comment author: 28 July 2009 11:38:18AM 2 points [-]

Hofstadter's Law of Inferential Distance: What you are saying is always harder to understand than you expect, even when you take into account Hofstadter's Law of Inferential Distance.

Of course this post is only a small side note, and it tells nothing about which events mean what. Human preference is a preference, so even without details the discussion of preference-in-general has some implications for human preference, which the last paragraph of the post alluded to, with regard to picking priors for Bayesian math.

Comment author: 27 July 2009 10:42:31PM 0 points [-]

Expected utility is usually written for actions, but it can be written as in the post as well; the two forms are formally equivalent.

However, the ratios of the conditional probabilities of those outcomes, given that you take a certain action, will not always equal the ratios of the unconditional probabilities, as in your formula.
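This objection admits a small numerical illustration. The setup below is entirely hypothetical (two actions, two outcomes, made-up probabilities): when actions influence outcomes, the ratio of conditional outcome probabilities under a chosen action need not match the ratio of the unconditional (marginal) probabilities.

```python
# Hypothetical illustration: conditional vs. unconditional outcome ratios.
# All probabilities below are invented for the example.

p_outcome_given_action = {
    "practice": {"win": 0.8, "lose": 0.2},
    "rest":     {"win": 0.2, "lose": 0.8},
}
p_action = {"practice": 0.5, "rest": 0.5}  # assumed uniform over actions

# Unconditional (marginal) outcome probabilities
p_outcome = {
    o: sum(p_action[a] * p_outcome_given_action[a][o] for a in p_action)
    for o in ("win", "lose")
}

# Ratio of outcome probabilities conditional on the action "practice"
cond_ratio = (p_outcome_given_action["practice"]["win"]
              / p_outcome_given_action["practice"]["lose"])  # 0.8 / 0.2

# Ratio of unconditional outcome probabilities
uncond_ratio = p_outcome["win"] / p_outcome["lose"]  # 0.5 / 0.5

print(cond_ratio, uncond_ratio)  # 4.0 1.0 -- the ratios differ
```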