
Houshalter comments on Median utility rather than mean? - Less Wrong Discussion

6 points | Post author: Stuart_Armstrong 08 September 2015 04:35PM




Comment author: Houshalter 09 September 2015 09:22:01PM 0 points

Your example is difficult to follow, but I think you are missing the point. If there is only one decision, then its actions can't be inconsistent. By choosing a policy only once, one that maximizes its desired probability distribution over utility outcomes, it's not money-pumpable and it's not inconsistent.

Now, by itself it still sucks, because we probably don't want to maximize for the best median future. But it opens the door to more general policies for making decisions. You no longer have to use expected utility if you want to be consistent. You can choose a tradeoff between expected utility and median utility (see my top-level comment), or a different algorithm entirely.
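As a rough sketch of what such a tradeoff could look like (the two lotteries and the weight `alpha` below are invented purely for illustration):

```python
# Each lottery is a list of (probability, utility) outcomes.
safe  = [(1.0, 50)]                     # guaranteed 50
risky = [(0.6, 0), (0.4, 200)]          # usually 0, sometimes 200

def mean_utility(lottery):
    return sum(p * u for p, u in lottery)

def median_utility(lottery):
    # Median of the outcome distribution: the smallest utility
    # whose cumulative probability reaches 0.5.
    cum = 0.0
    for p, u in sorted(lottery, key=lambda x: x[1]):
        cum += p
        if cum >= 0.5:
            return u

def blended_score(lottery, alpha=0.5):
    # A simple convex tradeoff between the two criteria.
    return alpha * mean_utility(lottery) + (1 - alpha) * median_utility(lottery)

# Expected utility prefers `risky` (mean 80 vs 50);
# median utility prefers `safe` (median 50 vs 0).
print(mean_utility(risky), median_utility(risky))
print(mean_utility(safe), median_utility(safe))
```

With `alpha = 0.5` the blend sides with `safe` here; sliding `alpha` toward 1 recovers pure expected-utility maximization.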

Comment author: AlexMennen 09 September 2015 11:52:42PM 0 points

If there is only one decision point in each possible world, then it is impossible to demonstrate inconsistency within a world, but you can still be inconsistent between different possible worlds.

Edit: as V_V pointed out, the VNM framework was designed to handle isolated decisions. So if you think that considering an isolated decision rather than multiple decisions removes the motivation for the independence axiom, then you have misunderstood the motivation for the independence axiom.

Comment author: Stuart_Armstrong 10 September 2015 08:46:45AM 1 point

So if you think that considering an isolated decision rather than multiple decisions removes the motivation for the independence axiom, then you have misunderstood the motivation for the independence axiom.

I understand the two motivations for the independence axiom, and the practical one ("you can't be money pumped") is much more important than the theoretical one ("your system obeys this here philosophically neat understanding of irrelevant information").

But this is kind of a moot point, because humans don't have utility functions. And therefore we will have to construct them. And the process of constructing them is almost certainly going to depend on facts about the world, making the construction process almost certainly inconsistent between different possible worlds.

Comment author: AlexMennen 10 September 2015 11:00:40PM 0 points

And the process of constructing them is almost certainly going to depend on facts about the world

It shouldn't. If your preferences among outcomes depend on what options are actually available to you, then I don't see how you can justify claiming to have preferences among outcomes, as opposed to tendencies to make certain choices.

Comment author: Stuart_Armstrong 11 September 2015 08:37:05AM 1 point

It shouldn't.

Then define me a process that takes people's current mess of preferences, makes these into utility functions, and, respecting bounded rationality, is independent of options available in the real world. Even then, we have the problem that this mess of preferences is highly dependent on real world experiences in the first place.

I don't see how you can justify claiming to have preferences among outcomes, as opposed to tendencies to make certain choices.

If I always go left at a road, I have a tendency to make certain choices. If I have a full model of the entire universe with labelled outcomes ranked on a utility function, and use it with unbounded rationality to make decisions, I have preferences among outcomes. The extremes are clear.

I feel that a bounded human being with a crude mental model, trying to achieve some goal imperfectly (because of ingrained bad habits, for instance), is better described as having preferences among outcomes. You could argue that they have mere tendencies, but this seems to stretch the term. In any case, this is a simple linguistic dispute. Real human beings cannot achieve independence.

Comment author: AlexMennen 11 September 2015 05:02:01PM 0 points

Then define me a process that takes people's current mess of preferences, makes these into utility functions, and, respecting bounded rationality, is independent of options available in the real world.

Define me a process with all those properties except the last one. If you can't do that either, it's not the last constraint that is to blame for the difficulty.

Even then, we have the problem that this mess of preferences is highly dependent on real world experiences in the first place.

Yes, different agents have different preferences. The same agent shouldn't have its preferences change when the available outcomes do.

If I have a full model of the entire universe with labelled outcomes ranked on a utility function, and use it with unbounded rationality to make decisions, I have preferences among outcomes.

If you are neutral between .4A+.6C and .4B+.6C, then you don't have a very good claim to preferring A over B.
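A quick sketch of why a pure median maximizer is neutral between these two mixtures (the utilities A=10, B=0, C=100 are invented for illustration):

```python
def median_utility(lottery):
    # lottery: list of (probability, utility); probabilities sum to 1.
    # Returns the smallest utility whose cumulative probability reaches 0.5.
    cum = 0.0
    for p, u in sorted(lottery, key=lambda x: x[1]):
        cum += p
        if cum >= 0.5:
            return u

A, B, C = 10, 0, 100          # illustrative utilities; A is clearly better than B

mix_A = [(0.4, A), (0.6, C)]  # the gamble .4A + .6C
mix_B = [(0.4, B), (0.6, C)]  # the gamble .4B + .6C

# The 0.6 mass on C pins down the median in both mixtures, so the
# A-vs-B difference is invisible to a median maximizer.
print(median_utility(mix_A), median_utility(mix_B))   # both medians equal C
```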

Comment author: Stuart_Armstrong 14 September 2015 11:28:31AM 0 points

Define me a process with all those properties except the last one.

Well, there's my old idea here: http://lesswrong.com/lw/8qb/cevinspired_models/ . I don't think it's particularly good, but it does construct a utility function, and might be doable with good enough models or a WBE. More broadly, there's the general "figure out human preferences from their decisions and from hypothetical questions and fit a utility function to it", which we can already do today (see "inverse reinforcement learning"); we just can't do it well enough, yet, to get something generally safe at the other end.

None of these ideas have independent variants (not technically true; I can think of some independent versions of them, but they're so ludicrously unsafe in our world that we'd rule them out immediately; thus, this would be a non-independent process).

If you are neutral between .4A+.6C and .4B+.6C, then you don't have a very good claim to preferring A over B.

?

If I actually do prefer A over B (and my behaviour reflects that in (1-ε)A + εC versus (1-ε)B + εC cases), then I have an extremely good claim to preferring A over B, and an extremely poor claim to independence.

Comment author: AlexMennen 14 September 2015 06:07:16PM 0 points

Well, there's my old idea here: http://lesswrong.com/lw/8qb/cevinspired_models/ . I don't think it's particularly good

I assumed accuracy was implied by "making a mess of preferences into a utility function".

More broadly, there's the general "figure out human preferences from their decisions and from hypothetical questions and fit a utility function to it", which we can already do today (see "inverse reinforcement learning"); we just can't do it well enough, yet, to get something generally safe at the other end.

I'm somewhat skeptical of that strategy for learning utility functions, because the space of possible outcomes is extremely high-dimensional, and it may be difficult to test extreme outcomes because the humans you're trying to construct a utility function for might not be able to understand them. But perhaps this objection doesn't get to the heart of the matter, and I should put it aside for now.

None of these ideas have independent variants

I am admittedly not well-versed in inverse reinforcement learning, but this is a perplexing claim. Except for a few people like you suggesting alternatives, I've only ever heard "utility function" used to refer to a function you maximize the expected value of (if you're trying to handle uncertainty), or a function you just maximize the value of (if you're not trying to handle uncertainty). In the first case, we have independence. In the second case, the question of whether or not we obey independence doesn't really make sense. So if inverse reinforcement learning violates independence, then what exactly does it try to fit to human preferences?

If I actually do prefer A over B

Then if the only difference between two gambles is that one might give you A when the other might give you B, you'll take the one that might give you something you like instead of something you don't like.

Comment author: Stuart_Armstrong 15 September 2015 11:01:48AM 0 points

I've only ever heard "utility function" used to refer to

To be clear, I am saying the process of constructing the utility function violates independence, not that subsequently maximising it does. Similarly, choosing a median-maximising policy P violates independence, but there is (almost certainly) a utility u such that maximising u is the same as following P.

Once the first choice is made, we have independence in both cases; before it is made, we have it in neither. The philosophical underpinning of independence in single decisions therefore seems very weak.

Comment author: AlexMennen 15 September 2015 05:08:30PM 0 points

To be clear, I am saying the process of constructing the utility function violates independence

Feel free to tell me to shut up and learn how inverse reinforcement learning works before bothering you with such questions, if that is appropriate, but I'm not sure what you mean. Can you be more precise about what property you're saying inverse reinforcement learning doesn't have?

Comment author: Houshalter 10 September 2015 12:08:00AM 0 points

It can't be inconsistent within a world, no matter how many decision points there are. If we agree it's not inconsistent, then what are you arguing against?

I don't care about the VNM framework. As you said, it is designed to be optimal for decisions made in isolation. Because we don't need to make decisions in isolation, we don't need to be constrained by it.

Comment author: AlexMennen 10 September 2015 12:29:28AM 0 points

If we agree it's not inconsistent...

No. Inconsistency between different possible worlds is still inconsistency.

Because we don't need to make decisions in isolation, we don't need to be constrained by it.

The difference doesn't matter that much in practice. If there are multiple decision points, you can combine them into one by selecting a policy, or by considering them sequentially and using your beliefs about what your choices will be in the future to compute the expected utilities of the possible decisions available to you now. The reason that the VNM framework was designed for one-shot decisions is that it makes things simpler without actually constraining what it can be applied to.

Comment author: Houshalter 11 September 2015 12:01:04AM 0 points

No. Inconsistency between different possible worlds is still inconsistency.

It's perfectly consistent in the sense that it's not money-pumpable, and it always makes the same decisions given the same information. It will make different decisions in different situations, given different information. But that is not inconsistent by any reasonable definition of "inconsistent".

The difference doesn't matter that much in practice.

It makes a huge difference. If you want to get the best median future, then you can't make decisions in isolation. You need to consider every possible decision you will have to make, and their probabilities, and choose a decision policy that selects the best median outcome.
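A rough sketch of what evaluating policies as a whole might look like (the two-stage problem and its payoffs are invented for illustration):

```python
from itertools import product

# Hypothetical two-stage problem: at each stage, pick "safe" or "risky".
# Each stage's payoff distribution is a list of (probability, utility).
STAGE = {
    "safe":  [(1.0, 10)],
    "risky": [(0.5, 0), (0.5, 30)],
}

def outcome_distribution(policy):
    # Total utility is the sum over stages; stages are independent.
    dist = [(1.0, 0)]
    for choice in policy:
        dist = [(p1 * p2, u1 + u2)
                for p1, u1 in dist
                for p2, u2 in STAGE[choice]]
    return dist

def median_utility(dist):
    # Smallest utility whose cumulative probability reaches 0.5.
    cum = 0.0
    for p, u in sorted(dist, key=lambda x: x[1]):
        cum += p
        if cum >= 0.5:
            return u

# Evaluate every whole policy, rather than each stage in isolation.
policies = list(product(["safe", "risky"], repeat=2))
best = max(policies, key=lambda pol: median_utility(outcome_distribution(pol)))
for pol in policies:
    print(pol, median_utility(outcome_distribution(pol)))
```

Note that stage-by-stage median maximization would pick "safe" each time (per-stage medians 10 vs 0, total 20), while the whole-policy median prefers "risky" twice (overall median 30), which is the sense in which the decisions can't be made in isolation.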

Comment author: AlexMennen 11 September 2015 01:05:02AM 0 points

It's perfectly consistent in the sense that it's not money pumpable, and always makes the same decisions given the same information.

As in my previous example (sorry about it being difficult to follow, though I'm not sure yet what I could say to clarify things), it is inconsistent in the sense that it can lead you to pay for probability distributions over outcomes that you could have achieved for free.

You need to consider every possible decision you will have to make, and their probability.

Right. As I just said, "you can... consider them sequentially and use your beliefs about what your choices will be in the future to compute the expected utilities of the possible decisions available to you now." (edited to fix grammar). This reduces iterated decisions to isolated decisions: you have certain beliefs about what you'll do in the future, and now you just have to make a decision on the issue facing you now.