by [anonymous]
7 min read · 7 Apr 2013 · 17 comments


Related: Pinpointing Utility

Let's go for lunch at the Hypothetical Diner; I have something I want to discuss with you.

We will pick our lunch from the set of possible orders, and we will receive a meal drawn from the set of possible meals, O.

In general, each possible order has an associated probability distribution over O. The Hypothetical Diner takes care to simplify your analysis: the distributions are trivial; you always get exactly what you ordered.

Again to simplify your lunch, the Hypothetical Diner offers only two choices on the menu: the Soup, and the Bagel.

To then complicate things so that we have something to talk about, suppose there is some set M of ways other things could be that may affect your preferences. Perhaps you have sore teeth on some days.

Suppose for the purposes of this hypothetical lunch date that you are VNM rational. Shocking, I know, but the hypothetical results are clear: you have a utility function, U. The domain of the utility function is the product of all the variables that affect your preferences (which meal, and whether your teeth are sore): U: M x O -> utility.

In our case, if your teeth are sore, you prefer the soup, as it is less painful. If your teeth are not sore, you prefer the bagel, because it is tastier:

U(sore & soup) > U(sore & bagel)
U(~sore & soup) < U(~sore & bagel)

Your global utility function can be partially applied to some m in M to get an "object-level" utility function U_m: O -> utility. Note that the restrictions of U made in this way need not have any resemblance to each other; they are completely separate.

It is convenient to think about and define these restricted "utility function patches" separately. Let's pick units and zero points so we can get concrete numbers for our utilities:

U_sore(soup) = 1 ; U_sore(bagel) = 0
U_unsore(soup) = 0 ; U_unsore(bagel) = 1

Those are separate utility functions now, so we can pick units and zero points separately. Because of this, the sore numbers are totally incommensurable with the unsore numbers. Don't try to compare them across the two functions or you will get type-poisoning. The actual numbers are just a straightforward encoding of the preferences mentioned above.
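
(If a concrete representation helps, here is a minimal sketch of the two patches as plain Python dictionaries. The dictionary representation and the names are just my illustration; the numbers are the ones above.)

```python
# The two "utility function patches" above, as plain Python dictionaries.
U_sore = {"soup": 1.0, "bagel": 0.0}     # preferences when teeth are sore
U_unsore = {"soup": 0.0, "bagel": 1.0}   # preferences when teeth are not sore

# Each patch, taken on its own, recommends a different order.
print(max(U_sore, key=U_sore.get))       # -> soup
print(max(U_unsore, key=U_unsore.get))   # -> bagel

# The numbers are NOT comparable across the two dicts: each patch was
# scaled with its own units and zero point.
```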

What if we are unsure about where we fall in M? Say you won't know whether your teeth are sore until you take the first bite; all we have is a probability distribution over M. Maybe we are 70% sure that your teeth won't hurt you today. What should you order?

Well, it's usually a good idea to maximize expected utility:

EU(soup) = 30%*U(sore&soup) + 70%*U(~sore&soup) = ???
EU(bagel) = 30%*U(sore&bagel) + 70%*U(~sore&bagel) = ???

Suddenly we need those utility function patches to be commensurable so that we can actually compute these, but we went and defined them separately, darn. All is not lost, though: recall that they are just restrictions of a global utility function to a particular soreness-circumstance, with some (positive) linear transforms, f_m, thrown in to make the numbers nice:

f_sore(U(sore&soup)) = 1 ; f_sore(U(sore&bagel)) = 0
f_unsore(U(~sore&soup)) = 0 ; f_unsore(U(~sore&bagel)) = 1

At this point, it's just a bit of clever function-inverting and all is dandy. We can pick some linear transform g to be canonical, and transform all the utility function patches into that basis. So for all m, we can get g(U(m & o)) by inverting the f_m and then applying g:

g.U(sore & x)  = (g . inv(f_sore) . f_sore)(U(sore & x))
               = k_sore * U_sore(x) + c_sore
g.U(~sore & x) = (g . inv(f_unsore) . f_unsore)(U(~sore & x))
               = k_unsore * U_unsore(x) + c_unsore

(I'm using . to represent composition of those transforms. I hope that's not too confusing.)

Linear transforms are really nice; all the inverting and composing collapses down to a scale k and an offset c for each utility function patch. Now we've turned our bag of utility function patches into a utility function quilt! One more bit of math before we get back to deciding what to eat:

EU(x) = P(sore)     * (k_sore   * U_sore(x)   + c_sore)
      + (1-P(sore)) * (k_unsore * U_unsore(x) + c_unsore)
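
Here is a quick sketch of that formula in code (Python), with made-up values for the probabilities, scales, and offsets, just to make the next point concrete: as long as P(m) does not depend on what we order, the c_m terms cannot change which order wins.

```python
# A sketch of the quilted EU(x) above, with made-up probabilities, scales
# (k_m) and offsets (c_m); none of these numbers are forced on us yet.
patches = {
    "sore":   {"soup": 1.0, "bagel": 0.0},   # U_sore
    "unsore": {"soup": 0.0, "bagel": 1.0},   # U_unsore
}
P = {"sore": 0.3, "unsore": 0.7}   # P(m); assumed independent of what we order
k = {"sore": 1.0, "unsore": 0.2}   # scale of each patch in the canonical basis g
c = {"sore": 0.0, "unsore": 5.0}   # arbitrary offsets

def EU(order, k, c):
    return sum(P[m] * (k[m] * patches[m][order] + c[m]) for m in P)

# Since P(m) does not depend on the order, the c_m terms add the same constant
# to every order's EU, so they never change which order wins:
no_c = {m: 0.0 for m in P}
assert (EU("soup", k, c) > EU("bagel", k, c)) == (EU("soup", k, no_c) > EU("bagel", k, no_c))
```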

Notice that the terms involving c_m do not involve x, meaning that the c_m terms don't affect our decision, so we can cancel them out and forget they ever existed! This is only true because I've implicitly assumed that P(m) does not depend on our actions. If it did, like if we could go to the dentist or take some painkillers, then it would be P(m | x) and c_m would be relevant in the whole joint decision.

We can define the canonical utility basis g to be whatever we like (among positive linear transforms); for example, we can make it equal to f_sore so that we can at least keep the simple numbers from U_sore. Then we throw all the c_ms away, because they don't matter. Then it's just a matter of getting the remaining k_ms.

Ok, sorry, those last few paragraphs were rather abstract. Back to lunch. We just need to define these mysterious scaling constants and then we can order lunch. There is only one left: k_unsore. In general there will be n-1, where n is the size of M. I think the easiest way to approach this is to let k_unsore = 1/5 and see what that implies:

g.U(sore & soup) = 1 ; g.U(sore & bagel) = 0
g.U(~sore & soup) = 0 ; g.U(~sore & bagel) = 1/5
EU(soup) = P(sore)*1 = 0.3
EU(bagel) = P(~sore)*k_unsore = 0.7 * 1/5 = 0.14
EU(soup) > EU(bagel)

After all the arithmetic, it looks like if k_unsore = 1/5, even though we expect your teeth to be nonsore with probability 0.7, we are unsure enough and the relative importance is big enough that we should play it safe and go with the soup anyway. In general we would choose soup if P(~sore) < 1/(k_unsore+1), or equivalently, if k_unsore < (1-P(~sore))/P(~sore).
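
(A quick sanity check of that arithmetic and the threshold, using the same numbers:)

```python
# Sanity-checking the arithmetic above for k_unsore = 1/5 and P(sore) = 0.3.
P_sore = 0.3
P_unsore = 1 - P_sore
k_unsore = 1 / 5

EU_soup = P_sore * 1                 # = 0.3
EU_bagel = P_unsore * k_unsore       # = 0.14
assert EU_soup > EU_bagel            # soup wins

# The general condition: soup wins exactly when P(~sore) < 1/(k_unsore + 1),
# or equivalently when k_unsore < (1 - P(~sore)) / P(~sore) = 0.3/0.7 ≈ 0.43.
assert (EU_soup > EU_bagel) == (P_unsore < 1 / (k_unsore + 1))
assert (EU_soup > EU_bagel) == (k_unsore < (1 - P_unsore) / P_unsore)
```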

So k is somehow the relative importance of possible preference structures under uncertainty. A smaller k in this lunch example means that the tastiness of the bagel over the soup is small relative to the pain saved by eating the soup instead. With this intuition, we can see that 1/5 is a somewhat reasonable value for this scenario, and that, for example, 1 would not be, and neither would 1/20.

What if we are uncertain about k? Are we simply pushing the problem up some meta-chain? It turns out that no, we are not. Because k is linearly related to utility, you can simply use its expected value if it is uncertain.
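
To see why, here is a tiny check with a made-up distribution over k_unsore (half 0.1, half 0.3, so E[k] = 0.2): averaging EU over the possible k's gives exactly the same answer as plugging in E[k], because EU is linear in k.

```python
# A small numerical check that an uncertain k can be replaced by E[k].
# The distribution over k here (half 0.1, half 0.3) is purely made up.
P_unsore = 0.7
U_unsore_bagel = 1.0

k_values = [0.1, 0.3]
k_probs = [0.5, 0.5]
E_k = sum(p * k for p, k in zip(k_probs, k_values))   # = 0.2

# EU(bagel) computed two ways: averaging over the possible k's,
# or plugging in the expected k directly.
EU_avg = sum(p * (P_unsore * k * U_unsore_bagel) for p, k in zip(k_probs, k_values))
EU_with_Ek = P_unsore * E_k * U_unsore_bagel
assert abs(EU_avg - EU_with_Ek) < 1e-12
```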

It's kind of ugly to have these k_m's and these U_m's, so we can just reason over the product K x M instead of over M and K separately. This is nothing weird; it just means we have more utility function patches (many of which encode the exact same object-level preferences).

In the most general case, the utility function patches in K x M are the space of all functions O -> ℝ, with offset equivalence but not scale equivalence (sovereign utility functions have full positive-linear-transform equivalence, but these patches are only equivalent under offset). Remember, though, that these are just restricted patches of a single global utility function.

So what is the point of all this? Are we just playing in the VNM sandbox, or is this result actually interesting for anything besides sore teeth?

Perhaps Moral/Preference Uncertainty? I didn't mention it until now because it's easier to think about lunch than a philosophical minefield, but it is the point of this post. Sorry about that. Let's conclude with everything restated in terms of moral uncertainty.

TL;DR:

If we have:

  1. A set of object-level outcomes O,

  2. A set of "epiphenomenal" (outside of O) 'moral' outcomes M,

  3. A probability distribution over M, possibly correlated with our uncertainty about O, but not in a way that allows our actions to influence the uncertainty over M (that is, assuming moral facts cannot be changed by your actions),

  4. A utility function over O for each possible value of M (these can be arbitrary VNM-rational moral theories, as long as they share the same object-level outcome space O),

  5. And we wish to be VNM rational over whatever uncertainty we have

then we can quilt together a global utility function U: M x K x O -> ℝ, where U(m, k, o) = k*U_m(o), so that EU(o) is the sum over all m of P(m)*E[k | m]*U_m(o).
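
For concreteness, here is that quilting as a short Python sketch; the data structures are just my own framing, only the formula itself is from the summary above.

```python
# A sketch of the quilted expected utility from the summary:
#   EU(o) = sum over m of P(m) * E[k|m] * U_m(o)
def quilted_EU(o, P_m, E_k_given_m, U_patches):
    return sum(P_m[m] * E_k_given_m[m] * U_patches[m][o] for m in P_m)

# The lunch example again, in this notation:
P_m = {"sore": 0.3, "unsore": 0.7}
E_k = {"sore": 1.0, "unsore": 0.2}
U_patches = {"sore": {"soup": 1, "bagel": 0}, "unsore": {"soup": 0, "bagel": 1}}

best = max(["soup", "bagel"], key=lambda o: quilted_EU(o, P_m, E_k, U_patches))
print(best)   # -> soup
```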

Somehow this all seems like legal VNM.

Implications

So. Just the possible object-level preferences and a probability distribution over them are not enough to define our behaviour. We need to know the scale for each so we know how to act when uncertain. This is analogous to the switch from ordinal preferences to interval preferences when dealing with object-level uncertainty.

Now we have a well-defined framework for reasoning about preference uncertainty, if all our possible moral theories are VNM rational, moral facts are immutable, and we have a joint probability distribution over OxMxK.

In particular, updating your moral beliefs upon hearing new arguments is no longer a mysterious dynamic; it is just a Bayesian update over possible moral theories.

This requires a "moral prior" that correlates moral outcomes and their relative scales with the observable evidence. In the lunch example, we implicitly used such a moral prior to update on observable thought experiments and conclude that 1/5 was a plausible value for k_unsore.
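
The update itself is just ordinary Bayes over the candidate theories. As a toy illustration (with entirely made-up numbers, and no claim about what the prior or likelihoods should actually be):

```python
# A toy sketch (mine, not a serious proposal for a moral prior): updating a
# prior over two candidate moral theories on one piece of "moral evidence",
# e.g. a thought experiment that felt compelling. All numbers are made up.
prior = {"theory_A": 0.5, "theory_B": 0.5}
likelihood = {"theory_A": 0.8, "theory_B": 0.2}   # P(evidence | theory)

unnormalized = {t: prior[t] * likelihood[t] for t in prior}
Z = sum(unnormalized.values())
posterior = {t: unnormalized[t] / Z for t in unnormalized}
print(posterior)   # -> {'theory_A': 0.8, 'theory_B': 0.2}
```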

Moral evidence is probably things like preference thought-experiments, neuroscience and physics results, etc. The actual model for this, and discussion about the issues with defining and reasoning on such a prior are outside the scope of this post.

This whole argument couldn't prove its way out of a wet paper bag, and is merely suggestive. Bits and pieces may be found incorrect, and formalization might change things a bit.

This framework requires that we have already worked out the outcome-space O (which we haven't), have limited our moral confusion to a set of VNM-rational moral theories over O (which we haven't), and have defined a "moral prior" so we can have a probability distribution over moral theories and their weights (which we haven't).

Nonetheless, we can sometimes get those things in special limited cases, and even in the general case, having a model for moral uncertainty and updating is a huge step up from the terrifying confusion I (and everyone I've talked to) had before working this out.

Comments

I think I'm lost; please bring me back on track. Is the intention here to model the updating of moral beliefs after hearing an argument as a special case of updating beliefs concerning factual information about how the world works, rather than as a change to the core utility function? (Where moral/preference uncertainty is analogous to uncertainty about how sore your teeth are.)

[anonymous]:

Yes. That is exactly what I meant.

Thanks for the clarification (I'm confused, he answered my question, why all the downvotes?)

If I understand correctly, your model depicts agents with underlying morals/preferences who are also uncertain about what those preferences are. It seems to me that the degrees of freedom allotted by this model allow these agents to exhibit VNM-irrational behavior even if the underlying preferences are VNM-consistent. Do you agree?

If you agree: previously, you stated that you wouldn't consider an agent to have a utility function unless it 1) behaved VNM rationally or 2) had an explicit utility function. The agents you are describing here seem to meet neither criterion, yet have a utility function. Do you still stand by your previous post?

[anonymous]:

why all the downvotes?

That's confusing me as well.

your model depicts agents with underlying morals/preferences who are also uncertain about what those preferences are.

Yep.

It seems to me that the degrees of freedom allotted by this model allow these agents to exhibit VNM-irrational behavior even if the underlying preferences are VNM-consistent. Do you agree?

Not quite. The point was for the agent to be VNM rational in final actions. However, if you only looked at the behavior over the object-level outcome space, you would conclude VNM irrationality, but that's because the agent is playing a higher level game.

For example, you may observe a father playing chess with his daughter, and notice that halfway through the game he starts deliberately making bad moves so she can win faster. If you looked at only the chessboard, you would conclude that he was irrational: he played to win for a while, then threw the game. The revealed preferences are not consistent.

However, if you step up a level, you may notice that she mentioned during the game that she had homework to do, after which he threw the game so that she could get to her homework faster, and get a little boost in motivation from winning. When you look at this higher level that includes these concerns, he is acting in a rational way.

So the thing I did in OP was to formalize this concept of facts that are outside the game that affect how you want to play the game, and applied that to moral uncertainty.

From the outside, though, how could you tell the difference?

Rational agent (yes utility function)

1) I act a certain way

2) Irrelevant alternative appears, spurring thought processes that cause an introspective insight, leading me to update my beliefs about what my preferences are.

3) I act in another way which is inconsistent with [1]

Irrational Agent (No utility function)

1) I act a certain way

2) Irrelevant alternative appears.

3) I act in another way which is inconsistent with [1]

From the outside, how would you ever know if an agent is behaving rationally because of some entirely obscured update going on within [2], or because the agent is in fact irrational and/or does not have a utility function to begin with?

(And if you can't know from the outside, there isn't a difference, because utility functions are only meant as models for behavior, not descriptions of what goes on inside the black box)

[anonymous]:

Probably indistinguishable from the outside, except that because we are using bayes to update, theoretically we will get more and more accurate as we make big updates, such that big updates become less and less likely.

The point is we have a prescriptive way to make decisions in the presence of moral uncertainty that won't do anything stupid.

Come to think of it, I totally forgot to show that this satisfies all our intuitions regarding how moral updating ought to behave. No wonder no one cares. Maybe I should write that up.

Probably indistinguishable from the outside

In that case, I'm still confused. Maybe this question will help.

To restate: I just described agents which (behaviorally speaking) appeared to violate one of the VNM axioms. They still qualify as sort-of VNM rational to you, because the weird behavior was a result of an update to what the agent thought its utility function was.

To remove this funny "I don't know what my utility function is" business, let's split our agent into two: Agent R is bounded-rational, and its utility function is simply "do what Agent M wants". Agent M has a complex utility function of morality and teeth soreness, which is partially obscured from Agent R. Agent R makes evidence-based updates concerning both the outside world and M's utility function. (Functionally, this is the same as having one agent which is unsure of what its utility function is, but it seems easier to talk about.)

Am I still following you correctly?

So here is my question:

Are some humans contained in the outlined set of sort-of VNM compliant agents? And if not, what quality excludes them from the set?

[anonymous]:

To remove this funny "I don't know what my utility function is" business, let's split our agent into two: Agent R is bounded-rational, and its utility function is simply "do what Agent M wants". Agent M has a complex utility function of morality and teeth soreness, which is partially obscured from Agent R. Agent R makes evidence-based updates concerning both the outside world and M's utility function. (Functionally, this is the same as having one agent which is unsure of what its utility function is, but it seems easier to talk about.)

Am I still following you correctly?

Conceiving of it as two separate agents is a bit funny, but yeah, that's more or less the right model.

I think of it as "I know what my utility function is, but the utility of outcomes depends on some important moral facts that I don't know about."

Are some humans contained in the outlined set of sort-of VNM compliant agents? And if not, what quality excludes them from the set?

No. I assert in "we don't have a utility function" that we (all humans) do not have a utility function. Of course I could be wrong.

As I said, humans are excluded on both counts: acting in a sane and consistent way, and knowing what we even want.

Actually, in some sense, the question of whether X is a VNM agent is uninteresting. It's like the question of whether X is a heat engine. If you twist things around enough, even a rock could be a heat engine or a VNM agent with zero efficiency or a preference for accelerating in the direction of gravity.

The point of VNM, and of Thermodynamics, is as analysis tools for analyzing systems that we are designing. Everything is a heat engine, but some are more efficient/usable than others. Likewise with agents, everything is an agent, but some produce outcomes that we like and others do not.

So with applying VNM to humans the question is not whether we descriptively have utility functions or whatever; the question is if and how we can use a VNM analysis to make useful changes in behavior, or how we can build a system that produces valuable outcomes.

So the point of this moral uncertainty business is "oh look, if we conceive of moral uncertainty like this, we can provably meet these criteria and solve those problems in a coherent way".

OK, I think we're on the same page now.

A utility function is only a method of approximating an agent's behavior. If I wanted to make a precise description, I wouldn't bother "agent-izing" the object in the first place. The rock falls vs. the rock wants to fall is a meaningless distinction. In that sense, nothing "has a utility function", since utility functions aren't ontologically fundamental.

When I say "does X have a utility function", I mean "Is it useful and intuitive to predict the behavior of X by ascribing agency to it and using a utility function". So the real question is, do humans deviate from the model to such an extent that the model should not be used? It certainly doesn't seem like the model describes anything else better than it describes humans - although as AI improves that might change.

So even if I agree that humans don't technically "have a utility function" anymore than any other object, I would say that if anything on this planet is worth ascribing agency and using a utility function to describe, it's animals. So if humans and other animals don't have a utility function, who does?

[anonymous]:

So if humans and other animals don't have a utility function, who does?

No one yet. We're working on it.

So the real question is, do humans deviate from the model to such an extent that the model should not be used?

Yes. You will find it much more fruitful to predict most humans as causal systems (including yourself), and if you wanted to model human behavior with a utility function, you'd either have a lot of error, or a lot of trouble adding enough epicycles.

As I said though, VNM isn't useful descriptively; if you use it like that, it's tautological, and doesn't really tell you anything. Where it shines is in the design of agenty systems: "If we had these preferences, what would that imply about where we would steer the future?" (which worlds are ranked high), and "If we want to steer the future over there, what decision architecture do we need?"

That's confusing me as well.

I don't know either (i.e. it wasn't me), but perhaps "Yes. That is exactly what I meant." would work better with a quoted sentence that reveals the 'that' in question?

Presentation comment:

  1. Your math is nice and clear and I enjoyed reading this article with accessible technical detail.
  2. Your presentation of it could use a little more whitespace (around the composition dots (which could be · instead of .), particularly) and horizontal alignment (especially where something to the right of an = becomes to the left of it on the next line). Or actually use LaTeX for standard formatting; I'd be happy to help with the conversion.
[anonymous]:

Your math is nice and clear and I enjoyed reading this article with accessible technical detail.

Thank you

Your presentation of it could use a little more whitespace (around the composition dots (which could be · instead of .),

Right. Some of that is because the LW engine eats whitespace in certain elements for some reason.

Or actually use LaTeX for standard formatting; I'd be happy to help with the conversion.

This is probably what I should do. I'll take a look at your link and see what I can do. If I have questions, you volunteered.

[anonymous]:

Moved to discussion because I forgot to actually motivate the problem and show that this solved it.

I will write that up and re-post it some time.

I was tempted to downvote the post: it did not have a summary upfront, the examples were confusing, the notation unnecessarily obfuscated, the purpose unclear. Looking forward to something better.

This just feels really promising, although I can't say I've really followed it all (you've lost me a couple posts ago on the math, but that's my fault). I'm waiting eagerly for the re-post.

[anonymous]:

Sorry for the long delay. I'm actually polishing up the next version right at this very moment. Expect something soon.