Vladimir_Nesov comments on Morality as Parfitian-filtered Decision Theory? - Less Wrong

Post author: SilasBarta 30 August 2010 09:37PM




Comment author: Vladimir_Nesov 31 August 2010 12:03:03AM *  3 points [-]

It's a losing battle to describe humans as utility maximizers. Utility, as applied to people, is more useful in the normative sense, as a way to formulate one's wishes, allowing one to infer how one should act in order to follow them.

Comment author: Perplexed 31 August 2010 12:20:59AM 1 point [-]

Nevertheless, standard economic game theory frequently involves an assumption that it is common knowledge that all players are rational utility maximizers. And the reason it does so is the belief that on the really important decisions, people work extra hard to be rational.

For this reason, on the really important decisions, utility maximization probably is not too far wrong as a descriptive theory.

Comment author: wedrifid 31 August 2010 04:07:45AM 6 points [-]

Nevertheless, standard economic game theory frequently involves an assumption that it is common knowledge that all players are rational utility maximizers. And the reason it does so is the belief that on the really important decisions, people work extra hard to be rational.

The reason it does so is because it is convenient.

I don't entirely agree with pjeby. Being unable to adequately approximate human preferences with a single utility function is not a property of the 'real world'. It is a property of our rather significant limitations when it comes to making such evaluations. Nevertheless, having a textbook prescribe official status to certain mechanisms for deriving a utility function does not make that process at all reliable.

Comment author: Perplexed 31 August 2010 04:21:47AM 0 points [-]

... having a textbook prescribe official status to certain mechanisms for deriving a utility function does not make that process at all reliable.

I'll be sure to remember that line, for when the people promoting other models of rationality start citing textbooks too. Well, no, I probably won't, since I doubt I will live long enough to see that. ;)

But, if I recall correctly, I have mostly cited the standard textbook thought-experiments when responding to claims that utility maximization is conceptually incoherent - so absurd that no one in their right mind would propose it.

Comment author: wedrifid 31 August 2010 04:26:52AM *  4 points [-]

I'll be sure to remember that line, for when the people promoting other models of rationality start citing textbooks too. Well, no, I probably won't, since I doubt I will live long enough to see that. ;)

I see that you are trying to be snide, but it took a while to figure out why you would believe this to be incisive. I had to reconstruct a model of what you think other people here believe from your previous rants.

But, if I recall correctly, I have mostly cited the standard textbook thought-experiments when responding to claims that utility maximization is conceptually incoherent - so absurd that no one in their right mind would propose it.

Yes. That would be a crazy thing to believe. (Mind you, I don't think pjeby believes crazy things - he just isn't listening closely enough to what you are saying to notice anything other than a nail upon which to use one of his favourite hammers.)

Comment author: pjeby 31 August 2010 03:13:49AM 4 points [-]

For this reason, on the really important decisions, utility maximization probably is not too far wrong as a descriptive theory.

It seems to me that what has actually been shown is that when people think abstractly (i.e. "far") about these kinds of decisions, they attempt to calculate some sort of (local and extremely context-dependent) maximum utility.

However, when people actually act (using "near" thinking), they tend to do so based on the kind of perceptual filtering discussed in this thread.

What's more, even their "far" calculations tend to be biased and filtered by the same sort of perceptual filtering processes, even when they are (theoretically) calculating "utility" according to a contextually-chosen definition of utility. (What a person decides to weigh into a calculation of "best car" is going to vary from one day to the next, based on priming and other factors.)

In the very best case scenario for utility maximization, we aren't even all that motivated to go out and maximize utility: it's still more like playing, "pick the best perceived-available option", which is really not the same thing as operating to maximize utility (e.g. the number of paperclips in the world). Even the most paperclip-obsessed human being wouldn't be able to do a good job of intuiting the likely behavior of a true paperclip-maximizing agent -- even if said agent were of only-human intelligence.

standard economic game theory frequently involves an assumption that it is common knowledge that all players are rational utility maximizers.

For me, I'm not sure that "rational" and "utility maximizer" belong in the same sentence. ;-)

In simplified economic games (think: spherical cows on a frictionless plane), you can perhaps get away with such silliness, but instrumental rationality and fungible utility don't mix under real world conditions. You can't measure a human's perception of "utility" on just a single axis!

Comment author: Perplexed 31 August 2010 03:24:36AM 3 points [-]

For me, I'm not sure that "rational" and "utility maximizer" belong in the same sentence. ;-)

In simplified economic games (think: spherical cows on a frictionless plane), you can perhaps get away with such silliness, but instrumental rationality and fungible utility don't mix under real world conditions.

You have successfully communicated your scorn. You were much less successful at convincing anyone of your understanding of the facts.

You can't measure a human's perception of "utility" on just a single axis!

And you can't (consistently) make a decision without comparing the alternatives along a single axis. And there are dozens of textbooks with a chapter explaining in detail exactly how you go about doing it.

Comment author: pjeby 31 August 2010 03:38:42AM 1 point [-]

And you can't (consistently) make a decision without comparing the alternatives along a single axis.

And what makes you think humans are any good at making consistent decisions?

The experimental evidence clearly says we're not: frame a problem in two different ways, and you get two different answers. Give us larger dishes of food, and we eat more of it, even if we don't like the taste! Prime us with a number, and it changes what we'll say we're willing to pay for something utterly unrelated to that number.

Human beings are inconsistent by default.

And there are dozens of textbooks with a chapter explaining in detail exactly how you go about doing it.

Of course. But that's not how human beings generally make decisions, and there is experimental evidence that shows such linearized decision algorithms are abysmal at making people happy with their decisions! The more "rationally" you weigh a decision, the less likely you are to be happy with the results.

(Which is probably a factor in why smarter, more "rational" people are often less happy than their less-rational counterparts.)

In addition, other experiments show that people who make choices in "maximizer" style (people who are unwilling to choose until they are convinced they have the best choice) are consistently less satisfied than people who are satisficers for the same decision context.

Comment author: wedrifid 31 August 2010 04:23:23AM 6 points [-]

In addition, other experiments show that people who make choices in "maximizer" style (people who are unwilling to choose until they are convinced they have the best choice) are consistently less satisfied than people who are satisficers for the same decision context.

It seems there are criteria by which you are evaluating various strategies for making decisions. Assuming you are not merely trying to enforce your deontological whims upon your fellow humans, I can infer that there is some kind of rough utility function by which you are giving your advice and advocating decision-making mechanisms. While it is certainly not what we would find in Perplexed's textbooks, it is this function which can be appropriately described as a 'rational utility function'.

Of course. But that's not how human beings generally make decisions, and there is experimental evidence that shows such linearized decision algorithms are abysmal at making people happy with their decisions! The more "rationally" you weigh a decision, the less likely you are to be happy with the results.

I am glad that you included the scare quotes around 'rationally'. It is 'rational' to do what is going to get the best results. It is important to realise the difference between 'sucking at making linearized, Spock-like decisions' and 'good decisions being in principle uncomputable in a linearized manner'. If you can say that one decision sucks more than another one then you have criteria by which to sort them in a linearized manner.

Comment author: pjeby 31 August 2010 04:54:07PM 2 points [-]

If you can say that one decision sucks more than another one then you have criteria by which to sort them in a linearized manner.

Not at all. Even in pure computational systems, being able to compare two things is not the same as having a total ordering.

For example, in predicate dispatching, priority is based on logical implication relationships between conditions, but an arbitrary set of applicable conditions isn't guaranteed to have a total (i.e. linear) ordering.

What I'm saying is that human preferences generally express only a partial ordering, which means that mapping to a linearizable "utility" function necessarily loses information from that preference ordering.
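The gap between a partial and a total order is easy to make concrete. A minimal sketch (my own illustration with hypothetical scores, not anything from the thread): preferences as componentwise dominance over two value axes, where some pairs of options are simply incomparable, so any single-number "utility" ranking must discard that structure.

```python
# Sketch (hypothetical example): preferences as componentwise dominance
# over several value axes. Some pairs are incomparable, so no linear
# "utility" ranking can preserve the full relation.

def prefers(a, b):
    """Strict partial order: a is preferred to b only if a is at least
    as good on every axis and strictly better on at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

car_a = (9, 3)   # (comfort, fuel economy) -- made-up scores
car_b = (3, 9)
car_c = (2, 2)

assert prefers(car_a, car_c) and prefers(car_b, car_c)
# Neither dominates the other: the pair is incomparable, not equal.
assert not prefers(car_a, car_b) and not prefers(car_b, car_a)
```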

That's why building an AI that makes decisions on such a basis is a really, really Bad Idea. Why build that kind of information loss into your ground rules? It's insane.

Comment author: xamdam 31 August 2010 07:49:10PM 3 points [-]

If you can say that one decision sucks more than another one then you have criteria by which to sort them in a linearized manner.

Not at all. Even in pure computational systems, being able to compare two things is not the same as having a total ordering.

Am I correct thinking that you welcome money pumps?

Comment author: pjeby 01 September 2010 03:38:18PM 4 points [-]

Am I correct thinking that you welcome money pumps?

A partial order isn't the same thing as a cyclical ordering, and the existence of a money pump would certainly tend to disambiguate a human's preferences in its vicinity, thereby creating a total ordering within that local part of the preference graph. ;-)
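The distinction can be checked mechanically. A minimal sketch (my own, with hypothetical goods and penny-per-trade prices): a money pump exploits a preference *cycle*, which a cyclic "preference" relation supplies and a genuine strict partial order cannot, since strict partial orders are transitive and irreflexive.

```python
# Sketch: a money pump needs a preference cycle. An agent with merely
# incomplete (partial) preferences declines incomparable trades, so
# there is no cycle to pump.

# Cyclic "preferences": B over A, C over B, A over C (not a partial order).
cycle = {("B", "A"), ("C", "B"), ("A", "C")}
cyclic_prefers = lambda x, y: (x, y) in cycle

holding, money = "A", 0
for offer in ["B", "C", "A"]:        # each trade costs a penny
    if cyclic_prefers(offer, holding):
        holding, money = offer, money - 1

# Back to the original good, three pennies poorer: pumped.
assert holding == "A" and money == -3

# A strict partial order (proper subset on sets) has incomparable pairs
# but no cycles, so the same pump gets no traction.
a, b = frozenset("ab"), frozenset("bc")
assert not (a < b) and not (b < a)   # incomparable, yet not cyclic
```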

Comment author: saturn 02 September 2010 09:20:15PM 0 points [-]

Hypothetically, would it cause a problem if a human somehow disambiguated her entire preference graph?

Comment author: Perplexed 31 August 2010 05:05:32PM 2 points [-]

What I'm saying is that human preferences generally express only a partial ordering, which means that mapping to a linearizable "utility" function necessarily loses information from that preference ordering.

True enough.

That's why building an AI that makes decisions on such a basis is a really, really Bad Idea. Why build that kind of information loss into your ground rules? It's insane.

But the information loss is "just in time" - it doesn't take place until actually making a decision. The information about utilities that is "stored" is a mapping from states-of-the-world to ordinal utilities of each "result". That is, in effect, a partial order of result utilities. Result A is better than result B in some states of the world, but the preference is reversed in other states.

You don't convert that partial order into a total order until you form a weighted average of utilities using your subjective estimates of the state-of-the-world probability distribution. That takes place at the last possible moment - the moment when you have to make the decision.
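A minimal sketch of that "just in time" collapse (my own illustration; the options, states, and numbers are hypothetical): state-indexed utilities store only a partial order, and the expected-utility computation at decision time is what finally produces a single comparable number per option.

```python
# Hypothetical state-dependent utilities: each option beats the other
# in some state of the world, so the stored preference is only partial.
state_probs = {"rain": 0.3, "sun": 0.7}   # subjective estimates
utility = {
    "umbrella":   {"rain": 10, "sun": 4},
    "sunglasses": {"rain": 2,  "sun": 9},
}

def expected_utility(option):
    # The weighted average that collapses the partial order into a total
    # one -- computed only at the moment a decision is forced.
    return sum(p * utility[option][state] for state, p in state_probs.items())

best = max(utility, key=expected_utility)
assert best == "sunglasses"   # expected utility 6.9 vs 5.8
```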

Comment author: pjeby 31 August 2010 05:16:30PM 1 point [-]

You don't convert that partial order into a total order until you form a weighted average of utilities using your subjective estimates of the state-of-the-world probability distribution. That takes place at the last possible moment - the moment when you have to make the decision.

Go implement yourself a predicate dispatch system (not even an AI, just a simple rules system), and then come back and tell me how you will linearize a preference order between non-mutually exclusive, overlapping conditions. If you can do it in a non-arbitrary (i.e. noise-injecting) way, there's probably a computer science doctorate in it for you, if not a math Nobel.

If you can do that, I'll happily admit being wrong, and steal your algorithm for my predicate dispatch implementation.

(Note: predicate dispatch is like a super-baby-toy version of what an actual AI would need to be able to do, and something that human brains can do in hardware -- i.e., we automatically apply the most-specific matching rules for a given situation, and kick ambiguities and conflicts up to a higher-level for disambiguation and post-processing. Linearization, however, is not the same thing as disambiguation; it's just injecting noise into the selection process.)
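For readers unfamiliar with predicate dispatch, a toy sketch (my own simplification, not pjeby's implementation): conditions are ordered by logical implication, modeled here as one rule's required facts being a superset of another's; the most specific applicable rule wins, and genuinely incomparable conflicts are escalated rather than silently linearized.

```python
# Toy predicate dispatch: rules ordered by implication of conditions.
# A rule whose condition is a superset of another's is more specific.

rules = [
    ({"mammal"}, "warm-blooded"),
    ({"mammal", "aquatic"}, "whale-ish"),
    ({"aquatic"}, "swims"),
]

def dispatch(facts):
    applicable = [(cond, result) for cond, result in rules if cond <= facts]
    # Most specific = its condition implies (contains) every other
    # applicable condition.
    winners = [r for r in applicable
               if all(other[0] <= r[0] for other in applicable)]
    if len(winners) != 1:
        # No single most-specific rule: escalate, don't inject noise.
        raise LookupError("ambiguous: no single most-specific rule")
    return winners[0][1]

assert dispatch({"mammal", "aquatic"}) == "whale-ish"
assert dispatch({"mammal"}) == "warm-blooded"
```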

Comment author: Perplexed 31 August 2010 05:25:38PM 2 points [-]

I am impressed with your expertise. I just built a simple natural deduction theorem prover for my project in AI class. Used Lisp. Python didn't even exist back then. Nor Scheme. Prolog was just beginning to generate some interest. Way back in the dark ages.

But this is relevant ... how exactly? I am talking about choosing among alternatives after you have done all of your analysis of the expected results of the relevant decision alternatives. What are you talking about?

Comment author: timtyler 01 September 2010 12:42:40PM 0 points [-]

It doesn't literally lose information - since the information inputs are sensory, and they can be archived as well as ever.

The short answer is that human cognition is a mess. We don't want to reproduce all the screw-ups in an intelligent machine - and what you are talking about looks like one of the mistakes.

Comment author: pjeby 01 September 2010 03:36:34PM 2 points [-]

It doesn't literally lose information - since the information inputs are sensory, and they can be archived as well as ever.

It loses information about human values, replacing them with noise in regions where a human would need to "think things over" to know what they think... unless, as I said earlier, you simply build the entire human metacognitive architecture into your utility function, at which point you have reduced nothing, solved nothing, accomplished nothing, except to multiply the number of entities in your theory.

Comment author: wedrifid 31 August 2010 05:13:52PM -2 points [-]

That's why building an AI that makes decisions on such a basis is a really, really Bad Idea. Why build that kind of information loss into your ground rules? It's insane.

Perplexed answered this question well.

Comment author: Perplexed 31 August 2010 04:34:52AM 1 point [-]

And you can't (consistently) make a decision without comparing the alternatives along a single axis.

And what makes you think humans are any good at making consistent decisions?

Nothing makes me think that. I don't even care. That is the business of people like Tversky and Kahneman.

They can give us a nice descriptive theory of what idiots people really are. I am more interested in a nice normative theory of what geniuses people ought to be.

Comment author: pjeby 31 August 2010 05:05:30PM 0 points [-]

They can give us a nice descriptive theory of what idiots people really are. I am more interested in a nice normative theory of what geniuses people ought to be.

What you seem to have not noticed is that one key reason human preferences can be inconsistent is because they are represented in a more expressive formal system than a single utility value.

Or that conversely, the very fact that utility functions are linearizable means that they are inherently less expressive.

Now, I'm not saying "more expressiveness is always better", because, being human, I have the ability to value things non-fungibly. ;-)

However, in any context where we wish to be able to mathematically represent human preferences -- and where lives are on the line by doing so -- we would be throwing away important, valuable information by pretending we can map a partial ordering to a total ordering.

That's why I consider the "economic games assumption" to be a spherical cow assumption. It works nicely enough for toy problems, but not for real-world ones.

Heck, I'll go so far as to suggest that unless one has done programming or mathematics work involving partial orderings, that one is unlikely to really understand just how non-linearizable the world is. (Though I imagine there may be other domains where one might encounter similar experiences.)

Comment author: Perplexed 31 August 2010 05:16:57PM 1 point [-]

Heck, I'll go so far as to suggest that unless one has done programming or mathematics work involving partial orderings, that one is unlikely to really understand just how non-linearizable the world is. (Though I imagine there may be other domains where one might encounter similar experiences.)

Programming and math are definitely the fields where most of my experience with partial orders comes from. Particularly domain theory and denotational semantics. Complete partial orders and all that. But the concepts also show up in economics textbooks. The whole concept of Pareto optimality is based on partial orders. As is demand theory in micro-economics. Indifference curves.
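Pareto dominance is indeed a textbook partial order. A minimal sketch with hypothetical two-person payoff allocations: the Pareto-optimal set contains mutually incomparable points, which is exactly why it is a *front* rather than a single winner.

```python
# Sketch: Pareto dominance over hypothetical (person_1, person_2) payoffs.
def dominates(a, b):
    # a dominates b: at least as good for everyone, better for someone.
    return all(x >= y for x, y in zip(a, b)) and a != b

allocations = [(3, 1), (1, 3), (2, 2), (1, 1)]
pareto_front = [a for a in allocations
                if not any(dominates(b, a) for b in allocations)]

# Three mutually incomparable optima survive; only (1, 1) is dominated.
assert set(pareto_front) == {(3, 1), (1, 3), (2, 2)}
```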

Theorists are not as ignorant or mathematically naive as you seem to imagine.

Comment author: timtyler 31 August 2010 07:37:33PM *  -2 points [-]

the very fact that utility functions are linearizable means that they are inherently less expressive.

You are talking about the independence axiom...?

You can just drop that, you know:

"Of all the axioms, independence is the most often discarded. A variety of generalized expected utility theories have arisen, most of which drop or relax the independence axiom."

Comment author: pjeby 31 August 2010 08:29:01PM -1 points [-]

You are talking about the independence axiom...?

As far as I can tell from the discussion you linked, those axioms are based on an assumption that value is fungible. (In other words, they're begging the question, relative to this discussion.)

Comment author: timtyler 31 August 2010 08:42:38PM *  0 points [-]

The basis of using utilities is that you can consider an agent's possible actions, assign real-valued utilities to them, and then choose the one with the most utility. If you can use a utility function built from a partially-recursive language, then you can always do that - provided that your decision process is computable in the first place. That's a pretty general framework - about the only assumption that can be argued with is its quantising of spacetime.

The von Neumann-Morgenstern axioms layer on top of that basic idea. The independence axiom is the one about combining utilities by adding them up. I would say it is the one most closely associated with fungibility.