pjeby comments on Morality as Parfitian-filtered Decision Theory? - Less Wrong

Post author: SilasBarta 30 August 2010 09:37PM


Comment author: pjeby 31 August 2010 05:05:30PM 0 points [-]

They can give us a nice descriptive theory of what idiots people really are. I am more interested in a nice normative theory of what geniuses people ought to be.

What you seem not to have noticed is that one key reason human preferences can be inconsistent is that they are represented in a more expressive formal system than a single utility value.

Or, conversely, that the very fact that utility functions are linearizable means they are inherently less expressive.

Now, I'm not saying "more expressiveness is always better", because, being human, I have the ability to value things non-fungibly. ;-)

However, in any context where we wish to mathematically represent human preferences -- and where lives are on the line when we do so -- we would be throwing away important, valuable information by pretending we can map a partial ordering to a total ordering.

That's why I consider the "economic games assumption" to be a spherical cow assumption. It works nicely enough for toy problems, but not for real-world ones.

Heck, I'll go so far as to suggest that unless one has done programming or mathematics work involving partial orderings, one is unlikely to really understand just how non-linearizable the world is. (Though I imagine there may be other domains where one might encounter similar experiences.)
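The distinction being drawn here can be sketched in a few lines of Python (a minimal illustration, with hypothetical two-dimensional outcomes and arbitrary weights): under a partial order such as coordinatewise dominance, some pairs of outcomes are simply incomparable, while any real-valued utility function is forced to rank them anyway.

```python
from typing import Optional, Tuple

def pareto_compare(a: Tuple[float, ...], b: Tuple[float, ...]) -> Optional[int]:
    """Partial-order comparison: return -1, 0, or 1 when the outcomes are
    comparable, and None when they are incomparable."""
    if a == b:
        return 0
    if all(x >= y for x, y in zip(a, b)):
        return 1   # a is at least as good on every dimension
    if all(x <= y for x, y in zip(a, b)):
        return -1  # b is at least as good on every dimension
    return None    # mixed: better on some dimensions, worse on others

# Two hypothetical outcomes scored on (health, wealth):
a, b = (3.0, 1.0), (1.0, 3.0)
print(pareto_compare(a, b))  # None: genuinely incomparable

# A scalar utility (here an arbitrary weighted sum) must rank them anyway,
# erasing the information that neither dominates the other.
utility = lambda o: 0.5 * o[0] + 0.5 * o[1]
print(utility(a), utility(b))
```

The tie (or ranking) produced by the scalarization is an artifact of the chosen weights, which is one way of reading the "throwing away information" point above.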

Comment author: Perplexed 31 August 2010 05:16:57PM 1 point [-]

Heck, I'll go so far as to suggest that unless one has done programming or mathematics work involving partial orderings, one is unlikely to really understand just how non-linearizable the world is. (Though I imagine there may be other domains where one might encounter similar experiences.)

Programming and math are definitely the fields where most of my experience with partial orders comes from. Particularly domain theory and denotational semantics. Complete partial orders and all that. But the concepts also show up in economics textbooks. The whole concept of Pareto optimality is based on partial orders. As is demand theory in micro-economics. Indifference curves.
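The Pareto-optimality example mentioned above is easy to make concrete (a small sketch with made-up allocations): the Pareto frontier is exactly the set of maximal elements of the dominance partial order.

```python
from typing import List, Tuple

def pareto_frontier(outcomes: List[Tuple[float, ...]]) -> List[Tuple[float, ...]]:
    """Return the outcomes not dominated by any other outcome,
    where 'dominates' means at least as good on every axis and not equal."""
    def dominates(p, q):
        return all(x >= y for x, y in zip(p, q)) and p != q
    return [o for o in outcomes if not any(dominates(p, o) for p in outcomes)]

# Hypothetical allocations of two goods between two parties:
allocations = [(3.0, 1.0), (1.0, 3.0), (2.0, 2.0), (1.0, 1.0)]
print(pareto_frontier(allocations))  # (1.0, 1.0) is dominated; the rest are maximal
```

Note that the frontier typically contains several mutually incomparable points, which is precisely why Pareto optimality needs only a partial order, not a utility scale.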

Theorists are not as ignorant or mathematically naive as you seem to imagine.

Comment author: timtyler 31 August 2010 07:37:33PM *  -2 points [-]

the very fact that utility functions are linearizable means that they are inherently less expressive.

You are talking about the independence axiom...?

You can just drop that, you know:

"Of all the axioms, independence is the most often discarded. A variety of generalized expected utility theories have arisen, most of which drop or relax the independence axiom."

Comment author: pjeby 31 August 2010 08:29:01PM -1 points [-]

You are talking about the independence axiom...?

As far as I can tell from the discussion you linked, those axioms are based on an assumption that value is fungible. (In other words, they're begging the question, relative to this discussion.)

Comment author: timtyler 31 August 2010 08:42:38PM *  0 points [-]

The basis of using utilities is that you can consider an agent's possible actions, assign real-valued utilities to them, and then choose the one with the most utility. If you can use a utility function built from a partial-recursive language, then you can always do that - provided that your decision process is computable in the first place. That's a pretty general framework - about the only assumption that can be argued with is its quantising of spacetime.

The von Neumann-Morgenstern axioms layer on top of that basic idea. The independence axiom is the one about combining utilities by adding them up. I would say it is the one most closely associated with fungibility.

Comment author: pjeby 31 August 2010 09:28:51PM 2 points [-]

The basis of using utilities is that you can consider an agent's possible actions, assign real-valued utilities to them, and then choose the one with the most utility. If you can use a utility function built from a partial-recursive language, then you can always do that - provided that your decision process is computable in the first place.

And that is not what humans do (although we can of course lamely attempt to mimic that approach by trying to turn off all our parallel processing and pretending to be a cheap sequential computer instead).

Humans don't compute utility, then make a decision. Heck, we don't even "make decisions" unless there's some kind of ambiguity, at which point we do the rough equivalent of making up a new utility function, specifically to resolve the conflict that forced us to pay conscious attention in the first place!

This is a major (if not the major) "impedance mismatch" between linear "rationality" and actual human values. Our own thought processes are so thoroughly and utterly steeped in context-dependence that it's really hard to see just how alien the behavior of an intelligence based on a consistent, context-independent utility would be.

Comment author: timtyler 31 August 2010 09:36:07PM 0 points [-]

The basis of using utilities is that you can consider an agent's possible actions, assign real-valued utilities to them, and then choose the one with the most utility. If you can use a utility function built from a partial-recursive language, then you can always do that - provided that your decision process is computable in the first place.

And that is not what humans do (although we can of course lamely attempt to mimic that approach by trying to turn off all our parallel processing and pretending to be a cheap sequential computer instead).

There's nothing serial about utility maximisation!

...and it really doesn't matter how the human works inside. That type of general framework can model the behaviour of any computable agent.
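The claim that the framework can model any computable agent rests on a well-known degenerate construction (sketched below with a hypothetical policy): assign utility 1 to whatever the agent would in fact do, and 0 to everything else, so that maximizing utility trivially reproduces the agent's behaviour.

```python
def utility_from_policy(policy):
    """The trivial construction: utility 1 for the action the agent's policy
    picks in a given state, 0 for every other action."""
    def utility(state, action):
        return 1.0 if action == policy(state) else 0.0
    return utility

# Hypothetical agent that always picks the shortest option name
# (ties broken by list order):
policy = lambda options: min(options, key=len)
u = utility_from_policy(policy)

state = ["walk", "bike", "drive"]
print(max(state, key=lambda a: u(state, a)))  # maximizing u reproduces the policy
```

Of course, such a utility function has no predictive or explanatory content, which is arguably what the disagreement in this thread is really about.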

Comment author: pjeby 31 August 2010 09:46:18PM 3 points [-]

There's nothing serial about utility maximisation!

I didn't say there was. I said that humans needed to switch to slow serial processing in order to do it, because our brains aren't set up to do it in parallel.

...and it really doesn't matter how the human works inside. That type of general framework can model the behaviour of any computable agent.

Great! So you can show me how to use a utility function to model being indecisive or uncertain, then? ;-)

Comment author: timtyler 31 August 2010 09:55:18PM 0 points [-]

There's nothing serial about utility maximisation!

I didn't say there was. I said that humans needed to switch to slow serial processing in order to do it, because our brains aren't set up to do it in parallel.

I think this indicates something about where the problem lies. You are apparently imagining an agent consciously calculating utilities. That idea has nothing to do with the idea that utility framework proponents are talking about.

Comment author: pjeby 31 August 2010 10:06:11PM 2 points [-]

You are apparently imagining an agent consciously calculating utilities.

No, I said that's what a human would have to do in order to actually calculate utilities, since we don't have utility-calculating hardware.

Comment author: timtyler 31 August 2010 10:09:28PM 0 points [-]

Ah - OK, then.

Comment author: wnoise 31 August 2010 10:00:35PM 0 points [-]

When humans don't consciously calculate, the actions they take are much harder to fit into a utility-maximizing framework, what with inconsistencies cropping up everywhere.
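One concrete inconsistency of the kind mentioned above is a preference cycle, which no real-valued utility function can represent; a brute-force check over all rankings of three items (a small illustrative sketch):

```python
from itertools import permutations

def representable(prefers, items=("A", "B", "C")):
    """A utility function exists iff some strict ranking of the items
    agrees with every observed pairwise choice in `prefers`."""
    for ranking in permutations(items):
        u = {item: -i for i, item in enumerate(ranking)}  # earlier = higher utility
        if all(u[a] > u[b] for a, b in prefers):
            return True
    return False

# Pairwise choices forming a cycle: A over B, B over C, C over A.
cycle = {("A", "B"), ("B", "C"), ("C", "A")}
print(representable(cycle))  # False: no real-valued utility fits a cycle
```

Replacing the cyclic choice ("C", "A") with ("A", "C") makes the preferences transitive, and the check succeeds.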

Comment author: timtyler 31 August 2010 10:06:34PM *  0 points [-]

It depends on the utility-maximizing framework you are talking about - some are more general than others - and some are really very general.

Comment author: FAWS 31 August 2010 09:55:08PM 0 points [-]

Great! So you can show me how to use a utility function to model being indecisive or uncertain, then? ;-)

A negative term for having made what later turns out to have been a wrong decision, perhaps proportional to the importance of the decision, combined with choices that are otherwise close to each other in expected utility but carry a large potential difference in actually realized utility.
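One way to read this suggestion as code (a speculative sketch; the regret weighting and the lottery numbers are invented for illustration): score each option by its expected utility minus a penalty proportional to how far its realized outcome can fall short of the best achievable outcome.

```python
def regret_adjusted_utility(lotteries, regret_weight=0.5):
    """lotteries: {name: [(probability, realized_utility), ...]}.
    Subtract an expected-regret term, measured against the best outcome
    realizable across all options, from each option's expected utility."""
    best_possible = max(u for lottery in lotteries.values() for _, u in lottery)
    scores = {}
    for name, lottery in lotteries.items():
        eu = sum(p * u for p, u in lottery)
        expected_regret = sum(p * (best_possible - u) for p, u in lottery)
        scores[name] = eu - regret_weight * expected_regret
    return scores

# Two hypothetical options nearly tied in expected utility, one far riskier:
lotteries = {
    "safe":  [(1.0, 10.0)],                # EU = 10.0, no spread
    "risky": [(0.5, 21.0), (0.5, 0.0)],    # EU = 10.5, can realize 0
}
print(regret_adjusted_utility(lotteries))  # safe: 4.5, risky: 5.25
```

The near-tie between the adjusted scores is arguably the point: options close in score, with large differences in realized outcomes, are where indecision would be modelled.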