timtyler comments on Morality as Parfitian-filtered Decision Theory? - Less Wrong

Post author: SilasBarta 30 August 2010 09:37PM




Comment author: Perplexed 30 August 2010 11:47:23PM 2 points

Humans don't operate by maximizing utility, for any definition of "utility" that isn't hideously tortured.

Actually, the definition of "utility" is pretty simple: it is "that thing that gets maximized in any particular person's decision making". Perhaps you think that humans do not maximize utility because you have a preferred definition of utility that differs from this one.

Mostly, we simply act in ways that keep the expected value of relevant perceptual variables (such as our own feelings) within our personally-defined tolerances.

Ok, that is a plausible-sounding alternative to the idea of maximizing something. But the maximizing theory has been under scrutiny for 150 years, and under strong scrutiny for the past 50. It only seems fair to give your idea some scrutiny too. Two questions jump out at me:

  • What decision is made when multiple choices all leave the variables within tolerance?
  • What decision is made when none of the available choices leave the variables within tolerance?

Looking forward to hearing your answer on these points. If we can turn your idea into a consistent and plausible theory of human decision making, I'm sure we can publish it.
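The two questions above can be made concrete with a minimal satisficing sketch. The perceptual variables, tolerance bands, and options below are hypothetical, invented purely to illustrate where the pure tolerance-keeping rule is silent:

```python
# A minimal satisficing chooser, illustrating the two open questions above.
# All variable names, tolerance bands, and options here are hypothetical.

def within_tolerance(feelings, tolerances):
    """True if every perceptual variable is inside its (lo, hi) band."""
    return all(lo <= feelings[k] <= hi for k, (lo, hi) in tolerances.items())

def satisfice(options, tolerances):
    """Return every option whose predicted feelings are all in tolerance."""
    return [name for name, feelings in options.items()
            if within_tolerance(feelings, tolerances)]

tolerances = {"hunger": (0, 5), "fatigue": (0, 6)}

# Question 1: several options pass -- the rule doesn't say which to pick.
options = {"apple": {"hunger": 2, "fatigue": 1},
           "orange": {"hunger": 3, "fatigue": 1}}
print(satisfice(options, tolerances))   # ['apple', 'orange'] -- a tie

# Question 2: no option passes -- the rule is silent again.
options = {"run": {"hunger": 7, "fatigue": 9},
           "walk": {"hunger": 6, "fatigue": 8}}
print(satisfice(options, tolerances))   # [] -- no admissible choice
```

In both edge cases the rule returns something other than a single choice, which is exactly the underdetermination the two questions point at.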

Comment author: timtyler 31 August 2010 09:42:56AM 1 point

Mostly, we simply act in ways that keep the expected value of relevant perceptual variables (such as our own feelings) within our personally-defined tolerances.

Ok, that is a plausible-sounding alternative to the idea of maximizing something.

It looks as though it can be rearranged into a utility-maximization representation pretty easily. Set utility equal to minus the extent to which the "personally-defined tolerances" are exceeded. Presto!
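As a sketch of that rearrangement (all names, tolerance bands, and options are hypothetical, not taken from the thread):

```python
# Rewriting a tolerance-keeping rule as utility maximization:
# utility = minus the total amount by which tolerances are exceeded.
# All variable names and numbers here are illustrative assumptions.

def exceedance(value, lo, hi):
    """How far a perceptual variable falls outside its tolerance band."""
    return max(0, lo - value, value - hi)

def utility(feelings, tolerances):
    """Higher is better; 0 means everything is within tolerance."""
    return -sum(exceedance(feelings[k], lo, hi)
                for k, (lo, hi) in tolerances.items())

def choose(options, tolerances):
    """Pick the option with maximal utility (least total exceedance)."""
    return max(options, key=lambda name: utility(options[name], tolerances))

tolerances = {"hunger": (0, 5), "fatigue": (0, 6)}
options = {"run": {"hunger": 7, "fatigue": 9},
           "walk": {"hunger": 6, "fatigue": 8}}
print(choose(options, tolerances))  # 'walk' -- the smallest total exceedance
```

Note that this representation also gives answers where the bare tolerance rule was silent: options all within tolerance tie at utility 0 (with `max` breaking the tie arbitrarily), and when nothing is within tolerance the least-bad option wins. Whether those answers match actual human behavior is, of course, the point under dispute below.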

Comment author: pjeby 31 August 2010 03:45:52PM 2 points

It looks as though it can be rearranged into a utility-maximization representation pretty easily. Set utility equal to minus the extent to which the "personally-defined tolerances" are exceeded. Presto!

Not quite - this would imply that tolerance-difference is fungible, and it's not. We can make trade-offs in our decision-making, but that requires conscious effort and it's a process more akin to barter than to money-trading.

Comment author: timtyler 31 August 2010 07:34:49PM 0 points

Diamonds are not fungible - and yet they have prices. Same difference here, I figure.

Comment author: pjeby 31 August 2010 08:30:32PM 2 points

Diamonds are not fungible - and yet they have prices.

What's the price of one red paperclip? Is it the same price as a house?

Comment author: timtyler 31 August 2010 08:48:30PM 0 points

That seems to be of questionable relevance - since utilities in decision theory are all inside a single agent. Different agents having different values is not an issue in such contexts.

Comment author: pjeby 31 August 2010 09:15:10PM 1 point

utilities in decision theory are all inside a single agent

That's a big part of the problem right there: humans aren't "single agents" in this sense.

Comment author: timtyler 31 August 2010 09:51:11PM 0 points

Humans are single agents in a number of senses - and are individual enough for the idea of revealed preference to be useful.

Comment author: pjeby 31 August 2010 10:04:15PM 1 point

From the page you linked (emphasis added):

In the real world, when it is observed that a consumer purchased an orange, it is impossible to say what good or set of goods or behavioral options were discarded in preference of purchasing an orange. In this sense, preference is not revealed at all in the sense of ordinal utility.

However, even if you ignore that, WARP is trivially proven false by actual human behavior: people demonstrably do sometimes choose differently based on context. That's what makes ordinal utilities a "spherical cow" abstraction.

(WARP's inapplicability to real (non-spherical) humans, in one sentence: "I feel like having an apple today, instead of an orange." QED: humans are not "economic agents" under WARP, since they don't consistently choose A over B in environments where both A and B are available.)
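The context-dependence claim can be made concrete with a small consistency check. The observations below are invented to mirror the apple/orange example, not real choice data:

```python
# WARP (weak axiom of revealed preference): if A is ever chosen while B
# is available, then B must never be chosen while A is available.
# The observations below are hypothetical, mirroring the apple/orange case.

def warp_violations(observations):
    """observations: list of (chosen, available-set) pairs.
    Returns the pairs of choices that jointly violate WARP."""
    violations = []
    for chosen1, avail1 in observations:
        for chosen2, avail2 in observations:
            # A violation: each option was chosen while the other was available.
            if (chosen1 != chosen2
                    and chosen2 in avail1 and chosen1 in avail2):
                violations.append((chosen1, chosen2))
    return violations

# Same menu on two days, different choice: a direct WARP violation.
observations = [
    ("apple",  {"apple", "orange"}),   # Monday: feel like an apple
    ("orange", {"apple", "orange"}),   # Tuesday: feel like an orange
]
print(warp_violations(observations))
# [('apple', 'orange'), ('orange', 'apple')]
```

A strict reading of WARP flags this everyday pattern as inconsistent; defenders of the axiom would instead enrich the description of the options (e.g. "apple-on-Monday" vs. "apple-on-Tuesday"), which is one version of the "hideously tortured" definitions mentioned at the top of the thread.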

Comment author: timtyler 31 August 2010 10:16:02PM 0 points

However, even if you ignore that, WARP is trivially proven false by actual human behavior: people demonstrably do sometimes choose differently based on context. That's what makes ordinal utilities a "spherical cow" abstraction.

The first sentence is true - but the second sentence doesn't follow from it logically - or in any other way I can see.

It is true that there are some problems with modelling humans as von Neumann–Morgenstern agents - but that's no reason to throw out the concept of utility, which is much more fundamental and useful than the vNM axioms themselves.