timtyler comments on Morality as Parfitian-filtered Decision Theory? - Less Wrong

Post author: SilasBarta 30 August 2010 09:37PM


Comment author: timtyler 31 August 2010 07:34:49PM 0 points

Diamonds are not fungible - and yet they have prices. Same difference here, I figure.

Comment author: pjeby 31 August 2010 08:30:32PM *  2 points

Diamonds are not fungible - and yet they have prices.

What's the price of one red paperclip? Is it the same price as a house?

Comment author: timtyler 31 August 2010 08:48:30PM *  0 points

That seems to be of questionable relevance - since utilities in decision theory are all inside a single agent. Different agents having different values is not an issue in such contexts.

Comment author: pjeby 31 August 2010 09:15:10PM 1 point

utilities in decision theory are all inside a single agent

That's a big part of the problem right there: humans aren't "single agents" in this sense.

Comment author: timtyler 31 August 2010 09:51:11PM 0 points

Humans are single agents in a number of senses - and are individual enough for the idea of revealed preference to be useful.

Comment author: pjeby 31 August 2010 10:04:15PM 1 point

From the page you linked (emphasis added):

In the real world, when it is observed that a consumer purchased an orange, it is impossible to say what good or set of goods or behavioral options were discarded in preference of purchasing an orange. In this sense, preference is not revealed at all in the sense of ordinal utility.

However, even if you ignore that, WARP is trivially proven false by actual human behavior: people demonstrably do sometimes choose differently based on context. That's what makes ordinal utilities a "spherical cow" abstraction.

(WARP's inapplicability to real (non-spherical) humans, in one sentence: "I feel like having an apple today, instead of an orange." QED: humans are not "economic agents" under WARP, since they don't consistently choose A over B in environments where both A and B are available.)
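The context-dependence being argued over here is easy to make concrete. The following is an illustrative sketch only (the function name and toy choice data are mine, not from the thread): it records which options were "revealed preferred" to which others across a set of observed choices, then checks whether any preference is reversed, which is exactly the kind of WARP violation pjeby's apple/orange example describes.

```python
def violates_warp(observations):
    """Check a list of (chosen, menu) observations against the Weak
    Axiom of Revealed Preference. Each `menu` is the set of options
    available when `chosen` was picked. WARP fails if some x is
    chosen while y is available in one observation, but y is chosen
    while x is available in another."""
    revealed = set()  # pairs (x, y): x was chosen while y was on the menu
    for chosen, menu in observations:
        for other in menu:
            if other != chosen:
                revealed.add((chosen, other))
    # A violation is any revealed preference that also appears reversed.
    return any((y, x) in revealed for (x, y) in revealed)


# A context-dependent chooser: apple over orange one day,
# orange over apple the next -- same menu, different choice.
history = [
    ("apple", {"apple", "orange"}),
    ("orange", {"apple", "orange"}),
]
print(violates_warp(history))  # True: this agent fails WARP
```

An agent that always picked the apple would pass the check; the point of the example is that ordinary day-to-day human behavior does not.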

Comment author: timtyler 31 August 2010 10:16:02PM 0 points

However, even if you ignore that, WARP is trivially proven false by actual human behavior: people demonstrably do sometimes choose differently based on context. That's what makes ordinal utilities a "spherical cow" abstraction.

The first sentence is true - but the second sentence doesn't follow from it logically - or in any other way I can see.

It is true that there are some problems modelling humans as von Neumann–Morgenstern agents - but that's no reason to throw out the concept of utility. Utility is a much more fundamental and useful concept.

Comment author: pjeby 31 August 2010 10:22:05PM 2 points

The first sentence is true - but the second sentence doesn't follow from it logically - or in any other way I can see

WARP can't be used to predict a human's behavior in even the most trivial real situations. That makes it a "spherical cow" because it's a simplifying assumption adopted to make the math easier, at the cost of predictive accuracy.

It is true that there are some problems modelling humans as von Neumann–Morgenstern agents - but that's no reason to throw out the concept of utility.

That sounds to me uncannily similar to, "it is true that there are some problems modeling celestial movement using crystal spheres -- but that's no reason to throw out the concept of celestial bodies moving in perfect circles."

Comment author: timtyler 31 August 2010 10:26:14PM *  0 points

That sounds to me uncannily similar to [...]

There is an obvious surface similarity - but so what? You constructed the sentence that way deliberately. You would need to make an analogy for arguing like that to have any force - and the required analogy looks like a bad one to me.

Comment author: pjeby 31 August 2010 10:40:36PM 2 points

You would need to make an analogy for arguing like that to have any force - and the required analogy looks like a bad one to me.

How so? I'm pointing out that the only actual intelligent agents we know of don't actually work like economic agents on the inside. That seems like a very strong analogy to Newtonian gravity vs. "crystal spheres".

Economic agency/utility models may have the Platonic purity of crystal spheres, but:

  1. We know for a fact they're not what actually happens in reality, and

  2. They have to be tortured considerably to make them "predict" what happens in reality.

Comment author: timtyler 31 August 2010 10:24:49PM 0 points

WARP can't be used to predict a human's behavior in even the most trivial real situations. That makes it a "spherical cow"

Sure - but what you claimed was a "spherical cow" was "ordinal utilities", which is a totally different concept.

Comment author: pjeby 31 August 2010 10:36:37PM 0 points

Sure - but what you claimed was a "spherical cow" was "ordinal utilities", which is a totally different concept.

It was you who brought the revealed preferences into it, in order to claim that humans were close enough to spherical cows. I merely pointed out that revealed preferences in even their weakest form are just another spherical cow, and thus don't constitute evidence for the usefulness of ordinal utility.