Squark comments on On not diversifying charity - Less Wrong

Post author: DanielLC 14 March 2014 05:14AM


Comment author: Squark 14 March 2014 09:39:28PM 0 points

Can you give an example of situations A, B, C for which your preferences are A > B, B > C, C > A? What would you do if you needed to choose among A, B, and C?

Comment author: asr 15 March 2014 04:02:06PM 0 points

Sure. I'll go to the grocery store, find three kinds of tomato sauce, look at A and B and pick B, then B and C and pick C, then C and A and pick A. And I'll stare at them indecisively until my preferences shift. It's sort of ridiculous -- it can take something like a minute to decide. This is NOT the same as feeling indifferent, in which case I would just pick one and go.

I have similar experiences when choosing between entertainment options, transport, etc. My impression is that this is an experience that many people have.

If you google "intransitive preference" you get a bunch of references -- this one has citations to the original experiments: http://www.stanford.edu/class/symbsys170/Preference.pdf
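The classic argument that a cycle like A > B, B > C, C > A is costly is the "money pump": an agent who will pay a small fee for each preferred swap can be walked around the cycle and back to its starting point, minus the fees. A minimal sketch (the sauces, fee, and starting money are illustrative, not from the thread):

```python
# Cyclic pairwise preferences: B over A, C over B, A over C.
prefers = {("B", "A"), ("C", "B"), ("A", "C")}
FEE = 0.01  # what the agent will pay for each preferred swap

def trade(holding, offer, money):
    """Swap (and pay the fee) whenever the offer is preferred to the holding."""
    if (offer, holding) in prefers:
        return offer, money - FEE
    return holding, money

holding, money = "A", 1.00
for offer in ["B", "C", "A"]:  # one trip around the cycle
    holding, money = trade(holding, offer, money)

print(holding, round(money, 2))  # back to sauce A, three fees poorer: A 0.97
```

Every swap looks like an improvement locally, yet the agent ends up exactly where it started with less money, which is why intransitive preferences are usually treated as a defect rather than a genuine ordering.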

Comment author: Squark 15 March 2014 04:48:48PM 0 points

It seems to me that what you're describing are not preferences but spur-of-the-moment decisions. A preference should be thought of as in CEV: the thing you would prefer if you thought about it long enough, knew enough, were more the person you want to be, etc. The mere fact that you somehow decide between the sauces in the end suggests you're not describing a preference. Also, I doubt that you have terminal values related to tomato sauce. More likely, your terminal values involve something like "experiencing pleasure", and your problem here is epistemic rather than "moral": you're not sure which sauce would give you more pleasure.

Comment author: asr 15 March 2014 11:06:10PM 1 point

You are using "preference" to mean something other than what I thought you meant.

I'm not convinced that the CEV definition of preference is useful. No actual human ever has infinite time or information; we are always making decisions under computational and informational limits. You can't just define away those limits. And I'm not at all convinced that our preferences would converge even given infinite time. That's an assumption, not a theorem.

When buying pasta sauce, I have multiple incommensurable values: money, health, and taste. And in general, when you have multiple criteria, there's no non-paradoxical way to combine them into a single ranking (this is basically Arrow's theorem). I suspect that's the cause of my lack of a preference ordering.

Comment author: Squark 16 March 2014 08:06:14AM 0 points

No actual human ever has infinite time or information

Of course. But rationality means your decisions should be as close as possible to the decisions you would make if you had infinite time and information.

When buying pasta sauce, I have multiple incommensurable values: money, health, and taste.

Money is not a terminal value for most people: I suspect you want money for the things it can buy you, not as a value in itself. Health, too, is instrumental: we value health because illness is unpleasant, can lead to death, and generally interferes with taking actions to optimize our values. The unpleasant sensations of illness might well be commensurable with the pleasant sensations of taste; for example, you would probably pass up a gourmet meal if eating it meant getting cancer.

Comment author: Lumifer 16 March 2014 06:16:51PM 2 points

But rationality means your decisions should be as close as possible to the decisions you would make if you had infinite time and information.

However, you cannot know what decisions you would make if you had infinite time and information. You can make guesses based on your ideas about convergence, but that's about it.

Comment author: Squark 17 March 2014 07:47:12PM -2 points

A Bayesian never "knows" anything. She can only compute probabilities and expectation values.

Comment author: Lumifer 17 March 2014 08:15:55PM 1 point

Can she compute probabilities and expectation values with respect to decisions she would make if she had infinite time and information?

Comment author: Squark 19 March 2014 07:49:30PM 0 points

I think it should be possible to compute probabilities and expectation values of absolutely anything. However, to put this on a sound mathematical basis we need a theory of logical uncertainty.

Comment author: Lumifer 19 March 2014 08:08:07PM 0 points

I think it should be possible to compute probabilities and expectation values of absolutely anything.

On what basis do you think so? And what entity will be doing the computing?