Squark comments on On not diversifying charity - Less Wrong

Post author: DanielLC 14 March 2014 05:14AM


Comment author: Squark 14 March 2014 07:59:07AM *  3 points [-]

If you're risk-averse, it gets a little more complicated. In this case, you don't use expected utility

As long as you're a rational agent, you have to use expected utility. See VNM theorem.

Comment author: solipsist 14 March 2014 04:50:43PM *  2 points [-]

To be clear: your VNM utility function does not have to correspond directly to utilitarian utility if you are not a strict utilitarian. Even if you are a strict utilitarian, diversifying donations can still, in theory, be VNM rational. E.g.:

A trustworthy Omega appears. He informs you that if you are not personally responsible for saving 1,000 QALYs, he will destroy the earth. If you succeed, he will leave the earth alone. Under these contrived conditions, the amount of good you are responsible for is important, and you should be very risk-averse with that quantity. If there's even a 1-in-a-million risk that the 7 effective charities you donated to were all, by coincidence, frauds, you would be well advised to donate to an eighth (even though the eighth charity will not be as effective as the other seven).
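The risk arithmetic behind this scenario can be sketched as follows. The per-charity fraud probability and the independence assumption are illustrative choices, not claims from the comment:

```python
# Sketch of the Omega scenario: each charity you fund is independently a
# fraud with probability p_fraud; Omega destroys the Earth only if ALL of
# them turn out to be frauds. (Numbers and independence are assumptions.)
def p_all_fraud(n_charities, p_fraud):
    """Probability that every one of n_charities is a fraud."""
    return p_fraud ** n_charities

# Pick a per-charity fraud probability so that 7 charities jointly fail
# about 1 time in a million.
p = 1e-6 ** (1 / 7)

print(p_all_fraud(7, p))  # ~1e-6
print(p_all_fraud(8, p))  # smaller still: the 8th donation cuts tail risk
```

Under these conditions the eighth, less effective charity still pays its way, because what matters is the probability of total failure rather than the expected number of QALYs.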

Comment author: Squark 14 March 2014 07:07:02PM 0 points [-]

Diversifying donations is not rational as long as the marginal utility per dollar generated by a charity is affected negligibly by the small sum you are donating. This assumption seems correct for a large class of utility functions under realistic conditions.

Comment author: Lumifer 14 March 2014 02:39:35PM 1 point [-]

As long as you're a rational agent, you have to use expected utility. See VNM theorem.

That seems to be a rather narrow and not very useful definition of a "rational agent" as applied to humans.

Comment author: Squark 14 March 2014 02:45:19PM -1 points [-]

I think it is the correct definition in the sense that you should behave like one.

Comment author: Lumifer 14 March 2014 03:02:01PM 2 points [-]

Why should I behave as if my values satisfy the VNM axioms?

Rationality here is typically defined as either epistemic (make sure your mental models match reality well) or instrumental (make sure the steps you take actually lead to your goals).

Defining rationality as "you MUST have a single utility function which MUST follow VNM" doesn't strike me as a good idea.

Comment author: Squark 14 March 2014 03:29:17PM -1 points [-]

Because the VNM axioms seem so intuitively obvious that violating them strongly feels like making an error. Of course I cannot prove them without introducing another set of axioms which can be questioned in turn etc. You always need to start with some assumptions.

Which VNM axiom would you reject?

Comment author: Eugine_Nier 18 March 2014 02:13:28AM 0 points [-]

So you care more about following the VNM axioms than about which utility function you are maximizing? That behavior is itself not VNM-rational.

Comment author: Squark 19 March 2014 07:53:42PM -1 points [-]

If you don't follow the VNM axioms you are not maximizing any utility function.

Comment author: Eugine_Nier 22 March 2014 06:34:03AM -1 points [-]

So why do you care about maximizing any utility function?

Comment author: Squark 23 March 2014 05:26:40PM 0 points [-]

What would constitute a valid answer to that question, from your point of view?

Comment author: Eugine_Nier 23 March 2014 05:43:30PM 0 points [-]

I can't think of one. You're the one arguing for what appears to be an inconsistent position.

Comment author: asr 14 March 2014 09:08:05PM 1 point [-]

I would reject the completeness axiom. I often face choices where I don't know which option I prefer, but where I would not agree that I am indifferent. And I'm okay with this fact.

I also reject the transitivity axiom -- intransitive preference is an observed fact for real humans in a wide variety of settings. And you might say this is irrational, but my preferences are what they are.

Comment author: Squark 14 March 2014 09:39:28PM 0 points [-]

Can you give an example of situations A, B, C for which your preferences are A > B, B > C, C > A? What would you do if you needed to choose between A, B, and C?

Comment author: asr 15 March 2014 04:02:06PM 0 points [-]

Sure. I'll go to the grocery store, where there are three kinds of tomato sauce, and I'll look at A and B and pick B, then B and C and pick C, then C and A and pick A. And I'll stare at them indecisively until my preferences shift. It's sort of ridiculous -- it can take something like a minute to decide. This is NOT the same as feeling indifferent, in which case I would just pick one and go.

I have similar experiences when choosing between entertainment options, transport, etc. My impression is that this is an experience that many people have.

If you google "intransitive preference" you get a bunch of references -- this one has cites to the original experiments: http://www.stanford.edu/class/symbsys170/Preference.pdf
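The incompatibility between a strict preference cycle and any utility function can be checked mechanically: no linear ordering of the three options is consistent with the observed pairwise choices. A small sketch, using the sauce example:

```python
from itertools import permutations

# Observed pairwise choices forming a cycle: B beats A, C beats B, A beats C.
choices = {("A", "B"): "B", ("B", "C"): "C", ("C", "A"): "A"}

# A utility function would have to rank the items in some strict order;
# test every possible order against the observed choices.
consistent_orders = []
for order in permutations("ABC"):
    rank = {item: i for i, item in enumerate(order)}  # lower index = preferred
    ok = True
    for (a, b), winner in choices.items():
        loser = a if winner == b else b
        if rank[winner] > rank[loser]:
            ok = False
    if ok:
        consistent_orders.append(order)

print(consistent_orders)  # [] -- no utility assignment fits a cycle
```

This is the formal sense in which rejecting transitivity means rejecting utility maximization outright.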

Comment author: Squark 15 March 2014 04:48:48PM 0 points [-]

It seems to me that what you're describing are not preferences but spur of the moment decisions. A preference should be thought of as in CEV: the thing you would prefer if you thought about it long enough, knew enough, were more the person you want to be etc. The mere fact you somehow decide between the sauces in the end suggests you're not describing a preference. Also I doubt that you have terminal values related to tomato sauce. More likely, your terminal values involve something like "experiencing pleasure" and your problem here is epistemic rather than "moral": you're not sure which sauce would give you more pleasure.

Comment author: asr 15 March 2014 11:06:10PM 1 point [-]

You are using preference to mean something other than I thought you were.

I'm not convinced that the CEV definition of preference is useful. No actual human ever has infinite time or information; we are always making decisions while we are limited computationally and informationally. You can't just define away those limits. And I'm not at all convinced that our preferences would converge even given infinite time. That's an assumption, not a theorem.

When buying pasta sauce, I have multiple incommensurable values: money, health, and taste. And in general, when you have multiple criteria, there's no non-paradoxical way to do rankings. (This is basically Arrow's theorem). And I suspect that's the cause for my lack of preference ordering.
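asr's Arrow's-theorem point can be illustrated with a Condorcet-style cycle: three criteria, each ranking the sauces linearly, aggregated by pairwise majority. The rankings below are hypothetical, chosen only to show the effect:

```python
# Each criterion ranks sauces A, B, C from best to worst (illustrative).
rankings = {
    "money":  ["A", "B", "C"],
    "health": ["B", "C", "A"],
    "taste":  ["C", "A", "B"],
}

def majority_prefers(x, y):
    """True if a majority of criteria rank x above y."""
    votes = sum(r.index(x) < r.index(y) for r in rankings.values())
    return votes > len(rankings) / 2

print(majority_prefers("A", "B"))  # True
print(majority_prefers("B", "C"))  # True
print(majority_prefers("C", "A"))  # True -- the aggregate is a cycle
```

Each criterion is perfectly transitive on its own, yet the majority aggregate cycles, which is one concrete way incommensurable values can defeat a single preference ordering.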

Comment author: Lumifer 14 March 2014 04:05:42PM *  0 points [-]

"feels like" is a notoriously bad criterion :-)

Before we even get to VNM axioms I would like to point out that humans do not operate in a VNM setting where a single consequentialist entity is faced with a sequence of lotteries and is able to express his preferences as one-dimensional rankings.

Hasn't there been a lot of discussion about the applicability of VNM to human ethical systems? It looks like well-trodden ground to me.

Comment author: DanielLC 14 March 2014 11:38:32PM 0 points [-]

Before we even get to VNM axioms I would like to point out that humans do not operate in a VNM setting where a single consequentialist entity is faced with a sequence of lotteries and is able to express his preferences as one-dimensional rankings.

He doesn't express the entire ranking, but he does still have to choose the best option.

Comment author: Squark 14 March 2014 06:52:51PM -1 points [-]

"feels like" is a notoriously bad criterion :-)

What would be a good criterion? You cannot pull yourself up by your bootstraps. You need to start from something.

Before we even get to VNM axioms I would like to point out that humans do not operate in a VNM setting where a single consequentialist entity is faced with a sequence of lotteries and is able to express his preferences as one-dimensional rankings.

How would you want to operate? You mentioned instrumental rationality. I don't know how to define instrumental rationality without the VNM setting (or something similar).

Comment author: Lumifer 14 March 2014 07:52:29PM 1 point [-]

What would be a good criterion?

Mismatch with reality.

I don't know how to define instrumental rationality without the VNM setting (or something similar)

Well, the locally canonical definition is this:

Instrumental rationality: achieving your values. Not necessarily "your values" in the sense of being selfish values or unshared values: "your values" means anything you care about. The art of choosing actions that steer the future toward outcomes ranked higher in your preferences. On LW we sometimes refer to this as "winning".

I see nothing about VNM there.

Comment author: Squark 14 March 2014 08:02:28PM -1 points [-]

Mismatch with reality.

I'm not following

Well, the locally canonical definition is this...

This is a nice motto, but how do you make a mathematical model out of it?

Comment author: Lumifer 14 March 2014 08:15:00PM *  0 points [-]

I'm not following

Well, you originally said "violating them strongly feels like making an error." I said that "feels like" is a weak point. You asked for an alternative. I suggested mismatch with reality. As in "violating X leads to results which do not agree with what we know of reality".

This is a nice motto, but how do you make a mathematical model out of it?

We were talking about how would a human qualify as a "rational agent". I see no need to make mathematical models here.

Comment author: Bobertron 14 March 2014 01:15:32PM 1 point [-]

Ergo, if you're risk-averse, you aren't a rational agent. Is that correct?

Comment author: Squark 14 March 2014 01:35:17PM 3 points [-]

Depends on how you define "risk averse". When utility is computed in terms of another parameter, diminishing returns result in what appears to be "risk averseness". For example, suppose that you assign utility 1u to having 1000$, utility 3u to having 4000$ and utility 4u to having 10000$. Then, if you currently have 4000$ and someone offers you a lottery in which you have a 50% chance of losing 3000$ and a 50% chance of gaining 6000$, you will reject it (in spite of an expected gain of 1500$), since your expected utility for not participating is 3u whereas your expected utility for participating is 2.5u.
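The arithmetic in this example can be written out directly:

```python
# Squark's concave utility of wealth: diminishing returns make rejecting
# a positive-expected-value lottery the expected-utility-maximizing move.
utility = {1000: 1.0, 4000: 3.0, 10000: 4.0}  # in units of "u"

# Current wealth 4000$. Lottery: 50% lose 3000$ (-> 1000$),
# 50% gain 6000$ (-> 10000$).
expected_gain = 0.5 * (-3000) + 0.5 * 6000
eu_decline = utility[4000]
eu_accept = 0.5 * utility[1000] + 0.5 * utility[10000]

print(expected_gain)          # 1500.0 dollars in expectation...
print(eu_decline, eu_accept)  # ...but declining (3.0u) beats accepting (2.5u)
```

So rejecting the lottery is itself an instance of expected-utility maximization, not a departure from it.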

Comment author: DanielLC 14 March 2014 05:51:28PM -1 points [-]

There are serious problems with not using expected utility, but even if you still decide to be risk-averse, this doesn't change the conclusion that you should only donate to one charity.