AlexSchell comments on Should we go all in on existential risk? - Considering Effective Altruism - Less Wrong Discussion

4 Post author: Capla 10 November 2014 11:23PM

Comments (43)

Comment author: AlexSchell 11 November 2014 04:03:04PM 5 points

Robin is correct. Here is an accessible explanation. Suppose you first give $1 to MIRI because you believe MIRI is the charity with the highest marginal utility per donated dollar right now. The only reason you would then give the next $1 in your charity budget to anyone other than MIRI would be that MIRI is no longer the highest-marginal-utility charity. In other words, you'd have to believe that your first donation made a dent in the FAI problem, and hence lowered the marginal utility of a MIRI dollar by enough to make another charity come out on top. But your individual contributions can't make any such dent.
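
A toy numerical sketch of this argument (the charity names and marginal-utility numbers here are hypothetical): if individual donations are too small to move marginal utilities, then greedy dollar-by-dollar allocation sends the entire budget to a single charity.

```python
# Greedy allocation of a donation budget, one dollar at a time.
# Each charity has a constant marginal utility per dollar, reflecting
# the premise that small donations don't dent problems of this scale.
def allocate(budget, marginal_utility):
    totals = {name: 0 for name in marginal_utility}
    for _ in range(budget):
        # Each dollar goes to the currently-best charity; since no dent
        # is made, the marginal utilities never change.
        best = max(marginal_utility, key=marginal_utility.get)
        totals[best] += 1
    return totals

alloc = allocate(1000, {"MIRI": 2.0, "AMF": 1.5})
# The entire budget lands on the single highest-marginal-utility charity.
```

The only way splitting becomes optimal in this sketch is if a donation lowers the leader's marginal utility below the runner-up's, which is exactly the "dent" the comment argues an individual donor cannot make.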

Some sensible reasons for splitting donations involve donations at different times (changes in room for more funding, etc.) and donations that are highly correlated with many other people's donations (e.g. the people giving to GiveWell top charities) and might therefore actually make dents.

Comment author: Lumifer 11 November 2014 06:08:30PM 2 points

Suppose you first give $1 to MIRI because you believe MIRI is the charity with the highest marginal utility in donations right now. The only reason you would then give the next $1 in your charity budget to anyone other than MIRI would be that MIRI is no longer the highest marginal utility charity.

You're assuming you're certain about your estimates of the charities' marginal utility. If you're uncertain about them, things change.

Compare this to investing in financial markets. Why don't you invest all your money in the single asset with the highest expected return? Because you're uncertain about returns, and diversification is a useful way to manage that risk.
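
A minimal sketch of this diversification point, assuming a concave (log) utility function and two hypothetical independent assets with identical expected returns:

```python
import math

# Two independent assets, each doubling or halving wealth with p = 1/2.
# All four joint outcomes are equally likely. Expected wealth is the same
# whether you go all-in or split, but a risk-averse (log-utility)
# investor strictly prefers the 50/50 split.
outcomes = [(2.0, 2.0), (2.0, 0.5), (0.5, 2.0), (0.5, 0.5)]

def expected_log_utility(weight_a):
    # Expected log of final wealth for a portfolio with weight_a in asset A.
    return sum(0.25 * math.log(weight_a * a + (1 - weight_a) * b)
               for a, b in outcomes)

all_in = expected_log_utility(1.0)  # everything in asset A
split = expected_log_utility(0.5)   # diversified 50/50
# split > all_in: under a concave utility function, diversification
# raises expected utility even with identical expected returns.
```

The gap between the two numbers is precisely what concavity buys the diversified investor, which is why the follow-up disagreement turns on whether altruistic "utility in QALYs" is concave over the range an individual donor can affect.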

Comment author: AlexSchell 11 November 2014 11:08:55PM 3 points

No, I'm not relying on that assumption, though I admit I was not clear about this. The argument goes through perfectly well if we consider expected marginal utilities.

Investors are risk-averse because not-too-unlikely scenarios can affect your wealth enough to make the concavity of your utility function over wealth matter. For FAI or world poverty, none of your donations at a given time will make enough of a dent.

I think the countervailing intuition comes from two sources: 1) Even when instructed about the definition of utility, certainty equivalents of gambles, and so on, people have a persistent intuition that utility itself has declining marginal utility. 2) We care not only about poor people being made better off (where our donations can't make a dent) but also about creating a feeling of moral satisfaction within ourselves (which donations to a particular cause can satiate, leading us to want to help some other folks, or cute puppies).

Comment author: Lumifer 12 November 2014 01:36:02AM *  0 points

Investors are risk-averse because not-too-unlikely scenarios can affect your wealth sufficiently enough to make the concavity of your utility function over wealth matter.

You are wrong about this. See e.g. here or in a slightly longer version here.

But let's see how your intuition works. Charity A is an established organization with well-known business practices that for years has steadily been generating about 1 QALY per $1. Charity B is a newcomer that no one really knows much about. As far as you can tell, $1 given to it has a 1.1% chance of generating 100 QALYs and is wasted otherwise, but you're not sure about these numbers; they are just a low-credence guess. To whom do you donate?
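
Taking the stated numbers at face value, the pure expected-value calculation (the framing Lumifer's appeal to uncertainty is meant to complicate) comes out as follows:

```python
# Expected QALYs per dollar for each charity, using the numbers
# given in the thought experiment.
ev_a = 1.0          # charity A: a steady 1 QALY per $1
ev_b = 0.011 * 100  # charity B: 1.1% chance of 100 QALYs, else nothing

# ev_b = 1.1 > ev_a = 1.0, so a pure expected-value maximizer picks B,
# despite the 98.9% chance that any given dollar is wasted.
```

This is the calculation AlexSchell endorses in the reply below; the dispute is over whether the admitted imprecision of the 1.1% figure should change the answer.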

Comment author: AlexSchell 12 November 2014 07:02:35PM 0 points

Adding imprecise probability (a 1.1% credence that I'm not sure of) takes us a bit afield, I think. Imprecise probability doesn't have an established decision theory in the way probability has expected utility theory. But that aside, assuming that I'm calibrated in the 1% range, that I'm good at introspection, and that my introspection really tells me that my expected QALY/$ for charity B is 1.1, I'll donate to charity B. I don't know how else to make this decision. I'm curious how much meta-confidence/precision you need for that 1.1% chance before you'd switch from A to B (or go "all in" on B). If not even full precision (e.g. the outcome being tied to an RNG) is enough for you, then you're maximizing something other than expected QALYs.

(I agree with Gelman that risk-aversion estimates from undergraduates don't make any financial sense. Neither do estimates of their time preference. That just means that people compartmentalize or outsource financial decisions where the stakes are actually high.)

Comment author: Lumifer 12 November 2014 07:13:43PM *  0 points

takes us a bit afield, I think

If you're truly risk-neutral, you would discount all uncertainty to zero; the expected value would be all you'd care about.

my introspection really tells me that my expected QALY/$ for charity B is 1.1

Your introspection tells you that you're uncertain. Your best guess is 1.1, but it's just a guess. The uncertainty is very high.

I don't know how else to make this decision.

Oh, there are plenty of ways, just look at finance. Here's a possible starting point.

I agree with Gelman that risk-aversion estimates from undergraduates don't make any financial sense.

Gelman's point has nothing to do with whether undergrads have any financial sense or not. Gelman's point is that treating risk aversion as solely a function of the curvature of the utility function makes no sense whatsoever -- for all humans.

Comment author: AlexSchell 13 November 2014 05:18:08AM *  0 points

Let me try to refocus a bit. You seem to want to describe a situation where I have uncertainty about probabilities, and hence uncertainty about expected values. If this is not so, your points are plainly inconsistent with expected utility maximization, assuming that your utility is roughly linear in QALYs in the range you can affect. If you are appealing to imprecise probability, what I alluded to by "I have no idea" is that there are no generally accepted theories (certainly not "plenty") for decision making with imprecise credence. It is very misleading to invoke diversification, risk premia, etc. as analogous or applicable to this discussion. None of these concepts make any essential use of imprecise probability in the way your example does.

Comment author: Lumifer 13 November 2014 04:48:52PM 0 points

You seem to want to describe a situation where I have uncertainty about probabilities, and hence uncertainty about expected values.

Correct.

there are no generally accepted theories (certainly not "plenty") for decision making with imprecise credence.

Really? Keep in mind that in reality people make decisions on the basis of "imprecise probabilities" all the time. In fact, outside of controlled experiments, it's quite unusual to know the precise probability because real-life processes are, generally speaking, not that stable.

It is very misleading to invoke diversification, risk premia, etc. as analogous or applicable to this discussion.

On the contrary, I believe it's very illuminating to apply these concepts to the topic under discussion.

I did mention finance which is a useful example because it's a field where people deal with imprecise probabilities all the time and the outcomes of their decisions are both very clear and very motivating. You don't imagine that when someone, say, characterizes a financial asset as having the expected return of 5% with 20% volatility, these probabilities are precise, do you?

Comment author: AlexSchell 16 November 2014 01:07:10AM 0 points

You don't imagine that when someone, say, characterizes a financial asset as having the expected return of 5% with 20% volatility, these probabilities are precise, do you?

Those are not even probabilities at all.

Comment author: Lumifer 16 November 2014 07:57:32PM 0 points

Such an expression usually implies a normal probability distribution with the given mean and standard deviation. How do you understand probabilities as applied to continuous variables?
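
As a sketch of what such a statement cashes out to, assuming (as this comment suggests) that the return is modeled as a normal random variable with mean 5% and standard deviation 20%, probabilities attach to intervals of the continuous variable rather than to point values:

```python
import math

# "Expected return 5% with 20% volatility," read as a normal
# distribution over the annual return. For a continuous variable,
# probabilities are computed for intervals via the CDF.
def normal_cdf(x, mu, sigma):
    # Standard closed form for the normal CDF using the error function.
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

# Probability the return is negative, i.e. P(return < 0):
p_loss = normal_cdf(0.0, mu=0.05, sigma=0.20)
# roughly 0.40 -- about a 40% chance of losing money under this model
```

Nothing about this model says the 5% and 20% themselves are known precisely; they are estimated parameters, which is the point Lumifer is pressing.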

Comment author: AlexSchell 16 November 2014 01:05:04AM *  0 points

There are two very different sorts of scenarios with something like "imprecise probabilities".

The first sort of case involves uncertainty about a probability-like parameter of a physical system such as a biased coin. In a sense, you're uncertain about "the probability that the coin will come up heads" because you have uncertainty about the bias parameter. But when you consider your subjective credence about the event "the next toss will come up heads", and integrate the conditional probabilities over the range of parameter values, what you end up with is a constant. No uncertainty.
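
A toy version of this integration, using a hypothetical discrete prior over the coin's bias parameter:

```python
# Uncertainty about a coin's bias parameter p, represented here as a
# discrete prior (the particular values and weights are hypothetical).
prior = {0.3: 0.25, 0.5: 0.50, 0.7: 0.25}  # P(bias = p)

# Integrate P(heads | p) over the prior: the "uncertain probability"
# collapses to a single definite credence, E[p].
p_heads = sum(p * weight for p, weight in prior.items())
# p_heads is exactly one number (here 0.5) -- no residual uncertainty
# attaches to the credence itself.
```

The uncertainty lives entirely in the parameter; once it is integrated out, the credence in "the next toss comes up heads" is a constant, just as the paragraph above says.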

In the second sort of case, your subjective credences themselves are uncertain. On the usual definition of subjective probabilities in terms of betting odds this is nonsense, but maybe it makes some sense for boundedly introspective humans. Approximately none of the decision theory corpus applies to this case, because it all assumes that credences and expected values are constants known to the agent. Some decision rules for imprecise credence have been proposed, but my understanding is that they're all problematic (this paper surveys some of the problems). So decision theory with imprecise credence is currently unsolved.

Examples of the first sort are what gives talk about "uncertain probabilities" its air of reasonableness, but only the second case might justify deviations from expected utility maximization. I shall have to write a post about the distinction.

Comment author: Lumifer 16 November 2014 08:01:33PM *  0 points

But when you consider your subjective credence about the event "the next toss will come up heads", and integrate the conditional probabilities over the range of parameter values, what you end up with is a constant. No uncertainty.

Really? You can estimate your subjective credence without any uncertainty at all? Your integration of the conditional probabilities over the range of parameter values involves only numbers you are fully certain about?

I don't believe you.

Approximately none of the decision theory corpus applies to this case

So this decision theory corpus is crippled and not very useful. Why should we care much about it?

So decision theory with imprecise credence is currently unsolved.

Yes, of course, but life in general is "unsolved" and you need to make decisions on a daily basis, not waiting for a proper decision theory to mature.

I think you overestimate the degree to which abstractions are useful when applied to reality.

Comment author: peter_hurford 11 November 2014 06:51:23PM 6 points

diversification is a useful thing to manage your risk

But presumably you're risk-neutral with respect to altruism, even though you're not risk-neutral with your own personal finances.

Comment author: Lumifer 11 November 2014 06:55:07PM 1 point

I don't see being risk-neutral with respect to altruism as obvious. If it turns out that you misallocated your charity dollars, you have incurred opportunity costs. In general, people are not risk-neutral with respect to things they care about.

Comment author: Nornagest 11 November 2014 06:52:50PM *  1 point

Well, you're probably less risk-averse with regard to altruism. I imagine most people would still be upset to see the charity they've been donating to for years go under.