multifoliaterose comments on Why We Can't Take Expected Value Estimates Literally (Even When They're Unbiased) - Less Wrong

Post author: HoldenKarnofsky 18 August 2011 11:34PM


Comment author: multifoliaterose 19 August 2011 08:36:49PM 3 points

I'm not sure this is true; sceptical inquiry can have a high expected value when it helps you work out what is a better use of limited resources. [...]

Note that Holden qualified his statement with "(too often)".

I think that in the case of an action that has a low probability of producing a large gain, any investigation that will confirm whether this is true or not is worth attempting, unless either [...] It seems to me that in both of these cases it would be pretty obviously stupid to have a sceptical enquiry.

Concerning your second point: suppose that spending a million dollars on intervention A ostensibly has an expected value X which is many orders of magnitude greater than that of any other intervention (and suppose, for simplicity, negligible diminishing marginal utility per dollar). Suppose that it would cost $100,000 to investigate whether the ostensibly high expected value is well-grounded.

Then investigating the cost-effectiveness of intervention A comes at an ostensible opportunity cost of X/10. But it's ostensibly the case that the remaining $900,000 could in no case be spent with cost-effectiveness within an order of magnitude of that of spending the money on intervention A. So in the setting that I've just described, the opportunity cost of investigating is ostensibly too high to justify an investigation.

Note that a similar situation could prevail even if investigating the intervention cost only $100 or $10, provided that the ostensible expected value X is sufficiently high relative to other known options.
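
To make the arithmetic concrete, here is a minimal sketch of the scenario (the numbers are the hypothetical ones above, with X normalized to 1):

```python
# Minimal sketch, assuming expected value scales linearly with dollars
# spent (the negligible-diminishing-marginal-utility assumption above).

budget = 1_000_000
investigation_cost = 100_000
X = 1.0  # normalized: EV of spending the full budget on intervention A

ev_skip_investigation = X * (budget / budget)                          # X
ev_after_investigation = X * ((budget - investigation_cost) / budget)  # 0.9 * X

opportunity_cost = ev_skip_investigation - ev_after_investigation
print(opportunity_cost)  # ~0.1, i.e. X/10
```

The sketch only verifies the X/10 figure; the argument then runs that nothing the investigation could reveal would ostensibly let the remaining $900,000 recoup that cost.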

The point that I'm driving at here is that there's no binary "can afford" or "can't afford" distinction concerning the possibility of funding A: it can easily happen that spending any resources whatsoever investigating A is ostensibly too costly to be worthwhile. This conclusion is counterintuitive, and seemingly very similar to Pascal's Mugging.

The fact that naive EEV leads to this conclusion is evidence against the value of naive EEV. Of course, one can attempt to use a more sophisticated version of EEV; see the second and third paragraphs of Carl Shulman's comment here.

Why do you believe this? Do you have any evidence, or even arguments? It seems pretty unintuitive to me that the sum of a bunch of actions, each of which increases total welfare, could somehow amount to a decrease in total welfare.

See my fourth point in the section titled "In favor of a local approach to philanthropy" here.
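
To give a flavor of the kind of failure at issue, here is a toy sketch (an illustrative construction of mine, not taken from the linked post): when every donor ranks options by EEV in isolation, all of them can pile onto the same ostensibly best charity even after it is saturated.

```python
# Toy model with hypothetical numbers: charity A yields 10 units of welfare
# for its first unit of funding and nothing after that; charity B yields a
# steady 5 units of welfare per unit of funding.

N_DONORS = 100  # each donor gives 1 unit

def welfare_a(total_funding):
    return 10 * min(total_funding, 1)  # saturates after the first unit

def welfare_b(total_funding):
    return 5 * total_funding           # constant returns

# Evaluated in isolation, a donation to A looks worth 10 against B's 5,
# so every EEV donor independently picks A.
uncoordinated = welfare_a(N_DONORS) + welfare_b(0)    # 10 + 0   = 10

# A coordinated allocation routes all but the first donation to B.
coordinated = welfare_a(1) + welfare_b(N_DONORS - 1)  # 10 + 495 = 505

print(uncoordinated, coordinated)
```

Each donation raises welfare relative to doing nothing, yet the aggregate falls far short of what coordination achieves; add overload or crowding-out effects and the aggregate can be an outright decrease.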

When you say "taken to the extreme", I suspect you are imagining our hypothetical EEV agents ignoring various side-effects of their actions, in which case the problem is with them failing to take all factors into account, rather than with them using EEV.

It's not humanly possible to take all factors into account; our brains aren't designed to do so. Given how the human brain is structured, using implicit knowledge which is inexplicable can yield better decision making for humans than using explicit knowledge. This is the point of the section of Holden's post titled "Generalizing the Bayesian approach."

Not true. If all donors followed EEV, charities would indeed have an incentive to conceal information about things they are doing badly, and donors would in turn, in accordance with EEV, start to treat failure to disclose information as evidence that the information was unflattering. This would incentivise charities to disclose information about things they are doing only slightly badly, which would in turn cause donors to view secrecy in an even worse light, and so on. I think we eventually reach an equilibrium where charities disclose all information.

I think you're right about this.
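
As a minimal sketch of that unraveling dynamic (my framing, with an assumed uniform distribution of charity quality, which is not something from the thread):

```python
# Unraveling sketch: each charity knows its own quality q, drawn uniformly
# from [0, 1]. A charity stays silent only if q is below what donors would
# infer about a silent charity; donors re-update that inference each round.

threshold = 1.0  # round 0: any charity might be silent
for _ in range(30):
    # A silent charity's quality is uniform on [0, threshold], so donors
    # infer an expected quality of threshold / 2 for silence...
    inferred = threshold / 2
    # ...and every charity with q above that inference prefers to disclose,
    # leaving only q < threshold / 2 silent in the next round.
    threshold = inferred

print(f"silent pool after 30 rounds: q < {threshold:.1e}")  # ~9.3e-10
```

In the limit essentially everyone discloses, which is the equilibrium the quoted argument describes, subject to the rationality caveat raised just below.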

Of course, this assumes that all charities and all donors are completely rational, which is a total fantasy, but I think the same can be said of your own argument. And even if we do end up stuck part-way to equilibrium, with charities keeping some information secret, as donors we can just take that secrecy into account and correctly treat it as Bayesian evidence of a problem.

My intuition is that in the real world the incentive effects of using EEV would in fact be bad despite the point that you raise; but refining and articulating my intuition here would take some time and in any case is oblique to the primary matters under consideration.

Comment author: benelliott 19 August 2011 09:58:10PM 1 point

Note that Holden qualified his statement with "(too often)".

And the point I was making was that EEV does not do this too often; it does it just often enough, which I think is pretty clear mathematically.
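
One way to make the claim concrete is a value-of-information comparison (hypothetical numbers, and a perfect investigation is stipulated for simplicity): an EEV agent investigates exactly when the expected gain from acting on the result exceeds the cost.

```python
# Value-of-information sketch with hypothetical numbers. Intervention A is
# worth 1,000 utils per dollar with prior probability 0.1 and nothing
# otherwise; the best known alternative yields 20 utils per dollar.

budget = 1_000_000
investigation_cost = 100_000
p_great = 0.1
payoff_great, payoff_dud, payoff_fallback = 1_000, 0, 20

# Fund A blindly with the whole budget:
ev_blind = budget * (p_great * payoff_great + (1 - p_great) * payoff_dud)

# Pay for the investigation, then fund whichever option it favors with the
# remaining money:
remaining = budget - investigation_cost
ev_investigate = remaining * (p_great * payoff_great
                              + (1 - p_great) * payoff_fallback)

print(ev_blind, ev_investigate)  # ~1.0e8 vs. ~1.062e8: investigating wins
```

With these numbers EEV endorses the investigation; raise payoff_great by a few orders of magnitude and the comparison flips, which is the regime described above.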

Then investigating the cost-effectiveness of intervention A comes at an ostensible opportunity cost of X/10. But it's ostensibly the case that the remaining $900,000 could in no case be spent with cost-effectiveness within an order of magnitude of that of spending the money on intervention A. So in the setting that I've just described, the opportunity cost of investigating is ostensibly too high to justify an investigation.

I don't see what you're driving at with the opportunity cost of X/10. Either we have less than $1,100,000, in which case the opportunity cost is X, or we have more than $1,100,000, in which case it is zero. Either we can do X or we can't; we can't do part of it or more of it.

The fact that naive EEV leads to this conclusion is evidence against the value of naive EEV. Of course, one can attempt to use a more sophisticated version of EEV; see the second and third paragraphs of Carl Shulman's comment here.

If naive EEV causes problems then the problem is with naivete, not with EEV. Any decision procedure can lead to stupid actions if fed with stupid information.

See my fourth point in the section titled "In favor of a local approach to philanthropy" here.

You make the case that local philanthropy is better than global philanthropy on an individual basis, and if you are correct (which I don't think you are) then EEV would choose to engage in local philanthropy.

It's not humanly possible to take all factors into account; our brains aren't designed to do so.

The correct response to our fallibility is not to go do random other things. Just because my best guess might be wrong doesn't mean I should trade it for my second best guess, which is by definition even more likely to be wrong.

implicit knowledge which is inexplicable

A cognitive bias by another name is still a cognitive bias.

My intuition is that in the real world the incentive effects of using EEV would in fact be bad despite the point that you raise; but refining and articulating my intuition here would take some time and in any case is oblique to the primary matters under consideration.

I agree that it isn't very important. Regardless of anything else, the possibility of more than a tiny proportion of donors actually applying EEV is not even remotely on the table.

Comment author: multifoliaterose 20 August 2011 12:20:45AM 0 points

You make the case that local philanthropy is better than global philanthropy on an individual basis, and if you are correct (which I don't think you are) then EEV would choose to engage in local philanthropy.

Note that in the link that you're referring to, I argue both for and against local philanthropy as opposed to global philanthropy. Anyway, I wasn't referencing the post as a whole; I was referencing the point about the "act locally" heuristic solving a coordination problem that naive EEV fails to solve. It's not clear that it's humanly possible (or desirable) to derive that heuristic from first principles. Rather than trying to replace naive EEV with sophisticated EEV, one might be better off scrapping exclusive use of EEV altogether.