lukeprog comments on Another Critique of Effective Altruism - Less Wrong

Post author: jsteinhardt 05 January 2014 09:51AM


Comment author: lukeprog 05 January 2014 05:47:45PM 9 points

The history of effective altruism is littered with over-confident claims, many of which have later turned out to be false. In 2009, Peter Singer claimed that you could save a life for $200 (and many others repeated his claim). The number was already questionable at the time, and by 2011 we discovered it was completely off. New numbers were then thrown around, ranging from estimates still in the hundreds of dollars (GWWC's estimate for SCI, which was later shown to be flawed) up to $1600 (GiveWell's estimate for AMF, which GiveWell itself expected to go up, and which indeed did go up).

Another good example is GiveWell's 2009 estimate that "Because [our] estimate makes so many conservative assumptions, we feel it is overall reasonable to expect [Village Reach's] future activities to result in lives saved for under $1000 each."

Comment author: timtyler 09 January 2014 03:01:20AM 6 points

"8 lives saved per dollar donated to the Machine Intelligence Research Institute. — Anna Salamon"

Comment author: jsteinhardt 11 January 2014 06:29:32AM 1 point

I don't think you should form your opinion of Anna from this video. It gave me an initially very unfavorable impression that I updated away from after a few in-person conversations.

(If you read the other things I write you'll know that I'm nowhere close to a MIRI fanatic so hopefully the testimonial carries some weight.)

Comment author: lukeprog 09 January 2014 03:55:40AM 3 points

Pulling this number out of the video and presenting it by itself, as Kruel does, leaves out important context, such as Anna's statement "Don't trust this calculation too much. [There are] many simplifications and estimated figures. But [then] if the issue might be high stakes, recalculate more carefully." (E.g. after purchasing more information.)

However, Anna next says:

I've talked about [this estimate] with a lot of people and the bargain seems robust. Maybe you go for a soft takeoff scenario, [then the estimate] comes out maybe an order of magnitude lower. But it still comes out [as] unprecedentedly much goodness that you can purchase for a little bit of money or time.

And that is something I definitely disagree with. I don't think the estimate is anywhere near that robust.

Comment author: David_Gerard 12 June 2014 06:11:01PM 0 points

You are ignoring that the slide being projected as she was saying it emphasises the point; it was being treated as an important point to make.

"It's out of context!" is a weaselly argument, and one that, having watched the video and read the transcript, I really just don't find credible. It's not at all at odds with the context. The context is fully available. Anna made that claim, she emphasised it as a point worth noting beforehand in the slide deck, she apparently meant it at the time. You're attempting to discredit Kruel in general by ad hominem, and doing so in a manner that is simply not robust.

Comment author: [deleted] 12 June 2014 06:48:24PM -2 points

I see nowhere the claim that Kruel pretended to quote from that video.

That's clearly a rough estimate of the value of a positive singularity, and MIRI only studies one pathway to it. MIRI donations are not fungible with "donations to a positive singularity"; they would need to be for Kruel's misquote to be even roughly equivalent to what Salamon actually said.

Even if we grant that unstated premise, there's her disclaimer: the estimate (of the value of a positive singularity) is important to write down explicitly (Principle 1 @ 7:15) even though it is inaccurate and cannot be trusted (Principle 2 directly afterward).

Kruel has proven himself to be an unreliable narrator wherever MIRI is concerned; saying people should be extremely skeptical of his claims is not pulling an ad hominem.

Comment author: David_Gerard 13 June 2014 07:26:17AM 0 points

I see nowhere the claim that Kruel pretended to quote from that video.

12:31. "You can divide it up, per half day of time, something like 800 lives. Per $100 of funding, also something like 800 lives." There's a slide up at that moment making the same claim. It wasn't a casual aside, it was a point that was part of the talk.

Kruel has proven himself to be an unreliable narrator wherever MIRI is concerned;

He wasn't in this case, and you haven't shown it in any other case. Do you have a list to hand?

Comment author: AnnaSalamon 09 January 2014 07:28:07PM 7 points

I agree with Luke's comment; compared to my views in 2009, the issue now seems more complicated to me; my estimate of impact from donation re: AI risk is lower (though still high); and I would not say that a particular calculation is robust.

Comment author: satt 11 January 2014 12:16:49PM 2 points

my estimate of impact from donation re: AI risk is lower (though still high)

Out of curiosity, what's your current estimate? I recognize it'll be rough, but even e.g. "more likely than not between $1 and $50 per life saved" would be interesting.

Comment author: V_V 09 January 2014 04:18:32PM 3 points

And that is something I definitely disagree with. I don't think the estimate is anywhere near that robust.

Is this MIRI official position? Because, AFAIK that estimate was never retracted.

Anyway, the problem doesn't seem to be so much with the exact numbers as with the process: what she did was essentially a travesty of a Fermi estimate, where she pulled numbers out of thin air and multiplied them together to get a self-serving result.
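To see why this kind of chained estimate is so fragile: a Fermi estimate multiplies a string of rough factors, so the errors in each factor compound multiplicatively. A minimal sketch of the failure mode (every number below is hypothetical, chosen only for illustration, and is not Salamon's actual calculation):

```python
# Hypothetical chained Fermi estimate. All inputs are made-up
# illustrations of guessed factors, not real figures.
p_scenario_matters = 0.1        # guessed probability the scenario is real
lives_at_stake = 1e10           # guessed number of lives affected
marginal_effect_per_dollar = 8e-9  # guessed effect of one extra dollar

lives_per_dollar = (p_scenario_matters
                    * lives_at_stake
                    * marginal_effect_per_dollar)
print(lives_per_dollar)  # 8.0 "lives saved per dollar"

# If each guessed factor can be off by 10x (common for numbers
# pulled from thin air), the product can be off by 10^3 = 1000x
# in either direction.
error_per_factor = 10
worst_case_ratio = error_per_factor ** len(
    [p_scenario_matters, lives_at_stake, marginal_effect_per_dollar])
print(worst_case_ratio)  # 1000
```

The point is not the particular inputs but the structure: a headline figure like "8 lives per dollar" inherits the full compounded uncertainty of every guessed factor behind it.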

This person is "Executive Director and Cofounder" of CFAR. Is this what they teach for $1,000 a day? How to fool yourself by performing a mental ritual with made up numbers?

Comment author: lukeprog 09 January 2014 05:47:39PM 4 points

Is this MIRI official position? Because, AFAIK that estimate was never retracted.

I don't know what Anna's current view is. (Edit: Anna has now given it.)

In general, there aren't such things as "MIRI official positions," there are just individual persons' opinions at a given time. Asking for MIRI's official position on a research question is like asking for CSAIL's official opinion on AGI timelines. If there are "MIRI official positions," I guess they'd be board-approved policies like our whistleblower policy or something.

Comment author: V_V 10 January 2014 01:56:41PM 0 points

Thanks for the answer.