private_messaging comments on Bayesian Adjustment Does Not Defeat Existential Risk Charity - Less Wrong

43 Post author: steven0461 17 March 2013 08:50AM

Comments (89)

Comment author: private_messaging 15 March 2013 06:25:33AM 2 points [-]

The much bigger issue is that for some anthropogenic risks (such as AI), the risk is caused by people and can be increased by funding some groups of people. The expected utility thus has both positive and negative terms, and if you generate a biased list of those terms (e.g. by listening to what an organization says about itself) and sum it, the resulting sum tells you nothing about the sign of the expected utility.

Comment author: wedrifid 15 March 2013 06:40:03AM *  1 point [-]

The expected utility thus has both positive and negative terms, and if you generate a biased list of those terms (e.g. by listening to what an organization says about itself) and sum it, the resulting sum tells you nothing about the sign of the expected utility.

It tells you something about the sign of the expected utility. It is still evidence. Sometimes it could even be evidence in favor of the expected utility being negative.

Comment author: private_messaging 15 March 2013 07:37:51AM 0 points [-]

Given other knowledge, yes.

Comment author: steven0461 16 March 2013 01:38:55AM *  1 point [-]

I agree: the argument given here doesn't address whether existential risk charities are likely to be helpful or actively harmful. The fourth paragraph of the conclusion and various caveats like "basically competent" were meant to limit the scope of the discussion to only those charities whose effects are mostly positive rather than negative. Carl Shulman suggested in a feedback comment that one could set up an explicit model multiplying (1) a normal variable, centered on zero or with substantial mass below zero, describing uncertainty about whether the charity has mostly positive or mostly negative effects, by (2) a thicker-tailed, always-positive variable describing uncertainty about the scale the charity is operating on.
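A minimal Monte Carlo sketch of the kind of model Shulman describes. The particular distributions are illustrative assumptions, not anything specified in the comment: a standard normal for sign uncertainty and a lognormal for the thick-tailed, always-positive scale uncertainty.

```python
import random

random.seed(0)

def sample_impact():
    # (1) Sign uncertainty: normal centered on zero, so substantial
    # probability mass lies below zero (the charity may be net harmful).
    sign = random.gauss(0.0, 1.0)
    # (2) Scale uncertainty: lognormal is thick-tailed and strictly
    # positive (how large the charity's effect is, in either direction).
    scale = random.lognormvariate(0.0, 1.5)
    return sign * scale

samples = [sample_impact() for _ in range(100_000)]
mean = sum(samples) / len(samples)
frac_negative = sum(s < 0 for s in samples) / len(samples)

print(f"mean impact:   {mean:+.3f}")
print(f"P(impact < 0): {frac_negative:.3f}")
```

Because the sign term is symmetric about zero, roughly half the sampled impacts come out negative regardless of how heavy-tailed the scale term is, which illustrates the thread's point: a large possible scale alone says nothing about the sign.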

Comment author: private_messaging 16 March 2013 07:14:43AM *  -2 points [-]

"Basically" sounds like quite an understatement. This is not just an anthropogenic catastrophe; it's a highly-competent-and-dedicated-people-screwing-up-spectacularly-in-a-way-nobody-wants catastrophe. One could naively think that funding more safety-conscious efforts can't hurt, but this is problematic when concern with safety is not statistically independent of the unsafety of the approach that is deemed viable or pursued.