wedrifid comments on Bayesian Adjustment Does Not Defeat Existential Risk Charity - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
The much bigger issue is that for some anthropogenic risks (such as AI), the risk is caused by people and can be increased by funding some groups of people. The expected utility thus has both positive and negative terms, and if you generate a biased list of terms (e.g. by listening to what an organization says about itself) and sum it, the resulting sum tells you nothing about the sign of the expected utility.
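A toy illustration of the point, with made-up numbers: if the true list of utility terms contains both signs but you only ever see the terms the organization reports about itself, the sum of the visible terms can have the opposite sign from the true total.

```python
# Hypothetical utility terms for an intervention (purely illustrative numbers):
# positive terms are outcomes the organization would report about itself,
# negative terms are risks its activity adds (e.g. speeding up a dangerous race).
terms = [5.0, 3.0, 2.0, -4.0, -7.0]

true_total = sum(terms)

# A biased list: only the self-reported (positive) terms are visible.
reported = [t for t in terms if t > 0]
biased_total = sum(reported)

print(true_total)    # -1.0: the intervention is net-negative
print(biased_total)  # 10.0: the biased sum looks strongly positive
```

The biased sum is positive while the true total is negative, which is the sense in which summing a selectively generated list tells you little about the sign.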
It tells you something about the sign of the expected utility. It is still evidence. Sometimes it could even be evidence in favor of the expected utility being negative.
Given other knowledge, yes.