Capla comments on Should we go all in on existential risk? - Considering Effective Altruism - Less Wrong Discussion

4 Post author: Capla 10 November 2014 11:23PM

Comment author: Capla 11 November 2014 12:27:37AM 0 points

Well, the marginal impact of a life-not-saved on the probability of a p-sing (can I call it that? What I really want is a convenient shorthand for "tiny incremental increase in the probability of a positive singularity.") probably goes down as we put more effort into achieving a p-sing, but not significantly so for this problem. The law of diminishing marginal returns gets you every time.

Let's not get too caught up in the numbers (which I do think are useful for considering a real trade-off). I don't know how likely a p-sing is, nor how much my efforts can contribute to one. I am interested in analysis of this question, but I don't think we can have high confidence in a prediction that goes out 20 years or more, especially when the situation involves the introduction of such world-shaping technologies as would lead up to a singularity. If everyone acts as I do, but we're massively wrong about how much impact our efforts have (which is likely), then we all waste enormous effort on nothing.

Comment author: G0W51 19 May 2015 09:36:28PM 1 point

Given that you are only one individual, your contribution is a tiny slice of the total effort, so the increase in the chance of a p-sing per unit of money you give is roughly linear over that slice, and diminishing marginal returns shouldn't be an issue for you personally.
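The local-linearity point can be sketched numerically. Assume some concave probability curve for a p-sing as a function of total effort; every number and the curve shape below are made-up assumptions for illustration, not estimates anyone in the thread endorses. The point is only that when one person's donation is tiny relative to the scale of the curve, doubling the donation roughly doubles the marginal gain:

```python
import math

# Hypothetical concave curve: p(total_effort) = CAP * (1 - exp(-total_effort / SCALE)).
# CAP and SCALE are arbitrary illustrative constants, not real estimates.
CAP, SCALE = 0.5, 1e9  # probability ceiling; effort scale in dollars

def p(total_effort):
    """Assumed probability of a positive singularity given total effort."""
    return CAP * (1 - math.exp(-total_effort / SCALE))

current = 2e8    # assumed effort already invested by everyone else
donation = 1e4   # one individual's contribution

# Exact marginal gain from one donation vs. twice the donation.
gain_1x = p(current + donation) - p(current)
gain_2x = p(current + 2 * donation) - p(current)

# Because donation << SCALE, the concave curve is locally linear:
# the ratio of gains is almost exactly 2, despite global diminishing returns.
print(gain_2x / gain_1x)
```

Globally the curve still flattens (the first comment's point), but over any one individual's range of possible contributions the slope is effectively constant, which is G0W51's point.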