Capla comments on Should we go all in on existential risk? - Considering Effective Altruism - Less Wrong Discussion

4 Post author: Capla 10 November 2014 11:23PM

Comments (43)

Comment author: Capla 11 November 2014 03:53:10PM *  0 points [-]

Should I go all in on either? Basic betting theory says no.

This is my issue. I'm not sure what justification we have for ignoring the theory, assuming we actually want to be maximally helpful. Can you elaborate?

Comment author: Metus 11 November 2014 07:34:53PM 1 point [-]

There is absolutely no justification for ignoring betting theory. It was formulated for turning money into more money, but it applies equally well to turning any one cardinal quantity into another. Some time ago there was an absurdly long article on here arguing that one should not diversify one's donations, but it assumed there is no risk, which makes the point moot.
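
The betting-theory point can be made concrete with the Kelly criterion, which picks the stake that maximizes expected log wealth. A minimal sketch (the win probability and odds below are purely illustrative, not anything from this discussion):

```python
import math

p, b = 0.6, 1.0  # assumed: 60% win probability, even (1:1) net odds

# Kelly fraction: the bankroll share that maximizes expected log growth
f_star = p - (1 - p) / b  # 0.2 for these numbers

def expected_log_growth(f):
    """Expected log of wealth after one bet when staking fraction f."""
    win = math.log(1 + f * b)
    lose = math.log(1 - f) if f < 1 else float("-inf")
    return p * win + (1 - p) * lose

print(expected_log_growth(f_star))  # positive: steady compounding
print(expected_log_growth(1.0))    # -inf: all-in eventually loses everything
```

Even for a favorable bet, the optimal stake is only a fraction of the bankroll; staking everything has expected log growth of negative infinity whenever the loss probability is nonzero, which is why "basic betting theory says no" to going all in.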

And even if there were no risk, my utility has diminishing marginal returns. I'll donate some to one cause until that desire is satisfied, then donate to another cause until that desire is satisfied, and so on. This has the dual benefit of supporting multiple causes I care about and of hedging against potentially bad metrics like QALYs.

Comment author: Capla 11 November 2014 08:15:20PM 1 point [-]

I don't understand.

There is absolutely no justification for ignoring betting theory.

and

And even if there were no risk, my utility has diminishing marginal returns. I'll donate some to one cause until that desire is satisfied, then donate to another cause until that desire is satisfied, and so on. This has the dual benefit of supporting multiple causes I care about and of hedging against potentially bad metrics like QALYs.

Aren't these mutually exclusive statements or am I misunderstanding? What is your position?

Comment author: Metus 11 November 2014 08:47:51PM 1 point [-]

What is your position?

Diversify, that is my position.

Aren't these mutually exclusive statements or am I misunderstanding?

Misunderstanding. Assuming risk, we have to diversify. But even assuming no risk, we have diminishing marginal utility from any one cause, so we should diversify there too, just as you don't spend all your money beyond necessities on any one good.

Comment author: Capla 11 November 2014 09:35:39PM 0 points [-]

But the reason I don't put all my money into one good (that said, I'm pretty close: after food and rent, it's just books, travel, and charity) is that my utility function has built-in diminishing marginal returns. I don't get as much enjoyment out of something I've already been doing a lot. If I am sincerely concerned about the well-being of others and effective charity, then there is no significant change in marginal impact per dollar I spend. While it is a fair critique that I may not actually care, I want to care, meaning I have a second-order term in my utility function that is not satisfied unless I am being effective with my altruism.
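
The contrast between personal consumption and large-scale charity can be sketched numerically. In this toy model (the utility functions and budget are hypothetical), each dollar goes greedily to whichever option currently has the highest marginal utility: with diminishing returns the budget spreads out, while with effectively constant marginal impact it all goes to the single best option:

```python
def allocate(budget, marginal_utility):
    """Give each unit to the option with the highest current marginal utility."""
    alloc = [0] * len(marginal_utility)
    for _ in range(budget):
        best = max(range(len(alloc)), key=lambda i: marginal_utility[i](alloc[i]))
        alloc[best] += 1
    return alloc

# Personal goods: each extra unit is worth less, so spending diversifies.
diminishing = [lambda x: 1.0 / (x + 1), lambda x: 0.9 / (x + 1)]
print(allocate(10, diminishing))  # [5, 5]

# Huge causes: one donor barely moves the needle, so marginal impact is flat
# and everything goes to the most effective cause.
constant = [lambda x: 1.0, lambda x: 0.9]
print(allocate(10, constant))  # [10, 0]
```

This is the shape of the disagreement: the diversification argument assumes the donor's utility is concave in each cause, while the counterpoint here is that for problems far larger than one donor's budget, the impact curve is effectively linear.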

Comment author: Metus 11 November 2014 11:21:31PM 0 points [-]

If I am sincerely concerned about the well-being of others and effective charity, then there is no significant change in marginal impact per dollar I spend.

Oh, you are sincerely concerned? Then of course any contribution you make to an efficient cause like world poverty will be virtually zero relative to the size of the problem, so spend away. But personally I can see people going "ten lives saved is good enough, let's spend the rest on booze." Further arguments could be made that it is unfair that only people in Africa get donations but not people in India, or similar.

But that only knocks down the marginal argument. The risk argument still stands and is stronger anyway.

While it is a fair critique that I may not actually care, I want to care, meaning I have a second-order term in my utility function that is not satisfied unless I am being effective with my altruism.

Signaling, signaling, signaling all the way down.

Comment author: Capla 12 November 2014 01:04:17AM *  0 points [-]

OK, fine, maybe it's signaling. I'm OK with that, since the part of me that does really care thinks "if my desire to signal leads me to help effectively, then it's fine in my book." But then I'm fascinated, because that part of me may itself be motivated by my desire to signal my kindness. It may be signaling "all the way down," but it seems to be alternating levels of signaling motivated by altruism motivated by signaling. Maybe it eventually stabilizes at one or the other.

I don't care. Whether I'm doing it out of altruism or for signaling (or, as I personally suspect, neither, but rather something more complex involving my choice of personal identity, which I suspect uses the neural architecture developed for playing status games, generalized to compare against an abstract ideal instead of against other agents), I do want to be maximally effective.

If I know what my goals are, what motivates them is not of great consequence.