AstroCJ comments on On Charities and Linear Utility - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (58)
If everyone were to take Landsburg's argument seriously, which would imply that all humans were rational, then everyone would donate solely to the SIAI. If everyone only donated to the SIAI, would something like Wikipedia even exist? I suppose the SIAI would have created Wikipedia if it was necessary. I'm just wondering how much important stuff out there was spawned by irrational contributions, and what the world would look like if such contributions had never been made. I'm also not sure how venture-capital growth funding differs from the idea of diversifying one's contributions to charity.
Note that I do not doubt the correctness of Landsburg's math. I'm just not sure it would have worked out given human shortcomings (even if everyone were maximally rational). If nobody were to diversify, everyone contributing to what seems to be the most rational option given the current data, then being wrong would be a catastrophe. Even maximally rational humans can fail, after all. This likely wouldn't be a problem if everyone contributed to a goal that could be verified rather quickly, but something like the SIAI could eat up the planet's resources and still turn out to be not even wrong in the end. Since everyone would have concentrated on that one goal (no doubt the most rational choice at the moment), might such a counterfactual world have been better off diversifying its contributions, or would the SIAI have turned into some kind of financial manager allocating those contributions, and thereby itself become a venture capitalist?
Downvoted.
For games with multiple interacting agents, the optimal strategy will usually involve some degree of weighted randomness. Suppose there are noncommunicating rational agents A, B, C, each with an unsplittable $1, and charities 1 and 2, both of which fulfil a vital function, but 1 requires $2 to function and 2 requires $1 to function. Both charities get funded only when exactly two of the three agents pick charity 1; if each picks it independently with probability p, that happens with probability 3p²(1−p), which is maximized at p = 2/3. So I would expect the agents to donate to 1 with p = 2/3.
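The calculation above can be checked numerically. This is a minimal sketch, assuming the model described in the comment (three independent agents, success only when the donations split exactly 2-to-1 in favor of charity 1); the function name and grid search are illustrative, not from the original:

```python
def success_probability(p: float) -> float:
    """Probability that exactly two of three agents choose charity 1,
    when each picks it independently with probability p: C(3,2) * p^2 * (1-p)."""
    return 3 * p**2 * (1 - p)

# Grid search over symmetric strategies to find the p that maximizes
# the chance that both charities end up funded.
best_p = max((i / 10_000 for i in range(10_001)), key=success_probability)

print(round(best_p, 3))                       # 0.667, i.e. p = 2/3
print(round(success_probability(2 / 3), 3))   # 0.444, i.e. 4/9
```

Note that even at the optimum the agents only succeed with probability 4/9; without communication, no symmetric strategy can do better.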
A rational agent is aware that other rational agents exist, and will take account of their actions.