multifoliaterose

However, when I take a "disinterested altruism" point of view, x-risk looms large: I would rather bring 100 trillion fantastic lives into being than improve the quality of life of a single malaria patient.

What's your break-even point for "bring 100 trillion fantastic lives into being with probability p" vs. "improve the quality of life of a single malaria patient," and why?
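
For concreteness, the break-even point is just the probability at which the expected values are equal. Writing $U_{\text{lives}}$ for the value of bringing the 100 trillion fantastic lives into being and $U_{\text{malaria}}$ for the value of improving one patient's quality of life (notation mine, introduced only for illustration):

$$p^{*} \cdot U_{\text{lives}} = U_{\text{malaria}} \quad\Longrightarrow\quad p^{*} = \frac{U_{\text{malaria}}}{U_{\text{lives}}}$$

On the stated preferences above, $U_{\text{lives}} \gg U_{\text{malaria}}$, so the implied break-even probability is tiny; the question is asking for the actual number and the reasoning behind it.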

The occasional contrarians who mount fundamental criticism do this with a tacit understanding that they've destroyed their career prospects in academia and closely connected institutions, and they are safely ignored or laughed off as crackpots by the mainstream. (To give a concrete example, large parts of economics clearly fit this description.)

I don't find this example concrete. I know very little about economics ideology. Can you give more specific examples?

It seems almost certain that nuclear winter is not an existential risk in and of itself, but it could precipitate a civilizational collapse from which it's impossible to recover (e.g. because we've already depleted too much of the low-hanging natural resource supply). This seems quite unlikely; maybe the chance conditional on nuclear winter is between 1 and 10 percent. Given that governments already consider nuclear war to be a national security threat, and that the probability seems much lower than x-risk due to future technologies, it seems best to focus on other things. Even if nothing direct can be done about x-risk from future technologies, movement building seems better than nuclear risk reduction.
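
Spelling out the implicit decomposition (notation mine), the overall risk from this route is the product of the chance of nuclear winter and the conditional chance of unrecoverable collapse:

$$P(\text{unrecoverable collapse}) = P(\text{nuclear winter}) \cdot P(\text{collapse} \mid \text{nuclear winter}), \qquad P(\text{collapse} \mid \text{nuclear winter}) \approx 0.01\text{–}0.10$$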

So part of what I think is going on here is that giving to statistical charity is a slippery slope. There is no one number that it's consistent to give: if I give $10 to fight malaria, one could reasonably ask why I didn't give $100; if I give $100, why not $1000; and if $1000, why not every spare cent I make? Usually when we're on a slippery slope like this, we look for a Schelling point, but there are only two good Schelling points here: zero and every spare cent for the rest of your life. Since most people won't donate every spare cent, they stick to "zero". I first realized this when I thought about why I so liked Giving What We Can's philosophy of donating 10% of what you make; it's a powerful suggestion because it provides some number between 0% and 100% which you can reach and then feel good about yourself.

There's another option which I think may be better for some people (though I don't know, because it hasn't been much explored). One can stagger one's donations over time (say, on a quarterly basis) and adjust the amount one gives based on how past donations have felt, as in the sketch below. It seems like this may locally maximize the amount one gives, subject to the constraint of avoiding moral burnout.
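
A minimal sketch of that staggered scheme, assuming a simple multiplicative update rule (the rule, its step size, and the `next_donation` name are all my own illustration, not anything specified in the comment):

```python
# Toy model of staggered, feedback-adjusted giving: donate quarterly and
# scale the amount by how the previous donation felt.

def next_donation(current_amount: float, comfort: float, step: float = 0.25) -> float:
    """Return next quarter's donation amount.

    comfort: subjective rating of how the last donation felt, from
             -1.0 (painful, interfering with one's lifestyle) to
             +1.0 (empowering, happy to give more).
    step:    how aggressively to adjust between quarters.
    """
    # Scale the donation up when giving felt good; taper it off
    # when it started to interfere with one's lifestyle.
    return max(0.0, current_amount * (1.0 + step * comfort))


# Example: start at $500/quarter and adjust from quarterly feedback.
amount = 500.0
for comfort in [0.8, 0.5, -0.3, 0.1]:  # hypothetical quarterly ratings
    amount = next_donation(amount, comfort)
    print(f"Next quarterly donation: ${amount:.2f}")
```

The design choice here is that the adjustment is gradual and symmetric: donations ratchet up while giving feels empowering and taper off before it becomes burdensome, which is one way to operationalize "avoiding moral burnout."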

If one feels uncomfortable with the amount that one is donating because it's interfering with one's lifestyle, one can taper off. On the flip side, I've found that donating gives the same pleasure that buying something does: a sense of empowerment. Buying a new garment that one realistically isn't going to wear, or a book that one realistically isn't going to read, feels good, but probably not as good as donating. This is a pressure toward donating more.

Cue: Non-contingency of my arguments (such that the same argument could be used to support conclusions which I disagree with).

Bob: "We shouldn't do question three this way; you only think so because you're a bad writer". My mouth/brain: "No, we should definitely do question three this way! [because I totally don't want to think I'm a bad writer]"

It's probably generally the case that the likelihood of rationalization increases with the contextual cue of a slight. But one usually isn't aware of this in real time.

I find this comment vague and abstract, do you have examples in mind?

GiveWell itself (it directs multiple dollars to its top charities per dollar invested, as far as I can see, and powers the growth of an effective philanthropy movement with broader implications).

There's an issue of room for more funding.

Some research on the model of Poverty Action Lab.

What information do we have from Poverty Action Lab that we wouldn't have otherwise? (This is not intended as a rhetorical question; I don't know much about what Poverty Action Lab has done).

A portfolio of somewhat outré endeavours like Paul Romer's Charter Cities.

Even in the face of the possibility of such endeavors systematically doing more harm than good due to culture clash?

Political lobbying for AMF-style interventions (Gates cites lobbying expenditures as among the foundation's very best), carefully optimized as expected-value charity rather than tribalism using GiveWell-style empiricism, with the collective action problems of politics offsetting the reduced efficiency and corruption of the government route.

Here too maybe there's an issue of room for more funding: if there's room for more funding, then why does the Gates Foundation spend money on many other things?

Putting money in a Donor-Advised Fund to await the discovery of more effective charities, or special time-sensitive circumstances demanding funds especially strongly.

What would the criterion for using the money be? (If one doesn't have such a criterion, then one holds off forever waiting for a better opportunity, and the fund has zero expected value.)

Saying that something is 'obvious' can provide useful information to the listener of the form "If you think about this for a few minutes you'll see why this is true; this stands in contrast with some of the things that I'm talking about today." Or even "though you may not understand why this is true, for experts who are deeply immersed in this theory this part appears to be straightforward."

I personally wish that textbooks more often highlighted the essential points over those theorems that follow from a standard method the reader is probably familiar with.

But here I really have in mind graduate- and research-level math, where there's widespread understanding that, a high percentage of the time, people are unable to follow someone who believes his or her work to be intelligible, and so people have a prior against such remarks being intended as a slight. It seems like a bad strategy for communicating with people who are not in such a niche.

Do you know of anyone who tried and quit?

No, I don't. This thread touches on important issues which warrant fuller discussion; I'll mull them over and might post more detailed thoughts on the discussion board later on.
