MichaelVassar comments on A critique of effective altruism - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
That deflates that criticism. For the object-level social dynamics problem, I think that people will not actually care about those problems unless they are incentivised to care about them, and it's not clear to me that this is possible.
What does the person for whom EA is easy look like? My first guess is someone who gets warm fuzzies from rigor. But then that suggests they'll overconsume rigor and underconsume altruism.
Is epistemology the real failing, here? This may just be the communism analogy, but I'm not seeing how the incentive structure of EA is lined up with actually getting things done rather than pretending to actually get things done. Do you have a good model of the incentive structure of EA?
Interesting. The critique you've written strikes me as more "nudging" than "apostasy," and while nudging is probably more effective at improving EA, keeping those concepts separate seems useful. (The rest of this comment is mostly meta-level discussion of nudging vs. apostasy, and can be ignored by anyone interested in just the object-level discussion.)
I interpreted the idea of apostasy along the lines of Avoiding Your Belief's Real Weak Points. Suppose you knew that EA being a good idea was conditional on there being a workable population ethics, and you were uncertain whether a workable population ethics existed. Then you would say "well, the real weak spot of EA is population ethics, because if that fails, the whole edifice comes crashing down." This way, everyone who isn't on board with EA because they're pessimistic about population ethics says "aha, Ben gets it," and perhaps people in EA say "hm, maybe we should take the population ethics problem more seriously." This also fits Bostrom's idea: you could tell your past self "look, past Ben, you're not taking this population ethics problem seriously, and if you do, you'll realize that it's impossible and EA is wasted effort." (And maybe another EAer reads your argument and is motivated to find that workable population ethics.)
I think there's a moderately strong argument for sorting beliefs by badness-if-true rather than badness-if-true times plausibility, because it's far easier to subconsciously nudge your estimate of plausibility than your estimate of badness-if-true. I want to say there's an article by Yvain or Kaj Sotala somewhere along the lines of "I hear criticisms of utilitarianism and think 'oh, that's just uninteresting engineering, someone else will solve that problem,' but when I look at other moral theories I think 'but they don't have an answer for X!' and conclude that sinks their theory, even though its proponents see X as just uninteresting engineering," which seems to me a good example of what differing plausibility assumptions look like in practice. Part of the benefit of this exercise seems to be listing out all of the questions whose answers could actually kill your theory/plan/etc., then looking at them together and asking "what is the probability that none of these answers go against my theory?"
Now, it probably is the case that the total probability is small. (This is a belief you picked because you hold it strongly and have thought about it a long time, not one picked at random!) But the probability may be much higher than it seems at first, because you may have dismissed an unpleasant possibility without fully considering it. (It also may be that by seriously considering one of these questions, you're able to adjust EA so that the question no longer has the chance of killing EA.)
As an example, let's switch causes to cryonics. My example of cryonics apostasy is "actually, freezing dead people is probably worthless; we should put all of our effort into making it legal to freeze live people once they get a diagnosis of a terminal condition or a degenerative neurological condition," and my example of cryonics nudging is "we probably ought to have higher fees / do more advertising and outreach." The first is much more painful to hear, and that pain is both what makes it apostasy and what makes it useful to actually consider. If it's true, the sooner you know, the better.
I think that this is an effective list of real weak spots. If these problems can't be fixed, EA won't do much good.