

Post author: Clarity 08 July 2015 04:34AM




Comment author: Telofy 12 July 2015 09:08:02AM 1 point

As someone said in another comment, there are the core tenets of EA, and then there is your median EA. Since you only seem to have quibbles with the latter, I’ll address some of those, though I don’t feel that accepting or rejecting them is particularly important for being an EA in the movement’s current form. We love discussing and challenging our views. Then again, I happen to agree with many median EA views.

which values people based on their contributions, not just their needs

VoiceOfRa put what I think is a median EA view very concisely here, but the comment is so deeply nested that I’m afraid it might get buried: “Even if he values human lives terminally, a utilitarian should assign unequal instrumental value to different human lives and make decisions based on the combination of both.”

I don’t think EAs do a very good job of distinguishing their moral intuitions from good philosophical arguments

I think this has been mentioned in the comments, but not very directly. The median EA view may be not to bother with philosophy at all, because the branches that still call themselves philosophy haven’t managed to reach consensus on central issues over the centuries, so there is little hope that an individual EA could do so.

However, when I talk to EAs who do have a background in philosophy, I find that a lot of them are metaethical antirealists. Lukas Gloor, who also posted in this thread, recently convinced me that antirealism, though admittedly unintuitive to me, is the more parsimonious view and thus the view under which I now operate. Under antirealism, moral intuitions, or at least some core ones, are all we have, so there can be no philosophical arguments for them (and thus no good or bad ones).

Even if this is not a median EA view, I would argue that most EAs act in accordance with it simply out of concern for the cost-effectiveness of their movement-building work. It is not cost-effective to try to convince everyone of the most unintuitive inferences from one’s own moral system. However, among the things that are important to the individual EA, many are quite uncontroversial in most of society, and focusing one’s “evangelical” EA work on those is much more cost-effective.

Betting on a particular moral philosophy with a percentage of your income shows an immense amount of confidence, and extraordinary claims require extraordinary evidence.

From my moral vantage point, the alternative (I’ll consider a different counterfactual in a moment) of keeping the money to spend on myself, where its marginal positive impact on my happiness is easily two or three orders of magnitude lower and my uncertainty over what will make me happy is only slightly lower than with some top charities, would be the much more extraordinary claim.

You could break that down and note that in the end I’m not deciding merely to “donate effectively” but to support one very specific intervention and charity, for example Animal Equality, which makes my decision much shakier again. But I’d also have to make similarly specific, probably only slightly less shaky, decisions when trying to spend money on my own happiness.

However, the alternative might also be:

keeping your money in your piggy bank until more obvious opportunities emerge

That’s something the median EA has probably considered a good deal. Even at GiveWell, there was a time in 2013 when some of the staff pondered whether it would be better to hold off on their personal donations and donate a year later, once they had discovered better giving opportunities.

However, several of your arguments seem to stem from uncertainty in the sense of “there is substantial uncertainty, so we should hold off doing X until the uncertainty is reduced.” Trading off these elements in an expected value framework and choosing the right counterfactuals is probably again a rather personal decision when it comes to investing one’s donation budget, but over time I’ve become less risk-averse and more ready to act under some uncertainty, which has hopefully brought me closer to maximizing the expected utility of my actions. Plus, I don’t expect any significant decreases in uncertainty about the best giving opportunities that I could wait for; there will hopefully be more opportunities with similar or only slightly greater levels of uncertainty, though.

Comment author: MarsColony_in10years 13 July 2015 02:28:25AM 0 points

Trading off these elements in an expected value framework ... is probably again a rather personal decision

If you aren't aware of the relevant decision theory, then I have good news for you!

I'm not sure this is true, at least in the narrow case of rationalists trying to make maximally effective decisions based on well-defined uncertainties. In principle, at least, it should be possible to calculate the value of information. Decision theory has a concept called the expected value of perfect information: the difference between the expected value of deciding after the uncertainty is resolved and the expected value of the best decision you could make now. If you're not 100% sure of something, but the cost of obtaining information is high (which it generally is in philosophy, as evidenced by the somewhat slow progress over the centuries) and giving opportunities are shrinking (which they are in many areas, as conditions improve), then you probably want to risk giving sub-optimally by giving now rather than later. The price of the information is simply higher than its expected value.
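To make that concrete, here is a minimal sketch of the calculation in Python. The two giving options, the two world states, and all the numbers are invented purely for illustration; they don't come from GiveWell or anyone in this thread.

    # Minimal sketch of the expected value of perfect information (EVPI).
    # EVPI = E[best action chosen knowing the true state]
    #        - E[best action chosen under current uncertainty]
    # All states, actions, and numbers below are hypothetical.

    probabilities = [0.6, 0.4]  # P(state 1), P(state 2)

    # utilities[action] = [value in state 1, value in state 2]
    utilities = {
        "give_now": [100, 40],
        "hold_off": [60, 80],
    }

    # Without further information: pick the action with the best
    # expected utility under the current probabilities.
    ev_without_info = max(
        sum(p * u for p, u in zip(probabilities, util))
        for util in utilities.values()
    )  # give_now: 0.6*100 + 0.4*40 = 76 (beats hold_off's 68)

    # With perfect information: in each state we would learn the truth
    # first and then pick the best action for that state.
    ev_with_info = sum(
        p * max(util[i] for util in utilities.values())
        for i, p in enumerate(probabilities)
    )  # 0.6*100 + 0.4*80 = 92

    evpi = ev_with_info - ev_without_info  # 92 - 76 = 16
    print(f"EVPI, the most the information could be worth: {evpi:.1f}")

If the information would cost more than that (in money, time, or shrinking giving opportunities), you act now and accept the risk of a sub-optimal choice.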

Unfortunately, you might still need to make a judgement call to guesstimate the values to plug in.

Comment author: Telofy 14 July 2015 11:46:38AM 0 points

Thanks! I hadn’t seen the formulae for the expected value of perfect information before. I haven’t taken the time to think them through yet, but maybe they’ll come in handy at some point.