So does spreading rationality contribute to Effective Altruism? I certainly think so, as a rationality popularizer and an Effective Altruist myself. My own donations of money and time are focused on my project, Intentional Insights, which tries to spread rationality to a broad audience and thus raise the sanity waterline, including around effective, evidence-based philanthropy. Specifically in relation to EA, I make sure to highlight EA as an awesome thing to get involved in, both in blogs for Intentional Insights and on our resources page.

I'd particularly appreciate feedback on a draft fundraising letter (link here) for Effective Altruists about how Intentional Insights contributes to improving the world, specifically by getting people more engaged with Effective Altruism. I'd like to hear any thoughts on how I can optimize the letter to make it more effective. You can simply respond in the comments, or send an email to gleb@intentionalinsights.org.

I'd also like to hear your opinion on the broader issue of how spreading rationality helps improve the world and contributes to the EA movement in particular. Let me share my take. On the first, I think that, as Brian Tomasik shows in this essay, increasing rational thinking is robustly positive for a broad range of short- and long-term outcomes, and thus our broader work contributes to improving people's lives overall. On the second, getting people to think rationally about themselves and their interactions with the world, and to use evidence-based means to evaluate reality and make their decisions, will lead them to apply these methods of thinking to their altruism.

What do you think?


 


So does spreading rationality contribute to Effective Altruism? I certainly think so

That's a personal anecdote, not data. Do you have evidence?

getting people to think rationally about themselves and their interactions with the world, and to use evidence-based means to evaluate reality and make their decisions, will lead them to apply these methods of thinking to their altruism.

It might, but the consequences are not obvious :-/ For example, I can easily see someone who, on realizing that his past charitable giving was all about sending signals to his social circle, decides just to stop doing it and spend the money on, say, self-learning resources.

Altruism is, basically, a value, and rationality does not tell you which values you should have.

Elo

Altruism is, basically, a value, and rationality does not tell you which values you should have.

I think the clearer question might be: does rationality lead to effective altruism?

An easy answer might be that rationality applied to altruism leads to effective altruism; but rationality applied to life might not lead to altruism, and might even lead away from it.

An easy answer might be that rationality applied to altruism leads to effective altruism

I am not sure even about that: the version of EA popular around here is quite utilitarianistic, if there's such a word, and tends to assume things (like the value of random humans somewhere in Africa) that do not directly follow from either rationality or altruism.

Elo

I certainly understand the questionable value of random humans.

However, if you assume the fixed presence of altruism (and that rationality isn't going to redirect the person to something else, as mentioned above with the self-learning resources), I still think rationality can be applied to that altruism, and doing so would lead to at least a consideration of more effective altruism, and only possibly to the most effective altruism that the EA movement promotes.

When applied: giving $5 to the homeless person nearby may feel like an altruistic act; when questioned, giving $5 worth of food might be a more altruistically helpful act for the wellbeing of that person (although, debatably, giving money might help more, or giving clothes might help more, depending on the situation, etc.).

There is a place for rationality applied to altruism, but rationality applied to life may not yield altruism.

On a simple level, if rationality is taken as the achievement of "winning at life", a person's definition of winning at their life may not include helping others or altruistic purposes/processes.

Is that a fair assessment?

There is a place for rationality applied to altruism, but rationality applied to life may not yield altruism.

Yes, I think it's a fair statement.

[anonymous]

For example, I can easily see someone who, on realizing that his past charitable giving was all about sending signals to his social circle, decides just to stop doing it and spend the money on, say, self-learning resources.

Yes, this sounds about like me and most people I know, except that we were not giving at all to begin with; if we had been, it would have been signalling.

I don't 100% understand the usual emotional reasons behind charitable giving. I do get that in the US, or parts of it, social pressure and status play a role.

But probably a more scalable basis is feeling you have a surplus. To scale it up and out, to different cultures etc., you need to convince people they have a surplus they don't need.

One thing that would really help there is if it were actually true.

Alternatively, low-unit-of-measure charities, i.e. those that can do something useful with a unit as small as 5-10 dollars/euros, so that contributors feel they did not just add to a sum that makes something happen, but personally did someone some good.

I do get that in the US, or parts of it, social pressure and status play a role.

Connect the dots to the widespread I-truly-believe Christianity in the US. A LOT of charitable giving in the US is driven by religion.

[anonymous]

Yes, but at LW it is remarkable that a bunch of atheists did not even ask whether one should be an altruist at all, but went straight to how to do it effectively. It went without saying that you are still an altruist. Hypothesis: a feeling of surplus.

If you give a nerd a sufficiently interesting optimization problem, in any domain, he will start trying to figure out how to optimize it without asking whether that is the right thing to optimize. This is a special case of nerd sniping.

Regarding whether spreading rationality contributes to EA directly, I do not have evidence in the sense of data, only in the sense of logic. From a probabilistic-thinking perspective, getting people to think more rationally, in an evidence-based manner, about their charitable giving is likely to lead them to give more effectively, and the EA movement is the best outlet for such charitable giving. I agree there is a danger of the kind you describe about social-circle signaling, but we can't be sure without actually testing this with experiments and getting actual evidence about what the world looks like.

Furthermore, as Brian Tomasik describes in his essay, getting people to think more rationally would result in people having better lives overall and flourishing, which is the point of the EA movement as a whole.

So spreading rationality would certainly contribute to the outcome desired by the EA movement: global flourishing and well-being. I'd also say it would be likely to contribute to the EA movement itself, per my first statement above, but that's a matter for experiment.

I'm not entirely sure who the audience of this letter is (I'm given to understand "effective altruists" is a pretty heterogeneous group). This affects how your letter should look so much that I can't give much object-level feedback. For instance, it matters how much of your audience has pre-existing familiarity with things like raising the sanity waterline and rationality as a common interest across causes; if most of them lack this familiarity, I expect they'll read your first sentence, be unable to bridge an inferential gap, and stop reading.

Ideally, I'd like to know how exactly this letter is getting to its recipients: are you posting it on the EA Forum, or mailing it to anyone who's donated to GiveWell?

The letter would be passed to people involved in the EA movement who are interested in knowing about Intentional Insights and our efforts to spread rationality, so they would be heterogeneous but more rationality-oriented than most. But I think you're right about the inferential gap; I'll need to work on rewording that section, thank you!

If by "spreading rationality" you mean spreading LW material and ideas, then a potential problem is that it causes many people to donate their money to AI friendliness research instead of to malaria nets. Although these people consider this to be "effective altruism", as an AI skeptic it's not clear to me that this is significantly more effective than, say, donating money to cancer research (as non-EA people often do).

My goal is convincing people to have clearer, more rational, evidence-based thinking, as informed by LW materials. Some people may choose to donate to AI research, and others to EA; as you can see from the blog I cited, I specifically highlight the benefits of the EA movement. Regardless, as Brian Tomasik points out, helping people be more rational contributes to improving the world, and thus to the ultimate goal of the EA movement.

[anonymous]

Regardless, as Brian Tomasik points out, helping people be more rational contributes to improving the world, and thus to the ultimate goal of the EA movement.

I agree that increasing rationality would improve the world, but would it improve the world more than other efforts? I believe you will face stiff competition from MIRI for effective altruists' charitable donations. From the Brian Tomasik essay you referenced…

…because AI is likely to control the future of Earth’s light cone absent a catastrophe before then, ultimately all other applications matter through their influence on AI.

Separately…

Is encouraging philosophical reflection in general plausibly competitive with more direct work to explore the philosophical consequences of AI? My guess is that direct work like MIRI’s is more important per dollar.

Why should I support Intentional Insights instead of MIRI? I'm sure I won't be the only potential donor to ask this question, so I recommend that you craft a solid response.

Excellent, thank you for the feedback on what to craft! I will think about this further, and appreciate the ideas!

My goal is convincing people to have clearer, more rational, evidence-based thinking, as informed by LW materials.

Is there an objective measure by which LW materials inform more "clear and rational" thought? Can you define "clear and rational"? Or actually, to use LW terminology, can you taboo "clear" and "rational" and restate your point?

Regardless, as Brian Tomasik points out, helping people be more rational contributes to improving the world, and thus to the ultimate goal of the EA movement.

But does it contribute to improving the world in an effective way?

Well, I'd say that "clear and rational" is the same as "arriving at the correct answer to make the best decision to refine and achieve goals." So yes, I would say it does contribute to improving the world in an effective way: helping people both understand their goals better (refine them) and then achieve those goals helps them have better lives, and thus improves flourishing.

Do you have any evidence that LW materials help people refine and achieve their goals?

Helping people refine and achieve their goals is pretty damn difficult: school boards, psychiatrists, and welfare programs have been trying to do this for decades. For example, are you saying that teaching LW material in schools will improve student outcomes? I would bet very strongly against such a prediction.

There's actually quite a bit of evidence that helping students refine and achieve their goals helps them learn better; for example, here.

There's also quite a bit of reason to be skeptical of that evidence. Here's slatestarcodex's take: http://slatestarcodex.com/2015/03/11/too-good-to-be-true/

Yup, I'm aware of Scott's dislike of the growth mindset hypothesis; he's a bit at the extreme end on that one. However, even in the post itself, he notes that there are several studies showing the benefits of teaching students to be goal-oriented. There's also lots of research showing that teaching students metacognition is helpful; for example, this chapter cites a lot of studies. I'd say that overall the probabilistic evidence supports the hypothesis that teaching people to be goal-oriented and self-reflective about their ways of achieving their goals will help them achieve better results.

Okay, let's suppose for a second that I buy that teaching students to be goal oriented helps them significantly. That still leaves quite a few questions:

  1. Many school boards already try to teach students to be goal oriented. Certainly "list out realistic goals" was said to me countless times in my own schooling. What do you plan to do differently?

  2. There seems to be no evidence at all that LW material is better for life outcomes than any other self-help program, and some evidence that it's worse. Consider this post (again by Scott): http://lesswrong.com/lw/9p/extreme_rationality_its_not_that_great/

I plan to actually teach students how to be goal-oriented. It's the difference between telling people to "lose weight" and specifically giving them clear instructions for how to do so. Here is an example of how I do this in a videotaped workshop.

I would like to take an experimental attitude to LW content, and look forward to seeing the results of my experiments. I don't intend to do the extreme rationality stuff, or to expect more of it than it can deliver. We'll see, I guess :-)