Another example of weird charitable impulses: people contributed a lot (possibly between $700,000 and a million dollars) to a fund for Baby Jessica, a two-year-old who had been trapped in a well for 2 1/2 days.
There were medical consequences, but the money wasn't used for them. It was put into a fund that she won't get until she's 25.
Damn, does SIAI have any kids they can push down a well?
... I can never run for public office.
A more practical and simpler (and possibly legal) idea for abusing knowledge of irrational charity: instead of asking for money to save countless children, ask for money to save one specific child.
If one circulated a message on the internet saying that donations could save the life of a specific child, and then used the money for something unrelated, there would obviously be laws against that. But if you simply, say, (A) lied about why the child was in danger of dying, (B) overstated the amount of money needed, (C) left out the nationality of the child, and (D) used the money to save a large number of children, do you think a court would convict?
Directing the money towards some cause where the child-saving is much less direct, like technological research or SIAI, would probably get hit for lying, but for something like fighting malaria it might be incredibly useful.
> If one circulated a message on the internet saying that donations could save the life of a specific child, and then used the money for something unrelated, there would obviously be laws against that. But if you simply, say, (A) lied about why the child was in danger of dying, (B) overstated the amount of money needed, (C) left out the nationality of the child, and (D) used the money to save a large number of children, do you think a court would convict?
You have just rediscovered the idea, "I know, why not just lie!" On which, see this.
I predict that (a) you would be found out, (b) if it came to court, the court would convict (fraud in a good cause is still fraud), and (c) so would the forum of public opinion.
ETA: See also.
paulfchristiano,
I'm probably one of the people you're trying to reach. I want to help people and I am aware that the approaches favoured by society may well not work. But doing the right thing is really hard.
However many arguments you present to me, it's still really hard. For me, it's not a lack of argument that stands in the way of doing the right thing.
What I want is a community of rationalists who are interested in helping others as much as possible. Does such a thing already exist, ready-made? Either as a subset of LW or independent of it?
I can't help feeling that such a thing would help your cause immensely. However good your arguments are, people will want to know "what do I do next?" And they will be much happier with the answer "come meet my friends and throw some ideas around" than "give all your money to SIAI".
> However many arguments you present to me, it's still really hard. For me, it's not a lack of argument that stands in the way of doing the right thing.
I don't think you are quite the sort of person I'm trying to reach with this effort; you seem to already be in the mental state I wish more people were in.
I have been spending a great deal of time recently thinking about exactly what I should be doing. I agree completely that the problem is extremely hard. I understand that the plan I come up with is going to be far, far from the best possible plan. Here are some thoughts, which are not yet very well formed and need to be articulated much more precisely in the future.
Creating and growing communities of rationalists thinking about and working on the problem seems like an extremely good first step. I believe such communities are much more likely to succeed when they are started with concrete projects to direct their energies towards (I think Eliezer discusses this at some point within the sequences). Telling someone to become more rational "because" seems pretty futile; telling them to become more rational because they are going to need it to pull their weight is something else entirely.
What projects should they work on? I think understanding and engaging in effective, large-scale rationality outreach is tractable and more important than almost anything else we can work on. I have described some of my thoughts recently on LW.
To motivate this sort of outreach (eventually to the targets of outreach, but more immediately to the people who you would like to convince to engage in it), at some point you need to have in mind goals which aren't just recruiting more rationalists. I think there are currently very important problems controlling the speed, quality, and responsibility of research in the immediate future, which rationalists should be working on aggressively. For example:
These efforts in turn must be justified by arguments regarding the importance of responsible and rapid research advances, which would inevitably expand to even greater length than the preceding paragraph. To someone who already takes the SIAI seriously, this part of the argument is probably comparatively easy: if you care about the future, rational research looks very different from modern research and is very important.
The upshot is: I am now thinking hard about how to create communities of people working as well as they can for the good of humanity. Justifying such an undertaking (as a significant change in someone's day-to-day life) is going to be quite involved, and so getting people to take the idea seriously requires, at a minimum, getting them to the point where they are willing to spend a very long time thinking about what exactly they should be doing. That is what this post addresses, though it is a tiny part of a large problem.
This probably is a bit late, but in a general sense Effective Altruism sounds like what you're looking for, although the main emphasis there is on the "helping others as much as possible" part rather than the "rationalists" part; there's still a significant overlap in the communities, though. If both LW and EA are too general for you and you want something with both rationality and utilitarian altruism right in its mission statement... I'm sure there's some blog somewhere in the rationalist blogosphere which is devoted to that specifically, although it might be just a single person's blog rather than a community forum.
Incidentally, if you did find (or found) a specific community along those lines, I'd be interested in joining it myself.
> What I want is a community of rationalists who are interested in helping others as much as possible. Does such a thing already exist, ready-made? Either as a subset of LW or independent of it?
It depends where you live. If you're in NY, you're in luck. I hear a lot about London and Cambridge too, but I know less about them.
Great article! Having all the best arguments in one place will make it far easier to remember to persuade people of things like this, and to think of how to do it. I'm currently printing this out to carry around everywhere. Actually... speaking of printing things out, this seems like it would be great in pamphlet form to hand out to people, though obviously the tone would have to be shifted and explanations would have to be written. What do other people think of that?
Just want to mention, regarding #8: after a year and a half of reading LW and the like, I still haven't accomplished this one. Admittedly this is more of a willpower/challenge thing (similar to a "rationality technique") than just an idea I dispute, and there might be cases where simply convincing someone to agree that it's important would get them past the point of what you term "philosophical garbage", where they go "huh, that's interesting". But it's still hard.
Granted, I should mention that I at least hope LW stuff will affect how I act once I graduate college, get a job, and start earning money beyond what I need to survive. I was already convinced that I ought to donate as much as possible to various causes, but LW has probably affected which causes I'll choose.
I have been thinking recently about how to get other people to do the things I want them to do, because it seems like overwhelmingly the best way to get those things done.
I typically encounter two difficulties off the bat: convincing people that they should care about helping others at all, and convincing them that helping others effectively requires doing something quite different from typical philanthropy.
One interesting observation is that people typically reject either one or the other. Some people agree that if they wanted to help other people, they should do rather extreme and unusual things, but claim not to care about other people. Some people agree that helping other people is very important, but think that typical philanthropic activity is reasonably effective (so effective that it's not worth spending much time to optimize further). I think this is a consistent quirk of human nature: people encounter a conclusion they don't like and a bunch of premises that seem reasonable, and they choose one premise to reject even though they could be manipulated into accepting the others. This is probably a useful thing to make a mental note of and try to exploit productively.
That observation aside, I think the easiest plan is to talk to smart people who already care deeply about other humans, but don't think too hard about how to act strategically on that preference. One approach is to present an idea very concisely which strongly suggests that 5 minutes of thought about strategic behavior is warranted. With the 5 minutes, maybe an idea can be communicated which implies that an hour of thought about strategic behavior is warranted. With an hour, maybe more progress can be made. And so on.
I would like to brainstorm ideas with a good ratio of "amount of re-evaluation of priorities and strategies they inspire in someone who is unconsciously trying very hard to avoid changing their behavior" to "amount of time required to communicate them to someone who is attentive but unconsciously trying very hard to avoid changing their behavior." Perhaps more important are thoughts about how to present these ideas efficiently. A lot of what I am about to say is somewhat redundant with other content at LW: try to consider it particularly in the context of strong time limitations, and of trying to talk to someone who is not fundamentally interested in becoming rational.
1. Rational philanthropy. Present two or more activities many people engage in (donating to different charitable organizations, donating vs. volunteering, solving the same social problem in different ways), together with a concise and maximally incontrovertible argument that one of those activities is significantly better than the others, and that anyone who does the others is wasting their energy. Suggest that the collective behavior of society is a horrible marker for what is effective. Suggest that to the extent that socially typical activities are optimally useful, it is coincidental. Suggest that if some people do very little good in the world because they don't think enough, we may also do very little good (compared to our potential) because we don't think enough.
2. Value of technological progress. Present a plausible deductive argument that either (A) the value of increasing the speed of technological progress is immensely higher than the value of doing traditional philanthropic work or (B) the value of controlling the direction of technological progress is immensely higher than the value of doing traditional philanthropic work. Suggest that, regardless of how their careful consideration of the argument turns out, it could cause a complete change in priorities. Suggest further that, if the argument fails and the listener doesn't have to change their behavior, it's not because the listener has considered all the possibilities at length and chosen the best one. Indeed, the majority of people engaged in philanthropic work haven't; to the extent that their behavior is maximally effective, it is coincidental. The potential difference between "maximally effective" and the default is incredibly large.
2.5. Any other extremely important considerations which you can plausibly argue might change someone's priorities completely. That was the best one I could think of, but having more seems good. Even if the listener ultimately rejects the argument, if you can maintain even for a couple of minutes the possibility that this idea (which they haven't even given a minute's thought) could change their views, then you might be able to get some leverage out of it.
3. Scope insensitivity. Point out that scope insensitivity exists and can do ridiculous things to people's judgments, using the most quickly convincing empirical evidence available; so far the best I have heard (probably from EY, though I don't know) is a study about saving wildlife, whose details I should surely know by heart if there is no more effective study (and whose methodological soundness I should briefly confirm before using it in such an argument). Point out that a billion people is a lot, and it's unlikely that human intuition ever comes up with the right answer on questions that concern a billion people (much less larger numbers).
4. Society is generally insane. Offer any evidence that suggests that, especially when a market isn't filtering out the bad ideas, social consensus can fail catastrophically. Suggest that any beliefs derived from social consensus should be questioned, and that even very good ideas don't actually propagate through society that quickly when they don't help people make money.
These arguments are all more towards the quick end of the spectrum, because that's what I think about most right now. The hope is that if you can win a little more actual introspection from a listener, you can make more progress. The other stages are better covered by existing material on LW, but I think it is worth looking at exactly what you would do with, say, thirty minutes of introspection. I feel like I have less to contribute here, so I will just mention some things briefly.
5. Examples of common biases. This is a harder argument to make than LW collectively seems to think. It is not obvious to me that my hindsight bias is a big deal; you need to really convince me that the quality of my reasoning is important and that a particular bias affects it in a significant way.
6. Evidence that people rarely change their mind. This seems like the most important one, but it again is a fairly hard argument. You need to actually convince the listener that they personally should be changing their mind more often than they do, and that they are suffering significantly (or are significantly less effective at helping others) as a result.
7. The importance of thinking about the value of future humans. This is also difficult, because discussions along these lines seem invariably to get pulled into the category of "philosophical garbage" (about which people are happy to talk at great length without thinking very much or updating any beliefs) rather than discussions about what to actually do with yourself tomorrow. But thinking explicitly about the value of the future is not too far-fetched for most people to stomach, and realizing how important this question is (whatever the answer turns out to be) may suggest how important it could be to think about other previously unconsidered things.
These ideas are towards the middle of the range. I expect that, carefully constructed, they could get you some mileage in the course of a single conversation (starting from some easier topics). Here are what I imagine as the last and hardest things to sell, though I can't say anything about how to do it.
8. Let your thoughts about what you should do control what you actually do. Seriously, when you walk away from this, let it have some effect on your behavior.
9. You have to make decisions in the face of incredible uncertainty. You don't get to abstain from having beliefs just because thinking about the future is hard. Learn to think precisely about uncertainty, and then do it. Don't reject a belief just because it sounds ridiculous.