Sometimes, new technical developments in the discourse around effective altruism can be difficult to understand if you're not already aware of the underlying principles involved. I'm going to try to explain the connection between one such new development and an important underlying claim. In particular, I'm going to explain the connection between donor lotteries (as recently implemented by Carl Shulman in cooperation with Paul Christiano)1 and returns to scale. (This year I’m making a $100 contribution to this donor lottery, largely for symbolic purposes to support the concept.)

I'm not sure this explainer adds much to Carl's original post on making bets to take advantage of returns to scale. Please let me know whether you think it added anything or not.

What is a donor lottery?

Imagine ten people each have $1,000 to give to charity this year. They pool their money, and draw one of their names out of a hat. The winner gets to decide how to give away all $10,000. This is an example of a donor lottery.

More generally, a donor lottery is an arrangement where a group of people pool their money and pick one person to give it away. This selection is randomized so that each person has a probability of being selected proportional to their initial contribution.
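This selection rule can be sketched in a few lines of Python; the names and contribution amounts below are made up for illustration:

```python
import random

# Hypothetical pool: each entrant's chance of winning is
# proportional to their contribution.
contributions = {"Alice": 1000, "Bob": 1000, "Carol": 3000}
pot = sum(contributions.values())

# random.choices draws with probability weight / total weight,
# so Carol here wins 60% of the time, Alice and Bob 20% each.
winner = random.choices(
    population=list(contributions),
    weights=list(contributions.values()),
)[0]

print(f"{winner} decides how to give away the full ${pot:,}")
```

Note that every entrant's expected dollars-allocated equals their contribution; the lottery only changes the variance, not the expectation.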

Selfish reasons to gamble

Let's start with the case of a non-charitable expenditure. Usually, for consumption decisions, we have what economists call diminishing marginal utility. This is because we have limited ability to consume things, and also because we make the best purchases first.

Food is an example of something we have limited appetite for. After a certain point, we just aren't hungry anymore. But we also buy the more important things first. Your first couple dollars a day make the difference between going hungry and having enough food. Your next couple dollars a day go to buying convenience or substituting higher-quality foods, which is a material improvement, but nowhere near as big as the difference between starving and being fed.

To take a case that's less universal, but in which the principle may be easier to see, let's say I'm outfitting a kitchen, and own no knives. I can buy one of two knives – a small knife or a large one. The large knife can do a good job cutting large things, and a bad job cutting small things. The small knife can do a good job cutting small things, and a bad job cutting large things. If I buy one of these knives, I get the benefit of being able to cut things at all for both large and small items, plus the benefit of being able to cut things well in one category. If I buy the second knife, I only improve the situation by the difference between being able to cut things poorly in one category, and being able to cut them well. This is a smaller difference. I'd rather have one knife with certainty, than a 50% chance of getting both.

But sometimes, returns to consumption are increasing. Let's say that I have a spare thousand dollars after meeting all my needs, and there's only one more thing in the world I want that money can buy – a brand-new $100,000 sports car, unique enough that there are no reasonable substitutes. The $1,000 does me no good at all, $99,000 would do me no good at all, but as soon as I have $100,000, I can buy that car.

One thing I might want to do in this situation is gamble. If I can go to a casino and make a bet that has a 1% chance of multiplying my money a hundredfold (ignoring the house's cut for simplicity), then this is a good deal. Here's why. In the situation where I don't make the bet, I have a 100% chance of getting no value from the money. In the situation where I do make the bet, I have a 99% chance of losing the money, which I don't mind since I had no use for it anyway, but a 1% chance of being able to afford that sports car.

But since in practice the house does take a cut at casinos, and winnings are taxed, I might get a better deal by pooling my money together with 100 other like-minded people, and selecting one person at random to get the car. This way, 99% of us are no worse off, and one person gets a car.
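With these toy numbers, the comparison is easy to check: under a threshold ("lumpy") utility function, keeping the stake is worth nothing, while the pooled bet has positive expected utility. A sketch, where the utility function is an illustrative assumption, not a claim from the post:

```python
# Threshold utility: the money is worthless unless it reaches the
# $100,000 price of the car.
def utility(dollars: int) -> int:
    return 1 if dollars >= 100_000 else 0

stake = 1_000
n = 100  # pool of 100 like-minded people

# Keeping the stake: certain utility of zero.
ev_keep = utility(stake)

# Pooling: a 1/n chance of holding the whole pot, which clears the
# threshold; otherwise nothing, which we didn't mind losing anyway.
ev_pool = (1 / n) * utility(stake * n) + (1 - 1 / n) * utility(0)

assert ev_keep == 0
assert ev_pool == 0.01
```

The same arithmetic explains why a fair private pool beats a casino here: the casino's cut and taxes shave the 1/n payoff below the threshold-clearing amount, while the pool pays out in full.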

The sports car scenario may seem far-fetched, especially once you take into account the prospect of saving up for things, or unexpected expenses. But it's not too far from the principle behind the susu, or ROSCA:

Susus are generally made up of groups of family members or friends, each of whom pledges to put a certain amount of money into a central pot each week. That pot, presided over by a treasurer, whose honesty is vouched for by his or her respected standing among the participants, is then given to one member of the group.
Over the course of a susu's life, each member will receive a payout exactly equal to the total he has put in, which could range from a handful of dollar bills to several thousand dollars; members earn no interest on the money they set aside. After a complete cycle, the members either regroup and start over or go their separate ways.

In communities where people either don't have access to savings or don't have the self-control to avoid spending down their savings on short-run emergencies, the susu is the opposite of consumption smoothing - it enables participants to bunch their spending together to make important long-run investments.2

A susu bears a strong resemblance to a partially randomized version of a donor lottery, for private gain.
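The rotation structure described in the quote can be sketched as follows; the member names and dues are hypothetical:

```python
members = ["Ama", "Ben", "Cam", "Di"]  # hypothetical participants
weekly_dues = 25  # dollars per member per week

weeks = len(members)       # one full cycle
pot = weekly_dues * weeks  # what the treasurer hands out each week

# Each week, the whole pot goes to the next member in rotation.
schedule = {week + 1: member for week, member in enumerate(members)}

# Over the cycle, every member pays `weeks` rounds of dues and
# collects the pot exactly once -- payout equals pay-in, no interest.
for member in members:
    total_paid = weekly_dues * weeks
    assert total_paid == pot
```

The only difference from a donor lottery is that the "winner" rotates deterministically (or semi-randomly) until everyone has had a turn, rather than being drawn once.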

Gambling for the greater good

Similarly, if you’re trying to do the most good with your money, you might want to take into account returns to scale. As in the case of consumption, the "normal" case is diminishing returns to scale, because you're going to want to fund the best things you know of first. But you might think that the returns to scale are increasing in one of two ways:

  • Diminishing marginal costs
  • Increasing marginal benefits

Diminishing marginal costs

Let’s say that your charity budget for this year is $5,000, and your best guess is that it will take about five hours of research to make a satisfactory giving decision. You expect that you’ll be giving to charities for which $5,000 is a small amount, so that they have roughly constant returns to scale with respect to your donation. (This matters because what we care about are benefits, not costs.) In particular, for the sake of simplicity, let’s say that you think that the best charity you’re likely to find can add a healthy year to someone’s life for $250, so your donation can buy 20 life-years.

Under these circumstances, suppose that someone you trust offers you a bet with a 90% probability of getting nothing, and a 10% probability of getting back ten times what you put in. In this case, if you make a $5,000 bet, your expected giving is 10% * 10 * $5,000 = $5,000, the same as before. And if you expect the same impact per dollar up to $50,000, then if you win, your donation saves $50,000 / $250 = 200 life-years for beneficiaries of this charity. Since you only have a 10% chance of winning, your expected impact is 20 life-years, same as before.

But you only need to spend time evaluating charities if you win, so your expected time expenditure is 10% * 5 = 0.5 hours. This is strictly better – you have the same expected impact, for a tenth the expected research time.
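Spelling out the arithmetic from the last three paragraphs (all figures are the made-up ones above):

```python
budget = 5_000            # dollars to give this year
cost_per_life_year = 250  # assumed price of a healthy life-year
research_hours = 5        # time to make a satisfactory decision
p_win = 0.10              # bet: 10% chance of 10x, else nothing
multiplier = 10

# No bet: donate the budget and do the research for certain.
impact_no_bet = budget / cost_per_life_year  # 20 life-years
hours_no_bet = research_hours                # 5 hours

# Bet: same expected donation, but research happens only on a win.
impact_bet = p_win * (budget * multiplier) / cost_per_life_year
hours_bet = p_win * research_hours

assert impact_no_bet == impact_bet == 20
assert hours_bet == 0.5  # a tenth of the expected research time
```

The key structural fact is that the impact term scales linearly with the donation (so the lottery leaves it unchanged in expectation), while the research cost is roughly fixed per decision (so the lottery divides it by ten in expectation).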

These numbers are made up and in practice you don’t know what the impact of your time will be, but the point is that if you’re constrained by time to evaluate donations, you can get a better deal through lotteries.

Increasing marginal benefits

The smooth case

Of course, if you’re giving away $50,000, you might be motivated to spend more than five hours on this. Let’s say that you think that you can find a charity that’s 10% more effective if you spend ten hours on it. Then in the winning scenario, you’re spending an extra five hours to save an extra 20 life-years, not a bad deal. Your expected life-years saved is then 22, higher than in the original case, and your expected time allocation is 1 hour, still much less than before.
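Continuing with the same made-up figures, the smooth case works out like this:

```python
p_win = 0.10
base_impact = 200   # life-years from $50,000 at $250 per life-year
improvement = 0.10  # 10% better charity from ten hours of research
hours_if_win = 10

# Winning scenario: 220 life-years for ten hours of work.
impact_if_win = base_impact + improvement * base_impact

expected_impact = p_win * impact_if_win  # 22 life-years
expected_hours = p_win * hours_if_win    # 1 hour

assert expected_impact == 22.0
assert expected_hours == 1.0
```

So with smoothly increasing returns to research, the lottery is better on both margins: more expected impact and less expected time.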

The lumpy case

Let’s say that you know someone considering launching a new program, which you believe would be a better value per dollar than anything else you can find in a reasonable amount of time. But they can only run the program if they get a substantial amount of initial funds; for half as much, they can’t do anything. They’ve tried a “kickstarter” style pledge drive, but there aren’t enough other donors interested. You have a good reason to believe that this isn’t because you’re mistaken about the program.

You’d fund the whole thing yourself, but you only have 10% of the needed funds on hand. Once again, you’d want to play the odds.

Lotteries, double-counting, and shared values

One objection I’ve seen potential participants raise against donor lotteries is that they’d feel obliged to take into account the values of other participants if they won. This objection is probably related to the prevalence of double-counting schemes to motivate people to give.

I previously wrote about ways in which "matching donation" drives only seem like they double your impact because of double-counting:

But the main problem with matching donation fundraisers is that even when they aren't lying about the matching donor's counterfactual behavior, they misrepresent the situation by overassigning credit for funds raised.
I'll illustrate this with a toy example. Let's say that a charity - call it Good Works - has two potential donors, Alice and Bob, who each have $1 to give, and don't know each other. Alice decides to double her impact by pledging to match the next $1 of donations. If this works, and someone gives because of her match offer, then she'll have caused $2 to go to Good Works. Bob sees the match offer and reasons similarly: if he gives $1, this causes another $1 to go to Good Works, so his impact is doubled - he'll have caused Good Works to receive $2.
But if Alice and Bob each assess their impact as $2 of donations, then the total assessed impact is $4 - even though Good Works only receives $2. This is what I mean when I say that credit is overassigned - if you add up the amount of funding each donor is supposed to have caused, you get a number that exceeds the total amount of funds raised.

If you tried to justify donor lotteries this way, it would look like this: Let's say you and nine other people each put in $10,000. You have a 10% chance of getting to give away $100,000. But if you lose, the other nine people still want to give to something that fulfills your values at least somewhat. So you are giving away more than $10,000 in expectation. This is double-counting because if you apply it consistently to each member of the group in turn, it assigns credit for more funding than the entire group is responsible for. It only works if you think you're getting one over on the other people if you win.

For instance, maybe you'd really spend your winnings on a sports car, but giving the money to an effective charity seems better than nothing, so they're fulfilling your values, but you're not fulfilling theirs.

Naturally, some people feel bad about getting one over on people, and consequently feel some obligation to take their values into account.

There are some circumstances under which this could be reasonable. People could be pooling their donations even though they're risk-averse about charities, simply in order to economize on research time. But in the central case of donor lotteries, everyone likes the deal they're getting, even if they estimate the value of other donors' planned use of the money at zero.

The right way to evaluate the expected value of a donor lottery is to only take the deal if you'd take the same deal from a casino or financial instrument where you didn't think you were value-aligned with your counterparty. Assume, if you will, that everyone else just wants a sports car. If you do this, you won't double-count your impact by pretending that you win even if you lose.

Claim: returns to scale for individual donations

Donor lotteries were originally proposed as a response to an argument based on returns to scale:

  • Some effective altruists used “lumpy” returns to scale (for instance, where extra money matters only when it tips the balance over to hiring an additional person) to justify favoring charities that turn funds into impact more smoothly.
  • Some effective altruists say that small donors should defer to GiveWell’s recommendations because for the time it makes sense to spend on allocating a small donation, they shouldn’t expect to do better than GiveWell.

In his original post on making use of randomization to increase scale, Carl Shulman summarizes the case against these arguments:

In a recent blog post Will MacAskill described a donation opportunity that he thought was attractive, but less so for him personally because his donation was smaller than a critical threshold:

This expenditure is also pretty lumpy, and I don’t expect them to get all their donations from small individual donations, so it seems to me that donating 1/50th of the cost of a program manager isn’t as good as 1/50th of the value of a program manager.

When this is true, it can be better to exchange a donation for a 1/50 chance of a donation 50 times as large. One might also think that when donating $1,000,000 rather than $1 one can afford to spend more time and effort in evaluating opportunities, get more access to charities, and otherwise enjoy some of the advantages possessed by large foundations.
Insofar as one believes that there are such advantages, it doesn't make sense to be defeatist about obtaining them. In some ways resources like GiveWell and Giving What We Can are designed to let the effective altruist community mimic a large effectiveness-oriented foundation. One can give to the Gates Foundation, or substitute for Good Ventures to keep its cash reserves high.
However, it is also possible to take advantage of economies of scale by purchasing a lottery (in one form or another), a small chance of a large donation payoff. In the event the large donation case arises, then great efforts can be made to use it wisely and to exploit the economies of scale.

There's more than one reason you might choose to trust the recommendations of GiveWell or Giving What We Can, or directly give to either, or to the Gates Foundation. One consideration is that there are returns to scale for delegating your decisions to larger organizations. Insofar as this is why donors give based on GiveWell recommendations, GiveWell is serving as a sort of nonrandomized donor lottery in which the GiveWell founders declared themselves the winners in advance. The benefit of this structure is that it's available. The obvious disadvantage is that it's hard to verify shared values.

Of course, there are other good reasons why you might give based on GiveWell's recommendation. For instance, you might especially trust their judgment based on their track record. The proposal of donor lotteries is interesting because it separates out the returns to scale consideration, so it can be dealt with on its own, instead of being conflated with other things.

Even if your current best guess is that you should trust the recommendations of a larger donor, if you are uncertain about this, and expect that spending time thinking it through would help make your decision better, then a donor lottery allows you to allocate that research time more efficiently, and make better delegation decisions. There's nothing stopping you from giving to a larger organization if you win, and decide that's the best thing. So, the implications of a position on returns to scale are:

  • If you think that there are increasing returns to scale for the amount of money you have to allocate, then you should be interested in giving money to larger donors who share your values, or giving based on their recommendations. But you should be even more interested in participating in a donor lottery.
  • If you think that there are diminishing returns to scale for the amount of money you have to move, then you should not be interested in giving money to larger donors, participating in a donor lottery, accepting money from smaller donors, or making recommendations for smaller donors to follow.

With those implications in mind, here are some claims it might be good to argue about:

(Cross-posted to my personal blog and Arbital.)


1 This phrasing was suggested by Paul. Here's how Carl describes their roles: "I came up with the idea and basic method, then asked Paul if he would provide a donor lottery facility. He did so, and has been taking in entrants and solving logistical issues as they come up."

2 More on susus here and here. More on ROSCAs here, here, here, and here.

When I was trying to find where I'd originally heard about these and didn't remember what they were called, I Googled for poor people in developing countries using lotteries as savings, but most relevant-looking results were about using lotteries to trick poor people into saving. Almost none were about what poor people were already doing to solve their actually existing problems. It turns out, sometimes the poor can do financial engineering when they need to. The global poor aren't necessarily poor because they're stupid or helpless. Seems pretty plausible that in many cases, they're poor because they haven't got enough money.

20 comments

as recently implemented by Paul Christiano in cooperation with Carl Shulman

(implemented by Carl Shulman in cooperation with Paul Christiano)

In general, I think that "random dictator" may often be a better governance system than a committee or a democracy (except where there are diminishing returns and limited ability to negotiate from behind a veil of ignorance).

I think that "thinking more" is by far the most important source of returns to scale in the $1,000 - $100,000 range.

Thanks for the correction, I'll fix the wording. Seems like Carl should make that clearer in his post too. I took the post to be saying that you'd handled the implementation and Carl had done the writeup.

I came up with the idea and basic method, then asked Paul if he would provide a donor lottery facility. He did so, and has been taking in entrants and solving logistical issues as they come up.

I agree that thinking/researching/discussing more dominates the gains in the $1-100k range.

Is giving small amounts of money away really something that individuals should spend a lot of time thinking about - like many days of research?

Is picking 9 other people whose competence you trust and delegating the decision to a randomly chosen one of them much easier than just doing whatever research you wanted to do, and then sharing your results? Is GiveWell doing such a bad job at making recommendations that you have to improvise this and do their job for them?

I think we have an underassignment of credit problem here. You can't both be the junior partner in this. :P

Thanks for describing the details a bit more.

I too agree that the gains mainly come from more/better evaluation.

I think practical interest in these things is somewhat bizarre.

All of the people that would be interested in participating are already effective altruists. That means that as a hobby they are already spending tons of time theorizing on what donations they would make to be more efficient. Is the value of information from additional research really sufficient to make it worthwhile in this context? Keep in mind that much of the low-hanging analysis from a bog-standard EA's perspective has already been performed by GiveWell, and you can't really expect to meaningfully improve on their estimates. This limits the pool of rational participants to only those who know they have values that don't align with the community at large.

For me, the whole proposition is a net negative. If I don't get selected, then someone else chooses what to do with my money. Since they don't align with my values, they might donate it to the KKK or whatever. If I DO get selected, it's arguably worse, because now I have to do a bunch of research that has low value to me to make a decision. Winning the lottery to spend $100,000 of other people's money doesn't suddenly endow me with tens or hundreds of hours to use for extra research (unless I can spend some of the money on my research efforts...).

The complexity of the system, its administration, and the time spent thinking about whether to participate are all deadweight loss in the overall system. Someone, or many someones, has to spend time considering whether to participate, managing the actual money, and handling the logistics. This is all conceptual overhead for the scheme.

Not to get too psychoanalytical or whatever, but I think this stems partly from the interest of people in the community to appreciate complex, clever, unusual solutions BECAUSE they are complex, clever and unusual. My engagement with effective altruism is very boring. I read the GiveWell blog and occasionally give them money. It's not my hobby, so I don't participate in things like the EA forum.

If you are considering participating, first figure out what actual research you would do if you won the award, what the VoI is for that time, and how you would feel if you either had to do that research, or had someone else choose the least efficient plausible alternative for your values. Think about whether the cleverness and complexity of this system is actually buying you anything. If you like being contrarian and signalling your desire to participate in schemes that show you Take Utility Seriously, by all means, go for it.

I was reluctant to reply to this because it seemed like a comment on the general concept of donor lotteries, but not a comment on the actual post, which specifically responds to several points made in this comment. But one of my housemates mentioned that they felt the need to reply - so hopefully if I write this people will at least see that the main claims here have been addressed, and not spend their own time on this.

This is a pretty bold claim:

Keep in mind that much of the low-hanging analysis from a bog-standard EA's perspective has already been performed by GiveWell, and you can't really expect to meaningfully improve on their estimates.

It's only relevant if you're so confident in it that you don't feel the need to do any double-checking - that the right amount of research to do is zero or nearly zero. I find it pretty implausible that a strategy that involves negligible research time manages to avoid simply having money extracted from it by whoever's best at marketing. If GiveWell donors largely aren't checking whether GiveWell's recommendations are reasonable, this is good reason to suspect that GiveWell's donors aren't buying what they mean to buy.

Once you're spending even a modest amount of time doing research, something like a donor lottery should be appealing for small amounts. As I wrote in the post, the lottery can be net-positive even with no additional research because you simply save whatever time you'd have spent on research, if you don't win the lottery. For more on this, you might try reading the section titled "Diminishing marginal costs".

The objection of value misalignment with other donors ("might donate it to the KKK") should already be priced in if you're not trying to double-count impact. The point of a donor lottery is to buy variance, if you think there are returns to scale for your giving. Coordinating between multiple donors just saves on transaction costs. For more on this, you might try reading the section titled "Lotteries, double-counting, and shared values".

If you don't care about the impact of your charitable giving, such that research that improves its impact doesn't seem to further your interests ("a bunch of research that has low value to me to make a decision"), then I'm pretty confused about why you think you're anything like the target market for this.

I'm not OP, but I have similar feelings about GiveWell. They have 19 full-time employees (at least 8 of whom are researchers). I am one person with a full-time non-research non-charity job. Assume I spend 40 hours on this if I win (around a month of free time). Running the numbers, I expect GiveWell to be able to spend at least 400x more time on this, and I expect their work to be far more productive because they wouldn't be running themselves ragged with (effectively) two jobs, and the average GiveWell researcher already has more than a year of experience doing this and the connections that come with it.

Regarding the target audience, I feel like the kinds of people who would enjoy doing this should either apply for a job at GiveWell, or start a new charity evaluator. If you think you can do better than they can, why rely on a lottery victory to prove it?

I agree that GiveWell does high-quality research and identifies effective giving opportunities, and that donors can do reasonably well by deferring to their recommendations. I think it is not at all crazy to suspect that you can do better, and I do not personally give to GiveWell recommended charities. Note for example that Holden also does not donate exclusively to GiveWell charities, and indeed is generally supportive of using either lotteries or delegation to trusted individuals.

  1. GiveWell does not purport to solve the general problem of "where should EA's give money." They purport to evaluate one kind of intervention: "programs that have been studied rigorously and ideally repeatedly, and whose benefits we can reasonably expect to generalize to large populations, though there are limits to the generalizability of any study results. The set of programs fitting this description is relatively limited, and mostly found in the category of health interventions" (here)

  2. The situation isn't "you think for X hours, and the more hours you think the better the opportunities you can find, which you can then spend arbitrarily large amounts of money on." You need to do some thinking in order to identify opportunities to do good, each of which can absorb a certain amount of money. In order to identify a better donation opportunity than GiveWell, one does not have to do more work than GiveWell or delegate to someone who has done more work.

  3. By thinking longer, you could identify a different delegation strategy, rather than finding an object level recommendation. You aren't improving on GiveWell's research, just on your current view that GiveWell is the right person to defer to. There are many people who have spent much longer than you thinking about where to give, and at a minimum you are picking one of them. Having large piles of money and being thoughtful about where to give it is the kind of situation that (for example) makes it possible for GiveWell to get started, and it seems somewhat perverse to celebrate GiveWell while placing no value on the conditions that allow it to come to exist.

  4. In a normal world, the easiest recommendations to notice/verify/follow would receive the most attention, and so all else equal you might get higher returns by looking for recommendations that are harder.

  5. If you think GiveWell recommended charities are the best intervention, then you should be pretty much risk neutral over the scale of $100k or even $1M. So the cost is relatively low (perhaps mostly my 0.5% haircut) and you would have to be pretty confident in your view (despite the fact that many thoughtful people disagree) in order to make it worthless.

  6. The point of lotteries is not to have fun or prove that we are clever, it is to use money well.

I think my answer to all of this is: that sounds great but wouldn't it be better if it wasn't random?

If you have the skills and interest to do charity evaluation, why wait to win the lottery when you could join or start a charity evaluator? If you need money, running a fundraiser seems better than hoping to win the lottery.

If you think you're likely to find a better meta charity than GiveWell, it seems better to just do that research now and write a blog post to make other people aware of your results, rather than the more convoluted method of writing blog posts to convince people to join a lottery and then hoping to win.

And if you aren't very interested in charity research, why join a donor lottery that picks the decider at random when you could join one where it's always the most competent member (100% of the time, GiveWell gets to decide how to allocate the donation)?

I think my answer to all of this is: that sounds great but wouldn't it be better if it wasn't random?

Why would that be better?

If you think you're likely to find a better meta charity than GiveWell, it seems better to just do that research now and write a blog post to make other people aware your results

I think you are radically, radically underestimating the difficulty of reaching consensus on challenging questions.

For example: a significant fraction of openphil staff make significant contributions to charities other than GiveWell recommendations, and that in many cases they haven't reached consensus with each other; some give to farm animal welfare, some to science, some to political causes, etc.; even within causes there is significant disagreement. This is despite the fact that they spend most of their time thinking about philanthropy (though not about their personal giving).

why join a donor lottery that picks the decider at random when you could join one where it's always the most competent member

If you will certainly follow GiveWell recommendations after winning, then gambling makes no difference and isn't worth the effort (though hopefully it will eventually take nearly 0 effort, so it's really a wash). If you think that GiveWell is the most competent decider, yet somehow don't think that you will follow their recommendations, then I'm not sure what to say to you. If you are concerned about other people making bad decisions with their money, well that's not really your problem and it's orthogonal to whether they gamble with it.

If GiveWell donors largely aren't checking whether GiveWell's recommendations are reasonable, this is good reason to suspect that GiveWell's donors aren't buying what they mean to buy.

One assumes that having a few people do sanity checks on randomly selected pieces of their work is good enough, plus one assumes that Givewell isn't capable of a stealthy transformation into an evil organisation overnight without anyone on the inside raising the alarm.

Put another way, by doing this donor lottery thing, you're giving GiveWell a 1/10 rating already. It would be like my implicit 1/10 rating of the local supermarkets if I started growing my own food. Charity recommendations are their job! It's what they specialise in! If you have to spend a lot of time DIYing it, then they suck!

It's only relevant if you're so confident in it that you don't feel the need to do any double-checking - that the right amount of research to do is zero or nearly zero.

My contention is that the people who are willing to participate in this have already done non-negligible amounts of thinking on this topic, because they are EA hobbyists. How could one be engaging with the EA community if they are not spending time thinking about the core issues at hand? Because of diminishing marginal returns, they are already paying the costs for the research that has the highest marginal value, in terms of their engagement with the community and reflection on these topics. I do not believe this is addressed in the original article. I believe this is our fundamental disagreement.

The objection of value misalignment can't be priced in because there is no pricing mechanism at play here, so I'm not sure what you mean (except for paulfchristiano's fee for administering the fund). That exact point was not the main thrust of the paragraph, however. The main thrust of that paragraph was to explain the two possible outcomes in the lottery, and explain how both lead to potential negative outcomes in light of the diminishing marginal returns to original research and the availability of a person's time in light of outside circumstances.

I am in the target market in the sense that I donate to EA charities, and I think that SOMEONE doing research improves the impact of those donations, but I guess I am not in the target market in the sense that I think that person has to be me.

Regarding your snipes about my not reading the article: it's true that if I had more time and more interest in this topic, I would offer better-quality engagement with your ideas, so I apologize that I lack those things.

By "priced in," I meant something like: you shouldn't be counting the benefits from the cases where you lose anyway; otherwise you end up effectively double-counting contributions.
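The double-counting point can be made concrete with a minimal expected-value sketch. All the numbers here are made up for illustration (a ten-person lottery at $1,000 each, and a hypothetical 10% effectiveness gain from the winner's extra research); the point is only that the research benefit belongs in the win branch, not in every branch.

```python
pot = 10 * 1_000          # total pooled donation (illustrative)
p_win = 0.1               # your chance of winning, proportional to your $1,000
research_boost = 0.10     # hypothetical effectiveness gain from extra research

# Correct accounting: the research benefit applies only in the branch
# where you win and direct the whole pot yourself.
ev_correct = p_win * pot * (1 + research_boost) + (1 - p_win) * 0

# Double-counting: crediting your research benefit even in the branches
# where someone else wins and your research never directs the money.
ev_double = p_win * pot * (1 + research_boost) + (1 - p_win) * pot * research_boost

print(ev_correct)  # benefit counted only when you win
print(ev_double)   # inflated by counting the losing branches too
```

The second figure overstates the benefit, which is the sense in which counting the losing branches "double-counts" contributions.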

On trusting GiveWell:

Apple knows much, much more about what makes a smartphone good than I do. They've put huge amounts of research into it. Therefore I shouldn't try to build my own smartphone (because I expect there are genuinely huge returns to scale). This doesn't mean that I should defer to Apple's judgment about whether I should buy a smartphone, or which one to buy.

Samsung's also put much, much more work than I have into what the optimal arrangement of a smartphone is. That doesn't help me decide whether to buy an iPhone or a Samsung.

McDonald's has put similarly huge amounts of expert work into figuring out how to optimally produce hamburgers, but I still expect that I can easily produce a much higher-quality product in my own home. So it's not even always the case that some types of returns to scale mean one can't compete on small batches.

Do you think GiveWell's substantially different?

GiveWell is certainly different from those examples. Your examples all involve a clear motive to convince people to use their product, even if there are better options out there. GiveWell are analysts, not producers of goods, and are explicitly trying to guide people to the best choice (within a set of constraints).

A better example would be choosing a restaurant. Michelin and Yelp have far more data and have put far more work into evaluating and rating food providers than you ever could. But you still need to figure out how your preferences fit into their evaluation framework, and navigate the always-changing landscape to make an actual choice.

(Note that the conclusion is the same: you still must expend some search cost.)

GiveWell are analysts, not producers of goods, and are explicitly trying to guide people to the best choice (within a set of constraints).

A lot of the mission of GiveWell is also EA movement building. By advocating the standard that evidence is important, existing charities will focus on finding evidence for their claims.

I don't think "incentive" cuts at the joints here, but selection pressure does. You're going to hear about the best self-promoters targeting you, which is only an indicator of qualities you care about to the extent that those qualities contribute to self-promotion in that market.

Personal experience: I occasionally use Yelp, but in some cases it's worse than useless because I care about a pretty high standard of food quality, and often Yelp restaurant reviews are about whether the waiter was nice, the restaurant seemed fancy, the portions were big, sometimes people mark restaurants down for having inventive & therefore challenging food, etc. So I often get better information from the Chowhound message board, which no one except foodies has heard of.

As a counterpoint, I intended to contribute to the donation lottery (couldn't arrange tax deductibility outside the US), and think it would be a good thing if most EAs participated in donation lotteries.

All of the people that would be interested in participating are already effective altruists. That means that as a hobby they are already spending tons of time theorizing on what donations they would make to be more efficient. Is the value of information from additional research really sufficient to make it worthwhile in this context? Keep in mind that much of the low-hanging analysis from a bog-standard EA's perspective has already been performed by GiveWell, and you can't really expect to meaningfully improve on their estimates.

As Benquo notes, "GiveWell does not purport to solve the general problem of 'where should EA's give money.'". Personally, I believe that existential risk interventions are the best donations, so there is no equivalent to GiveWell for me to defer to. If I won the lottery, I imagine it would be worth my time engaging thoroughly with organisations' fundraising documents, refining my world-model on how to reduce existential risk, and reaching out to those likely to have better knowledge than myself. I'm not already spending "tons of time doing this" - I work full-time, and in particular don't have the cognitive space to do high-quality thinking on this in the pockets of time I have available.

At a community level, it does seem that most EAs have thought insufficiently about cause prioritization. Challenging one's beliefs isn't easy though, so I'm hopeful that a donor lottery can provide a mechanism for someone to say "I recognise that there's some worthwhile reflection and research I haven't done, and I don't have the motivation to do it when the stakes are lower, but will do so on the off-chance I win the lottery."

Winning the lottery to spend $100,000 of other people's money doesn't suddenly endow me with tens or hundreds of hours to use for extra research (unless I can spend some of the money on my research efforts...).

If I won the lottery, I imagine I'd take a few weeks consecutive leave from work to research.

Keep in mind that much of the low-hanging analysis from a bog-standard EA's perspective has already been performed by GiveWell, and you can't really expect to meaningfully improve on their estimates.

Can you say more about why you think the typical EA should think this?

[This comment is no longer endorsed by its author]