Comment author: joaolkf 30 October 2014 04:31:15PM *  2 points [-]

Cambridge's Endowment: £4.9 billion

Oxford's Endowment: £4.03 billion

Comment author: Benjamin_Todd 06 June 2015 09:19:20PM 1 point [-]

You need to add in the endowments of the colleges as well. The richest college at Cambridge (Trinity) has an endowment of about $1.5bn, whereas the richest college at Oxford has only about $300m.

Comment author: JonahSinick 17 March 2014 02:38:53AM *  1 point [-]

I should add that my figures don't properly take into account the ages at which people accumulate the bulk of their net worth. For "number of graduates from top universities" I counted graduates between the ages of 22 and 82, but this isn't appropriate, since most people won't acquire most of their wealth until later in life (e.g. the average age of a billionaire is 66). So the prevalence of people who eventually reach $30m+ should be something like 2x what I said, and similarly, for members of the general population (where I included everyone, including children), the prevalence should be something like 3x what I said.

So the actual prevalence of people who eventually reach $30m+ should be something like 1 in 1,000 for the general population and 1 in 100 for graduates of top universities.

As for billionaires: there are 442 US billionaires and maybe 100 million Americans in the appropriate age range, so roughly 1 in 250,000 of the general population becomes a billionaire. The schools that I looked at have 129 billionaires among ~250,000 alumni in the appropriate age range; cutting that by a factor of 2 to account for possible double counting (undergraduate/graduate), we get a frequency of about 1 in 4,000 eventually becoming billionaires. (About 0.4 billionaires per university class.)

This analysis is quite crude – one would need the actual age distribution and not just the average age.
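
To make the arithmetic explicit, here is a minimal sketch in Python; the inputs are just the rough figures quoted above, not new data:

```python
# Rough reproduction of the billionaire-prevalence estimates above.
us_billionaires = 442
us_people_in_age_range = 100e6          # "maybe 100 million in the appropriate age range"
general_rate = us_people_in_age_range / us_billionaires
print(f"General population: ~1 in {general_rate:,.0f} becomes a billionaire")
# -> roughly 1 in 226,000, i.e. ~1 in 250,000 as stated

top_school_billionaires = 129
top_school_alumni = 250_000             # alumni in the appropriate age range
double_count_factor = 2                 # undergrad/grad double counting
elite_rate = top_school_alumni / (top_school_billionaires / double_count_factor)
print(f"Elite-university graduates: ~1 in {elite_rate:,.0f} becomes a billionaire")
# -> roughly 1 in 3,900, i.e. ~1 in 4,000 as stated
```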

Comment author: Benjamin_Todd 06 June 2015 09:17:20PM 0 points [-]

What are the chances of being a billionaire or reaching $30m+ if you go to Harvard specifically, rather than an elite university in general?

And then what about HBS rather than Harvard?

Comment author: gjm 13 February 2014 09:46:52AM 0 points [-]

They're probably interested in comparing their salaries with those of other very-well-paid people. If Glassdoor currently doesn't attract interest from many of those, that probably means it will continue not to.

Comment author: Benjamin_Todd 13 February 2014 12:19:29PM 0 points [-]

Agreed - Glassdoor is mainly designed to appeal to job seekers. They get their data by only granting access if you reveal your own salary, so the salary data ends up tilted towards people who are actively seeking jobs.

There's also a sampling problem. Google has ~10,000 engineers, but probably only ~100 of them earn $1mn+ - about 1 in 100. Large companies normally only have a small number of responses, so even if respondents were sampled randomly you'd expect only around one top earner in the sample at best.
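
As a rough illustration of that sampling point (the Google figures are the ballpark estimates above; the sample sizes are hypothetical):

```python
# If ~100 of Google's ~10,000 engineers earn $1m+, the share of top earners is ~1%,
# so a random sample of survey responses will contain very few of them.
engineers = 10_000
top_earners = 100
share = top_earners / engineers          # ~0.01

for sample_size in (10, 50, 100):        # hypothetical numbers of Glassdoor responses
    expected = share * sample_size
    print(f"Sample of {sample_size}: expect ~{expected:.1f} respondents earning $1m+")
# Even ~100 random responses would include only about one top earner,
# and self-selection makes the real data worse still.
```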

Comment author: JonahSinick 11 February 2014 05:54:18PM *  5 points [-]

I have also been critical of earning to give as the optimal way to contribute social value: last June I wrote a blog post titled Earning to Give vs. Altruistic Career Choice Revisited, in which I wrote:

Over the past three years, I myself have shifted from the position that “earning to give” is philanthropically optimal, to the position that it’s generally the case that one can do more good by choosing a career with high direct social value than by choosing a lucrative career with a view toward donating as much as possible.

Since writing the blog post, I've updated somewhat in favor of earning to give. I don’t necessarily buy the analysis that I give below (I think it probably leaves out major relevant factors), but I think that it raises the possibility that earning to give is optimal for the typical person who's capable of making $500k+/year in finance.

There seems to be a great deal of room for more funding for cash transfers. Sub-Saharan Africa has about 1 billion people and a GNI per capita of $1,351. (Note that the median income per capita will be smaller than this – I've seen estimates of $600.) About $300 billion/year is spent on philanthropy in the US, so even if all philanthropic spending were on cash transfers, there would still be many people of very low income who would benefit substantially from small amounts of cash. One can question whether scaling cash transfers dramatically is logistically feasible (GiveDirectly's model is based on a cell phone payment system that is not present throughout the developing world), but there seems to be a substantial possibility that one wouldn't run out of room for more funding.
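
A back-of-the-envelope version of that room-for-funding point, using only the figures above:

```python
# If all US philanthropy went to cash transfers in Sub-Saharan Africa,
# how much would that be per person per year?
us_philanthropy = 300e9          # ~$300 billion/year
ssa_population = 1e9             # ~1 billion people
gni_per_capita = 1351            # GNI per capita; median income is likely nearer $600

per_person = us_philanthropy / ssa_population
print(f"${per_person:.0f} per person per year")   # ~$300
# $300/person is below even the ~$600 median income estimate, so it's at least
# plausible that cash transfers wouldn't run out of room for more funding quickly.
```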

Somebody who's making $500k/year in finance and donating $250k/year to GiveDirectly can double the consumption of 225 Kenyan families. (If huge amounts of money were being put into cash transfers, the cost-effectiveness would be maybe 2x worse.)
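
Spelling out the cost per family implied by those figures (a sketch, not new data):

```python
# Implied cost of doubling one Kenyan family's consumption via GiveDirectly,
# given the figures quoted above.
donation_per_year = 250_000
families_doubled = 225
cost_per_family = donation_per_year / families_doubled
print(f"~${cost_per_family:,.0f} per family per year")      # roughly $1,100

# If heavy funding made cash transfers ~2x less cost-effective:
print(f"~{families_doubled / 2:.0f} families doubled at scale")
```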

What about direct work? Carl Shulman points out that a boost to global GDP of $n raises the logarithm of income by about 30x less than an $n GiveDirectly cash transfer, so one would have to contribute $7.5 million to GDP per year to have an impact similar to the donations above.
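
The $7.5 million figure follows directly from the 30x ratio (a sketch, taking that ratio as given):

```python
# Carl Shulman's 30x ratio: a $1 GiveDirectly transfer raises log-income about
# 30x as much as a $1 boost to global GDP, so matching $250k/year of transfers
# requires roughly 30x that in annual GDP contribution.
annual_donation = 250_000
welfare_ratio = 30
gdp_equivalent = annual_donation * welfare_ratio
print(f"${gdp_equivalent / 1e6:.1f} million of GDP per year")   # $7.5 million
```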

Somebody whom we know in common estimates that machine learning research contributes $1 million to GDP per year per researcher. Under this assumption, somebody who's 10x better than the mean (not median) machine learning researcher beats the finance worker. But somebody who's 10x better than the mean machine learning researcher may also be able to earn substantially more than $500k/year in finance in expectation.

Now consider entrepreneurship. Assuming that a tech startup founder contributes 1/5th of the value of the startup, spends 5 years working on it, and that the increase to GDP is proportional to earnings, the startup valuation would have to be ~$200 million for the founder to beat the worker in finance. Some startups contribute more to GDP than they internalize as profit, but probably not by a factor of 10x, so creating a startup with a valuation below $20 million probably doesn't boost GDP enough to beat making $500k/year in finance via earning to give. Startups with valuations of $20 million or more generally get venture capital funding, and it's been said that only 1 in 400 new businesses receive venture capital funding. (Of course, it's equally true that very few people make $500k/year in finance.)
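
Putting the two direct-work comparisons on the same footing as the ~$7.5m/year benchmark (a sketch; the per-researcher and founder-share figures are the assumptions stated above):

```python
# Compare direct-work paths against the ~$7.5m/year GDP benchmark derived above.
benchmark = 7.5e6                       # GDP/year needed to match $250k/year to GiveDirectly

# Machine learning research: ~$1m GDP/year per researcher (assumption quoted above).
ml_mean_contribution = 1e6
multiplier_needed = benchmark / ml_mean_contribution
print(f"Need to be ~{multiplier_needed:.1f}x the mean ML researcher")   # ~7.5x, roughly the 10x above

# Startup: founder contributes 1/5 of the value, over 5 years,
# with the GDP gain taken as proportional to the valuation (assumptions stated above).
founder_share = 1 / 5
years = 5
required_valuation = benchmark * years / founder_share
print(f"Valuation needed: ~${required_valuation / 1e6:.0f} million")    # ~$190m, i.e. ~$200m as stated
```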

Carl Shulman gives another argument in favor of earning to give.

"When GiveWell or Giving What We Can change their recommendations based on new data or arguments and explain their reasoning, the donations switch rapidly and en masse. EA donations have very little inertia.

Building an organization in a specific field, accumulating field-specific human capital (experience, CV, education), these involve putting years of effort into a particular project or vision. If you later find out that cancer biology was a bad move and you think that renewable energy is more important, your years doing a PhD in that area are now substantially wasted. Careers have very high inertia and investment in cause-specific capital, while earning power is flexible and donations can be highly responsive to new inputs."

Comment author: Benjamin_Todd 13 February 2014 02:03:23AM 2 points [-]

Hi Jonah,

Great posts.

I agree these figures show it's plausible that the value of donations from finance is significantly larger than the direct economic contribution of many jobs, though I see it as highly uncertain. When you're working in highly socially valuable sectors like research or some kinds of entrepreneurship, it seems to me that the two are roughly comparable, with big error bars.

However, I don't think this shows that earning to give is plausibly the path towards doing the most good. There are many careers that seem to offer influence over budgets significantly larger than what you could expect to donate. For instance, the average budget per employee at DFID is about $6mn per year, and you get similar figures at the World Bank and many major foundations. It seems possible to move this money into something similarly effective to, or better than, cash transfers. We've also just done an estimate of party politics showing that the expected budget influenced towards your preferred causes is $1-80mn over a career if you're an Oxford graduate, and that takes into account the chances of success.

You'd expect there to be less competition to influence the budgets of foundations for the better than to earn money, so these figures make sense.

(And then there are all the meta options, like persuading other people to earn to give :) )

One point to note about Carl's 30x figure: it only applies when comparing the short-run welfare impact of a GDP boost with a transfer to GiveDirectly. If you also care about the long-run effects, the comparison becomes much less clear.

Comment author: John_Maxwell_IV 10 February 2014 06:34:33AM 7 points [-]

Glassdoor has Goldman Sachs salaries topping out at around $350K for VP-level people, although that doesn't seem to include bonuses, which seem to be about a 1.6 multiplier on your salary at the VP level (and are also taxed a lot more heavily, I hear).

I have no idea why the Glassdoor data is so much more pessimistic than your other citations. Off-topic, but I heard from a Google employee that $500K-ish salaries are within reach for driven managers at Google... and the senior product manager salary data from Glassdoor looks similar to the data for GS VPs.

Comment author: Benjamin_Todd 13 February 2014 12:50:45AM *  1 point [-]

Glassdoor rarely properly includes the top-paid employees (those people don't fill out the survey). According to Goldman's own figures, mean compensation per employee (across all employees) is ~$400k, and it'll be significantly higher if you're in the front office. Your expected earnings from a Goldman job are roughly the mean earnings multiplied by the expected number of years you'll stay at the firm.

Comment author: joaolkf 30 December 2013 09:18:12PM *  0 points [-]

I don't have the numbers off the top of my head, but the bulk of the consultations in my list are due to Nick. I believe he did many more before FHI even existed, back in the 90s. Nonetheless, I would guess he is probably very willing to transfer the advocacy to CEA and similar organizations, as already seems to be happening. In my opinion, that isn't FHI's main role at all, even though they have been doing a lot of it. As a wild guess, I would say he probably actively turns down a few consultations by now. As I said, we need research. Influence over the government is useless - and perhaps harmful - without it.

While they work together, I'm not sure advocacy and influence over the government are quite the same thing. Advocacy here might be seen as something close to advertising and movement building, which in turn creates political pressure. It is quite another thing to be asked by the government to offer one's opinion.

Comment author: Benjamin_Todd 30 December 2013 10:31:14PM 1 point [-]

I think both research and advocacy (both to governments and among individuals) are highly important, and it's very unclear which is more important at the margin.

It's too simple to say basic research is more important, because advocacy could lead to hugely increased funding for basic research.

Comment author: Benjamin_Todd 30 December 2013 09:07:26PM 3 points [-]

We've collated a list of all the approaches that seem to be on the table in the effective altruism community for improving the long-run future. There are some other options, including funding GiveWell and GCRI. This doc also explains a little more of the reasoning behind the approaches. If you'd like more detail on how 80k might help reduce the risk of extinction, drop me an email at ben@80000hours.org.

In general, I think the question of how best to improve the long-run future is highly uncertain but has high value of information, so the most important activities are: (i) more prioritisation research, and (ii) building flexible capacity that can act on whatever turns out to be best in the future.

MIRI, FHI, GW, 80k, CEA, CFAR and GCRI all aim to further these causes, and are mutually supporting, so they are particularly hard to disentangle. My guess is that if you buy the basic picture, the key issues will be things like 'which organisation has the most pressing room for more funding at the moment?' rather than questions about the value of the particular strategies.

Another option would be to fund research into which org can best use donations. There's a chance this could be commissioned through CEA, though we'll need to think of some ways to reduce bias!

Disclaimer: I'm the Executive Director of 80,000 Hours, which is part of CEA.

Comment author: joaolkf 27 December 2013 03:57:31PM *  9 points [-]

Nick, Anders and Toby have been consulted by government agencies in the past; Nick in particular has done this several times (even by Thailand's government, apparently). If your concern is influence over government, FHI wins, since I don't think movement building would get us as far as having a Prime Minister meet with FHI staff. It would have to be one serious movement to match just one or two such meetings. It's likely there aren't even enough eligible brains for an "AI-risks movement" of that scale.

However, it is not the case that "influence over the government" should be the most important criterion, mainly because right now we wouldn't even know exactly what to tell them, and it might take decades until we do. Hence, the most important criterion is the level of basic research. The mere fact that your question has no clear answer means we need more basic research, and thus that MIRI/FHI take preference. I couldn't say whether FHI or MIRI would be better. As a wild guess, I would say FHI does more research, but that it somehow feeds off MIRI's non-academic flexibility and movement building. Likely, whichever had preference over resources would lose that preference relatively fast as it outgrew the other.

On the other hand, I have heard MIRI/FHI/CEA staff claim that they are much more in need of qualified people than of money. So, if CFAR is increasing the supply of qualified people, then it ought to have priority. But it's not clear whether it is really doing that yet.

Comment author: Benjamin_Todd 30 December 2013 08:26:52PM 0 points [-]

Note that Toby is a trustee of CEA and did most of his government consulting through GWWC, not FHI, so it's not clear that FHI wins out in terms of influence over government.

Moreover, if your concern is influence over government, CEA could still beat FHI (even if FHI is doing very high-level advocacy) by acting as a multiplier on the efforts of FHI and similar orgs: $1 donated to CEA could lead to more than $1 of financial or human capital delivered to FHI or similar. I'm not claiming this is happening, just pointing out that it's too simple to say FHI wins out merely because they're doing some really good advocacy.

Disclaimer: I'm the Executive Director of 80,000 Hours, which is part of CEA.

Comment author: benkuhn 05 December 2013 01:17:17AM *  3 points [-]

Hi Ben,

Thanks for responding. I've responded to points below.

Poor cause choices

There's a lot being done on this front:

* GiveWell is running Labs, and Holden has said he expects to find better donation opportunities outside of global health in the next few years.
* CEA is an advocate of further cause prioritisation research, and is about to hire Owen Cotton-Barratt to work full-time on it.
* 80k is about to release a list of recommended causes, which will not have global health at the top.

The point of this argument wasn't that no organizations are working on it. In fact, the existence of this research strengthens my point: people are donating now anyway, even though it looks like we know very little, and the attitude towards giving now vs. later seems to be "well, there's a good case for either one" rather than "we really need to figure this out, because we may be pouring money down the drain". That's evidence that people stop thinking at the level of "doesn't obviously conflict with EA principles".

Inconsistent attitude toward rigor

I think this is mainly because people use the best analysis that's out there, and the best analysis for charity is currently much more in-depth than it is for these other issues. We're trying to make progress on the other issues at 80k and CEA.

Again, the issue isn't that nobody is trying to solve these problems; it's that most people are far more worried about the charity analysis issue than about ancillary issues that are just as important. If our knowledge of, e.g., the cost-effectiveness of global health interventions were as limited as our knowledge elsewhere, would people be donating to global health charities? I doubt it.

Poor psychological understanding

My impression is that people at CEA have worried about these problems quite a bit. At 80k, we try to work on this problem by highlighting members who are really trying rather than rationalising what they want, which we hope will encourage good norms. We'll also consider calling people out, but it can be a delicate issue!

I've been following 80k and have not noticed this phenomenon. Can you give some examples?

Monoculture

I'm worried about this, but it's difficult to change. All we can do is try to make an active effort to reach out to new groups.

This is definitely not all we can do (unless you take a tautologically broad interpretation of "make an active effort to reach out"). For instance, if a substantial fraction of effective altruists were raging sexists, it would be wise to fix our group norms before going "hey women! there's this thing called effective altruism!"

Even supposing it is all we can do, is there anything we're actually doing about it?

EA was started by some of the smartest, most well-meaning people I have ever met. It's going to be almost impossible to avoid a decline in quality of discussion as the circle is widened.

The point of the critique was not to list easily avoidable problems, but to list bad problems. If a decline in the quality of people is inevitable, then we had better find some solutions to the problems it brings (e.g. epistemic inertia), or the decline of EA is inevitable too.

Comment author: Benjamin_Todd 05 December 2013 02:59:46PM -1 points [-]

Read the responses to "poor cause choices" and "inconsistent attitude toward rigor" as "while some EAs might be donating without enough thought, lots of others are investing most of their resources in doing more research".

The monoculture problem is something we often think about how to fix at 80k. We haven't come up with great solutions yet though.

I also argued that the decline in the FB group is not obviously important. And if it's difficult to avoid, yet many movements started by a small group of smart people nevertheless go on to achieve a lot, that's also evidence that it's not very important.

Comment author: Benjamin_Todd 03 December 2013 08:00:55PM 2 points [-]

Hi Ben,

Thanks for the post. I think this is an important discussion, though I'm also sympathetic to Nick's comment that a significant amount of extra self-reflection is not the most important thing for EA's success.

I just wanted to flag that I think there are attempts to deal with some of these issues, and explain why I think some of these issues are not a problem.

Philosophical difficulties

Effective altruism was founded by philosophers, so I think there's enough effort going into this, including population ethics. (See Nick's comment)

Poor cause choices

There's a lot being done on this front:

* GiveWell is running Labs, and Holden has said he expects to find better donation opportunities outside of global health in the next few years.
* CEA is an advocate of further cause prioritisation research, and is about to hire Owen Cotton-Barratt to work full-time on it.
* 80k is about to release a list of recommended causes, which will not have global health at the top.

Non-obviousness

I think the more useful framing of this problem is 'what's the competitive advantage that has let us come up with these ideas rather than anyone else?' I think more work on this question would be useful. This also deals with the efficient markets problem. If you don't have an answer to this question, I agree you should be worried.

I've thought about it in the context of 80k, and have some ideas (unfortunately I haven't had time to write about them publicly). I now think the bigger priority is just to try out 80k and see how well it works. More generally, we try to take our disagreements with elite common sense very seriously.

I don't think recency is a problem. It seems reasonable that EA could only develop after we had things like the internet, good quality trial data of different interventions, and Singer's pond argument (which required a certain level of global inequality and globalization), which are all relatively recent.

Inconsistent attitude toward rigor

I think this is mainly because people use the best analysis that's out there, and the best analysis for charity is currently much more in-depth than it is for these other issues. We're trying to make progress on the other issues at 80k and CEA.

Poor psychological understanding

My impression is that people at CEA have worried about these problems quite a bit. At 80k, we try to work on this problem by highlighting members who are really trying rather than rationalising what they want, which we hope will encourage good norms. We'll also consider calling people out, but it can be a delicate issue!

Monoculture

I'm worried about this, but it's difficult to change. All we can do is try to make an active effort to reach out to new groups.

Community problems

I don't see the decline in quality of the FB group as a problem. EA was started by some of the smartest, most well-meaning people I have ever met. It's going to be almost impossible to avoid a decline in quality of discussion as the circle is widened.

I'll also push back against equating the community with the FB group. There are efforts by other EA groups to build better venues for the community, e.g. the EA Summit run by Leverage. We don't even need a good FB group so long as there are other ways for people to form projects (e.g. speak to 80k's careers coaches) and get good information (read GiveWell's research).
