JonahSinick comments on Earning to Give vs. Altruistic Career Choice Revisited - Less Wrong
The top considerations that come into play when I advise someone whether to earn-to-give or work directly on x-risk look like this:
1) Does this person have a large comparative advantage at the direct problem domain? Top-rank math talent can probably do better at MIRI than at a hedge fund, since there are many mathematical talents competing to go into hedge funds with no guarantee of a good job, and the talent we need for inventing new basic math does not translate directly into writing the best quant machine learning programs the fastest.
2) Is this person going to be able to stay motivated if they go off on their own to earn-to-give, without staying plugged into the community? Alternatively, if the person's possible advantage is at a task that requires a lot of self-direction, will they be able to stay on track without constant outside labor to keep them there, since that kind of independent job is much harder to stick at than a 9-to-5 office job with supervision and feedback and cash bonuses?
Every full-time employee at a nonprofit requires at least 10 unusually generous donors, or 1 exceptionally generous donor, to pay their salary. For any particular person wondering how they should help, this implies a strong prior bias toward earning-to-give. There are others competing to have the best comparative advantage at the nonprofit's exact task, and there are also thousands of job opportunities out there competing to be the maximally-earning use of your exact talents - people whose best fit is direct-task labor rather than earning-to-give should logically be rare, and they are.
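(To make that arithmetic explicit, here's a toy calculation; the dollar figures below are illustrative assumptions on my part, not claims about actual salaries or donors:)

```python
# Toy version of the donors-per-employee arithmetic above.
# All dollar figures are illustrative assumptions.

NONPROFIT_SALARY = 75_000    # assumed annual cost of one full-time employee
GENEROUS_DONOR = 7_500       # assumed annual gift from an "unusually generous" donor
EXCEPTIONAL_DONOR = 75_000   # assumed annual gift from an "exceptionally generous" donor

print(f"{NONPROFIT_SALARY / GENEROUS_DONOR:.0f} generous donors, or "
      f"{NONPROFIT_SALARY / EXCEPTIONAL_DONOR:.0f} exceptional donor, per employee")
# -> 10 generous donors, or 1 exceptional donor, per employee.
# Since donors must greatly outnumber direct workers, any given person
# should start with a prior favoring the earning-to-give side.
```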
The next-largest issue is motivation, and here again there are two sides to the story. The law student who goes in wanting to be an environmentalist (sigh) and comes out of law school accepting the internship with the highest-paying firm is a common anecdote, though now that I come to write it down, I don't know of any gathered data. Earning to give can impose an improbability penalty in the form of the likelihood that the person will actually follow through and give. Conversely, a lot of the most important work at the most efficient altruistic organizations requires self-direction, which is also demanding of motivation.
I should pause here to remark that if you constrain yourself to 'straightforward' altruistic efforts in which the work done is clearly understandable and repeatable and everyone agrees on how wonderful it is, you will of course be constraining yourself very far away from the most efficient altruism - just like a grant committee that only wants to fund scientific research with a 100% chance of paying off in publications and prestige, or a VC that only funds companies that are certain to look like defensible decisions, or an investor who constrains their portfolio to assets with almost no risk of going down. You will end up doing things that are nearly certain never to appear to future historians as a decisive factor in the history of Earth-originating intelligent life. If you want to work on things that might actually be decisive, you will end up in mostly uncharted territory doing highly self-directed work; this requires tolerance for not just risk but scary ambiguity, and many people cannot do it. Similarly, many other people cannot sustain altruism without being surrounded by other altruists, though that can possibly be purchased elsewhere by living on the West or East Coast and hanging around with others who are earning-to-give or working directly.
These are the top considerations when someone asks me whether they should work directly or earn to support others working directly - the low prior, whether the exact fit of talent is great enough to overcome that prior, and whether the person can sustain motivation / self-direct.
I'll also highlight another point implicit in my post: even if one assumes that there's not enough funding in the nonprofit world for the projects of highest value, there may be such funding available in other contexts (for-profit, academic, and government). This makes the argument for earning to give weaker.
I recognize that I haven't addressed the specific subject of Friendly AI research, and will do so in future posts.
I understand if your priorities aren't our priorities. My concrete example reflex was firing, that's all.
I think that there's substantial overlap between my values and MIRI staff's values, and that the difference regarding the relative value of "earning to give" is epistemic rather than normative. But obviously there's a great deal more that needs to be said about the epistemic side, with reference to the concrete example of Friendly AI.
I can imagine someone thinking that FHI was a better use of money than MIRI, or CFAR, or CSER, or the Foresight Institute, or brain-scanning neuroscience, or rapid-response vaccines, or any number of startups, but considering AMF as being in the running at all seems to require either a value difference or really really different epistemics about what affects the fate of future galaxies.
Realistic amounts of difference in epistemics + the "humans best stick to the mainline probability" heuristic seem enough (where by "realistic" I mean "of the degree actually found in the world"). I.e., I honestly believe that there are many people out there who would care like hell about the fate of future galaxies if they alieved that they had any non-vanishing chance of significantly influencing that fate (and of choosing the intervention that influences it in the desired direction).
If you're one of 10^11 sentients to be born on Ancient Earth with a golden opportunity to influence a roughly 10^80-sized future, what exactly is a 'vanishing chance'... eh, let's all save it until later.
I meant that the alieved probability is small in absolute terms, not that it is small compared to the payoff. That's why I mentioned the "stick to the mainline probability" heuristic. I really do believe that there are many people who, if they alieved that they (or a group effort they could join) could change the probability of a 10^80-sized future by 10%, would really care; but who do not alieve that the probability is large enough to even register, as a probability; and whose brains will not attempt to multiply a not-even-registering probability with a humongous payoff. (By "alieving a probability" I simply mean processing the scenario the way one's brain processes things it assigns that amount of credence, not a conscious statement about percentages.)
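(As a purely illustrative sketch of the multiplication such a brain refuses to perform - the numbers below are placeholders, not anyone's actual estimates:)

```python
# Illustrative only: placeholder numbers, not anyone's actual estimates.

GALAXY_PAYOFF = 1e80   # size of the "10^80-sized future", in sentient life-years
SHIFT = 0.10           # the hypothesized 10% change in its probability
ALIEVED_P = 0.0        # a probability that "doesn't even register" behaves like zero
BELIEVED_P = 1e-9      # any explicit small-but-nonzero credence, for contrast

# The brain described above effectively computes this:
print(ALIEVED_P * SHIFT * GALAXY_PAYOFF)   # 0.0 -> no felt reason to act

# An explicit expected-value calculation computes this instead:
print(BELIEVED_P * SHIFT * GALAXY_PAYOFF)  # 1e+70 expected life-years
```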
The preceding point is meant as a statement about people's actual reasoning processes, not about what would be reasonable (though I did think that you didn't feel that multiplying a very small success probability with a very large payoff was a good reason to donate to MIRI; in any case it seems to me that the more important unreasonableness is requesting mountains of evidence before alieving a non-vanishing probability for weird-sounding things).
[ETA: I find it hard to put a number on the not-even-registering probability the sort of person I have in mind might actually alieve, but I think a fair comparison is, say, the "LHC will create black holes" thing -- I think people will tend to process both in a similar way, and this does not mean that they would shrug it off if somebody counterfactually actually did drop a mountain of evidence about either possibility on their head.]
Because on a planet like this one, there ought to be some medium-probable way for you and a cohort of like-minded people to do something about x-risk, and if a particular path seems low probability, you should look for one that's at least medium-probability instead.
Ok, fair enough. (I had misunderstood you on that particular point, sorry.)
If there was ever a reliable indicator that you're wrong about something, it is the belief that you are special to the order of 1 in 10^70.
So do you believe in the Simulation Hypothesis or the Doomsday Argument, then? All attempts to cash out that refusal-to-believe end in one or the other, inevitably.
From where I stand, it's more like arcane meta-arguments about probability are motivating a refusal-to-doubt the assumptions of a prized scenario.
Yes, I am a priori skeptical of anything which says I am that special. I know there are weird counterarguments (SIA) and I never got to the bottom of that debate. But meta issues aside, why should the "10^80 scenario" be the rational default estimate of Earth's significance in the universe?
The 10^80 scenario assumes that it's physically possible to conquer the universe and that nothing would try to stop such a conquest - both enormous assumptions, and astronomically naive and optimistic ones, about the cosmic prospects that await an Earth which doesn't destroy itself.
Doomsday for me, I think. Especially when you consider that it doesn't mean doomsday is literally imminent, just "imminent" relative to the kind of timescale that would be expected to create populations on the order of 10^80.
In other words, it fits with the default human assumption that civilization will basically continue as it is for another few centuries or millennia before being wiped out by some great catastrophe.
Do you mind elaborating on this inevitability? It seems like there ought to be other assumptions involved. For example, I can easily imagine that humans will never be able to colonize even this one galaxy, or even any solar system other than this one. Or that they will artificially limit the number of individuals. Or maybe the only consistent CEV is that of a single superintelligence of which human minds will be tiny parts. All of these result in a rather small total number of individuals existing at any point in time.
I wonder if this argument can be made precise enough to have its premises and all the intermediate assumptions examined. I remain skeptical of any forecast that far into the future. You presumably mean your confidence in the UFAI x-risk within the next 20-100 years as the minimum hurdle to overcome, with the eternal FAI paradise to follow.
My reason for mentioning AMF and global health is that doing so provides a concrete, pretty robustly researched example, not to compare them with efforts to improve the far future of humanity.
I think that working in global health in a reflective and goal directed way is probably better for improving global health than "earning to give" to AMF. Similarly, I think that working directly on things that bear on the long term future of humanity is probably a better way of improving the far future of humanity than "earning to give" to efforts along these lines.
I'll discuss particular opportunities to impact the far future of humanity later on.
That depends on what you want to know, doesn't it? As far as I know the impact of AMF on x-risk, astronomical waste, and total utilons integrated over the future of the galaxies, is very poorly researched and not at all concrete. Perhaps some other fact about AMF is concrete and robustly researched, but is it the fact I need for my decision-making?
(Yes, let's talk about this later on. I'm sorry to be bothersome but talking about AMF in the same breath as x-risk just seems really odd. The key issues are going to be very different when you're trying to do something so near-term, established, without scary ambiguity, etc. as AMF.)
I'm somewhat confused by the direction that this discussion has taken. I might be missing something, but I believe that the points related to AMF that I've made are:
GiveWell's explicit cost-effectiveness estimate for AMF is much higher than the cost per DALY saved implied by the figure that MacAskill cited.
GiveWell's explicit estimates for the cost-effectiveness of the best giving opportunities in the field of direct global health interventions have steadily gotten lower, and by conservation of expected evidence, one can expect this trend to continue.
The degree of regression to the mean observed in practice suggests that there's less variance amongst the cost-effectiveness of giving opportunities than may initially appear to be the case (a toy simulation of this effect appears after this list).
By choosing an altruistic career path, one can cut down on the number of small-probability failure modes associated with what one does.
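(Here is the toy simulation of the regression effect referenced above; the distributions and parameters are illustrative assumptions, not fitted to GiveWell's data:)

```python
# Toy simulation of regression to the mean in cost-effectiveness estimates.
# All distributions and parameters are illustrative assumptions.
import random

random.seed(0)
N = 1000                       # number of giving opportunities
TRUE_SD, NOISE_SD = 1.0, 2.0   # assumed spread of true values vs. estimation error

true_values = [random.gauss(0, TRUE_SD) for _ in range(N)]
estimates = [t + random.gauss(0, NOISE_SD) for t in true_values]

# Pick the opportunity with the best *estimate*, then inspect its *true* value.
best = max(range(N), key=lambda i: estimates[i])
print(f"best estimate: {estimates[best]:.2f}, its true value: {true_values[best]:.2f}")
# When estimation noise is large relative to the true spread, the top estimate
# is mostly noise; the chosen opportunity's true value regresses toward the
# mean, so the apparent variance among opportunities overstates the real variance.
```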
I don't remember mentioning AMF and x-risk reduction together at all. I recognize that it's in principle possible that the "earning to give" route is better for x-risk reduction than it is for improving global health, but I believe the analogy between the two domains is sufficiently strong that my remarks on AMF have relevance (on a meta-level, not on an object level).
Yeah, I also have the feeling that I'm questioning you improperly in some fashion. I'm mostly driven by a sense that AMF is very disanalogous to the choices that face somebody trying to optimize x-risk charity (or rather total utilons over all future time, but x-risk seems to be the word we use for that nowadays). It seems though that we're trying to have a discussion in an ad-hoc fashion that should be tabled and delayed for explicit discussion in a future post, as you say.
If I may list some differences I perceive between AMF and MIRI:
Near mode thinking will most likely direct one to AMF. MIRI probably requires one to shut up and multiply. Which is probably why I'm currently giving a little money to Greenpeace, despite being increasingly certain that it's far, far from the best choice.
Not really related to the current discussion, but I want to make sure I understand the above statement. Is this assuming that the trend has not already been taken into account in forming the estimates?
Yes — the cost-effectiveness estimate has been adjusted every time a new issue has arisen, but on a case by case basis, without an attempt to extrapolate based on the historical trend.
I tend to think that if one can build a for-profit entity, that's the best sort of vehicle for pursuing most tasks, though occasionally churches or governments have some value too.