Eliezer_Yudkowsky comments on Earning to Give vs. Altruistic Career Choice Revisited - Less Wrong

34 Post author: JonahSinick 02 June 2013 02:55AM




Comment author: JonahSinick 28 May 2013 08:52:04PM 1 point [-]

My reason for mentioning AMF and global health is that doing so provides a concrete, pretty robustly researched example, rather than to compare it with efforts to improve the far future of humanity.

I think that working in global health in a reflective and goal-directed way is probably better for improving global health than "earning to give" to AMF. Similarly, I think that working directly on things that bear on the long-term future of humanity is probably a better way of improving the far future of humanity than "earning to give" to efforts along these lines.

I'll discuss particular opportunities to impact the far future of humanity later on.

Comment author: Eliezer_Yudkowsky 28 May 2013 10:25:36PM 10 points [-]

My reason for mentioning AMF and global health is that doing so provides a concrete, pretty robustly researched example

That depends on what you want to know, doesn't it? As far as I know the impact of AMF on x-risk, astronomical waste, and total utilons integrated over the future of the galaxies, is very poorly researched and not at all concrete. Perhaps some other fact about AMF is concrete and robustly researched, but is it the fact I need for my decision-making?

(Yes, let's talk about this later on. I'm sorry to be bothersome but talking about AMF in the same breath as x-risk just seems really odd. The key issues are going to be very different when you're trying to do something so near-term, established, without scary ambiguity, etc. as AMF.)

Comment author: JonahSinick 29 May 2013 12:27:40AM *  8 points [-]

I'm somewhat confused by the direction that this discussion has taken. I might be missing something, but I believe that the points related to AMF that I've made are:

  1. GiveWell's explicit cost-effectiveness estimate for AMF is much higher than the cost per DALY saved implied by the figure that MacAskill cited.

  2. GiveWell's explicit estimates for the cost-effectiveness of the best giving opportunities in the field of direct global health interventions have steadily gotten lower, and by conservation of expected evidence, one can expect this trend to continue.

  3. The degree of regression to the mean observed in practice suggests that there's less variance amongst the cost-effectiveness of giving opportunities than may initially appear to be the case.

  4. By choosing an altruistic career path, one can cut down on the number of small probability failure modes associated with what you do.
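Points 2 and 3 can be illustrated with a small simulation (a sketch with invented numbers, not GiveWell's actual data): when you rank many noisily estimated giving opportunities and pick the apparent best, the winner's estimate is systematically inflated by selection on noise, so re-measurement can be expected to regress it toward the mean.

```python
import random

random.seed(0)

# Hypothetical setup: each of 1000 charities has a true cost-effectiveness
# (utility per dollar), and each published estimate adds independent noise.
def estimate(true_value, noise_sd=1.0):
    return true_value + random.gauss(0, noise_sd)

true_values = [random.gauss(1.0, 0.5) for _ in range(1000)]
first_estimates = [estimate(v) for v in true_values]

# Pick the charity that *looks* best on the first noisy estimate...
best_idx = max(range(len(true_values)), key=lambda i: first_estimates[i])

# ...and compare its estimate to its true value: selecting on the noisy
# maximum tends to pick a charity whose estimate overstates its true
# effectiveness, so a second measurement regresses toward the mean.
print(first_estimates[best_idx])  # inflated by selection on noise
print(true_values[best_idx])      # typically much closer to the mean of 1.0
```

This is the same mechanism behind both the observed downward revisions (point 2) and the smaller-than-apparent variance between opportunities (point 3).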

I don't remember mentioning AMF and x-risk reduction together at all. I recognize that it's in principle possible that the "earning to give" route is better for x-risk reduction than it is for improving global health, but I believe the analogy between the two domains is sufficiently strong that my remarks on AMF have relevance (on a meta-level, not on an object level).

Comment author: Eliezer_Yudkowsky 29 May 2013 01:36:04AM 6 points [-]

Yeah, I also have the feeling that I'm questioning you improperly in some fashion. I'm mostly driven by a sense that AMF is very disanalogous to the choices that face somebody trying to optimize x-risk charity (or rather total utilons over all future time, but x-risk seems to be the word we use for that nowadays). It seems though that we're trying to have a discussion in an ad-hoc fashion that should be tabled and delayed for explicit discussion in a future post, as you say.

Comment author: loup-vaillant 29 May 2013 12:48:24PM *  6 points [-]

If I may list some differences I perceive between AMF and MIRI:

  • AMF's impact is quite certain. MIRI's impact feels more like a long shot, or even a pipe dream.
  • AMF's impact is sizeable. MIRI's potential impact is astronomic.
  • AMF's impact is immediate. MIRI's impact is long term only.
  • AMF has photos of children. MIRI has science fiction.
  • In mainstream circles, donating to AMF gets you pats on the back, while donating to MIRI gets you funny looks.

Near mode thinking will most likely direct one to AMF. MIRI probably requires one to shut up and multiply. Which is probably why I'm currently giving a little money to Greenpeace, despite being increasingly certain that it's far, far from the best choice.
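The "shut up and multiply" point is just an expected-value comparison. A toy sketch, with entirely made-up probabilities and payoffs chosen only to show the shape of the argument:

```python
# Made-up figures: a near-certain, modest benefit vs. a long shot
# with astronomical stakes (units: lives-saved-equivalent per donation).
p_amf, value_amf = 0.95, 1e3     # high confidence, sizeable impact
p_miri, value_miri = 1e-6, 1e12  # tiny probability, astronomical impact

ev_amf = p_amf * value_amf       # expected value of the "near" option
ev_miri = p_miri * value_miri    # expected value of the long shot

print(ev_amf, ev_miri)
```

With these (assumed) numbers the long shot dominates on expected value even though near-mode intuition favors the certain option; the real disagreement, of course, is over what the probabilities actually are.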

Comment author: elharo 29 May 2013 01:59:43PM 1 point [-]

One more difference:

AMF's impact is very likely to be net positive for the world under all reasonable hypotheses.

MIRI appears to me to have a chance to be massively net negative for humanity. I.e. if AI of the level they predict is actually possible, MIRI might end up creating or assisting in the creation of UFAI that would not otherwise be created, or perhaps not created as soon.

Comment author: Eliezer_Yudkowsky 29 May 2013 05:20:38PM 9 points [-]

But what if AMF saves a child who grows up to be a biotechnologist and goes on to weaponize malaria and spread it to millions?

If you try hard enough, you can tell a story where any effort to accomplish X somehow turns out to accomplish ~X, but one must distinguish possibility from the balance of probability.

Comment author: wedrifid 30 May 2013 08:16:04AM 1 point [-]

AMF's impact is very likely to be net positive for the world under all reasonable hypotheses.

That seems like a bizarre belief to hold. Or perhaps just overwhelmingly shortsighted. There are certainly reasonable hypotheses in which more people alive right now result in worse outcomes a single generation down the line, without even considering extinction level threats and opportunities. The world isn't nearly easy enough to model and optimize for us to be that certain a disruptive influence on that scale will be a net positive under all reasonable hypotheses.

Comment author: elharo 30 May 2013 10:27:34AM 0 points [-]

Would you care to cite any such reasonable hypotheses? I.e. under what assumptions do you think that saving a random poor person's life is likely to be a net negative? Sum over the number of lives saved and even if one person grows up to be a serial killer, the total is still way positive. Can you really defend a situation in which it is preferable to have living people today die from malaria?

The problem with MIRI-hypothesized AI (beyond its implausibility) is that we don't get to sum over all possible results. We get one result. Even if the chance of a good result is 80%, the chance of a disastrous result is still way too high for comfort.

Comment author: wedrifid 31 May 2013 09:24:20AM *  2 points [-]

Would you care to cite any such reasonable hypotheses? I.e. under what assumptions do you think that saving a random poor person's life is likely to be a net negative? Sum over the number of lives saved and even if one person grows up to be a serial killer, the total is still way positive.

Most obviously it could cause an increase in world GDP without a commensurate acceleration in various risk prevention mechanisms. Species can evolve themselves to extinction and in a similar way humans could easily develop themselves to extinction if they are not careful or lucky. Messing around with various aspects of the human population would influence this... in one direction or another. It's damn hard to predict.

Having a heuristic "short term lives saved == good" is useful. It massively simplifies calculations, and if you have no information either way about side effects of the influence then it works well enough. But it would be a significant epistemic error to mistake a heuristic for operating under uncertainty for genuine confidence about the unpredictable (or difficult-to-predict) system in which you are operating.

Can you really defend a situation in which it is preferable to have living people today die from malaria?

What is socially defensible is not the same thing as what is accurate. But that isn't the point here. All else being equal I would prefer AMF to have an extra million dollars to spend than to not have that extra million dollars. The expected value is positive. What I criticise is "very likely under all reasonable hypotheses" which is just way off. I do not have the epistemic resources to arrive at that confidence and I believe that you are arriving at that conclusion in error, not because of additional knowledge or probabilistic computational resources.

Comment author: Kawoomba 30 May 2013 08:27:03AM *  0 points [-]

In fact, I'd expect AMF to have a net-negative impact (and a large one at that) a few decades down the line, unless there are unrealistic, unprecedented, imperialistic-in-scope, gigantic efforts to educate and provide for the dozen then-adult children (and their dozen children) a saved-from-malaria child can typically have.

Here's Tom Friedman in his recent "Tell Me How This Ends" column:

I’ve been traveling to Yemen, Syria and Turkey to film a documentary on how environmental stresses contributed to the Arab awakening. As I looked back on the trip, it occurred to me that three of our main characters — the leaders of the two Yemeni [different countries, same dynamic] villages that have been fighting over a single water well and the leader of the Free Syrian Army in Raqqa Province, whose cotton farm was wiped out by drought — have 36 children among them: 10, 10 and 16.

It is why you can’t come away from a journey like this without wondering not just who will rule in these countries but how will anyone rule in these countries?

Comment author: elharo 30 May 2013 10:37:32AM 2 points [-]

Do you really want to propose that it is better to let children in poor countries die of disease now than to save them, because they might have more children later? My prior on this is that you're trolling, but if you really believe that and are willing to state it that baldly, then it might be worth having a serious conversation about population.

Comment author: Kawoomba 30 May 2013 11:00:39AM 2 points [-]

I'm not trolling. It's a very touchy subject for sure. I would certainly highly prefer a world in which AMF succeeds if it is coupled with the necessary, massive changes to deal with the consequences of AMF succeeding.

A world in which just AMF succeeds, but in which the changes to deal with the 5 or 6 additional persons for every child surviving malaria do not happen is heading towards even greater disaster. The birth rate is not a "might have more children", it's a probabilistic certainty, without the aforementioned new pseudo-imperialism.

However, the task of nation-building and uplifting civil-war ravaged tribal societies is a task that dwarfs AMF (plenty of recent examples), or even the worldwide charity budget. Yet without it, what's gonna happen, other than mass famines and other catastrophes?

I'm not talking about general Malthusian dynamics, but about countries whose population far exceeds the natural resources to support it, and which often do not offer the political environment, the infrastructure or the skills to exploit and develop what resources they have, other than trade them to the Chinese to prop up the ruling classes.

I'd expect a world in which AMF succeeds, leading to predictable tragedies on a more massive scale down the line, to be worse off than a world without AMF, with tragedies on a smaller scale. (To reiterate: a world with AMF succeeding and a long-term perspective for the survivors would be much better still.)

I'd rather contribute to charities which do not trade short-term benefits for probable long-term calamities: e.g. education projects and the development of stable civil institutions in such countries. (The picture gets fuzzier because eliminating certain disruptive diseases also has such positive externalities, but to a smaller degree.)

Comment author: nshepperd 30 May 2013 02:25:21PM 0 points [-]

I'll grant that MIRI could accelerate the creation of AGI, if their efforts to educate people about UFAI risks are particularly ineffective. But as far as UFAI creation at all is concerned, there are any number of very smart idiots in the world who would love to be on the news as "the first person to program an artificial general intelligence". Or to be the first person to use a general AI to beat the stock market, as soon as enough parts of the puzzle have been worked out to make one by pasting together published math results. (Maybe a slightly more self-aware variation of AIXI-mc would do the trick.)

In my view, AGI is more or less inevitable, and MIRI is seemingly the only group publicly interested in making it safe.

Comment author: ESRogs 30 May 2013 02:15:08AM 0 points [-]

by conservation of expected evidence, one can expect this trend to continue

Not really related to the current discussion, but I want to make sure I understand the above statement. Is this assuming that the trend has not already been taken into account in forming the estimates?

Comment author: JonahSinick 30 May 2013 04:33:16AM 1 point [-]

Yes — the cost-effectiveness estimate has been adjusted every time a new issue has arisen, but on a case-by-case basis, without an attempt to extrapolate from the historical trend.
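To make the distinction concrete, here is a toy sketch (the estimate figures are invented, not GiveWell's): case-by-case updating keeps only the latest figure, while trend extrapolation, here a naive constant-ratio fit, predicts further upward revisions in cost per DALY.

```python
# Hypothetical sequence of successive cost-per-DALY estimates (made up);
# each revision so far has been upward (i.e. worse cost-effectiveness).
estimates = [30, 45, 70, 100]

# Case-by-case updating stops at the latest figure (100). A simple
# trend extrapolation instead fits the average revision ratio and
# projects one more revision.
ratios = [b / a for a, b in zip(estimates, estimates[1:])]
avg_ratio = sum(ratios) / len(ratios)
next_estimate = estimates[-1] * avg_ratio

print(round(next_estimate, 1))  # projected next estimate, above the latest
```

If the historical trend carries information, conservation of expected evidence says the current estimate should already be shaded in the direction the trend predicts.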