loup-vaillant comments on Earning to Give vs. Altruistic Career Choice Revisited - Less Wrong

34 Post author: JonahSinick 02 June 2013 02:55AM




Comment author: Eliezer_Yudkowsky 29 May 2013 01:36:04AM 6 points [-]

Yeah, I also have the feeling that I'm questioning you improperly in some fashion. I'm mostly driven by a sense that AMF is very disanalogous to the choices that face somebody trying to optimize x-risk charity (or rather total utilons over all future time, but x-risk seems to be the word we use for that nowadays). It seems, though, that we're trying to have a discussion in an ad-hoc fashion that should be tabled and delayed for explicit discussion in a future post, as you say.

Comment author: loup-vaillant 29 May 2013 12:48:24PM *  6 points [-]

If I may list some differences I perceive between AMF and MIRI:

  • AMF's impact is quite certain. MIRI's impact feels more like a long shot, or even a pipe dream.
  • AMF's impact is sizeable. MIRI's potential impact is astronomical.
  • AMF's impact is immediate. MIRI's impact is long-term only.
  • AMF has photos of children. MIRI has science fiction.
  • In mainstream circles, donating to AMF gets you pats on the back, while donating to MIRI gets you funny looks.

Near-mode thinking will most likely direct one to AMF; MIRI probably requires one to shut up and multiply. That is probably why I'm currently giving a little money to Greenpeace, despite being increasingly certain that it's far, far from the best choice.
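The "shut up and multiply" step can be sketched as bare expected-value arithmetic. Every number below is a made-up assumption chosen purely to illustrate the shape of the argument, not an estimate of AMF's or MIRI's actual impact:

```python
# Naive expected-value comparison. All figures are illustrative
# assumptions, not real estimates from this thread.

def expected_lives_saved(p_success, lives_if_success):
    """Expected value: probability of impact times size of impact."""
    return p_success * lives_if_success

# A near-certain, modest-impact charity (AMF-like profile, invented numbers).
certain = expected_lives_saved(p_success=0.95, lives_if_success=1_000)

# A long-shot, astronomically-large-impact charity (MIRI-like profile,
# invented numbers).
long_shot = expected_lives_saved(p_success=1e-6, lives_if_success=1e12)

print(f"near-certain charity: {certain:,.0f} expected lives")
print(f"long-shot charity:    {long_shot:,.0f} expected lives")
# Under these made-up numbers the long shot dominates in expectation,
# even though near-mode intuition favours the certain option.
```

The point of the sketch is only that a tiny probability multiplied by an astronomical payoff can outweigh a near-certain modest one; whether the real probabilities justify that is exactly what the thread disputes.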

Comment author: elharo 29 May 2013 01:59:43PM 1 point [-]

One more difference:

AMF's impact is very likely to be net positive for the world under all reasonable hypotheses.

MIRI appears to me to have a chance of being massively net negative for humanity. That is, if AI of the level they predict is actually possible, MIRI might end up creating, or assisting in the creation of, a UFAI that would not otherwise be created, or not created as soon.

Comment author: Eliezer_Yudkowsky 29 May 2013 05:20:38PM 9 points [-]

But what if AMF saves a child who grows up to be a biotechnologist and goes on to weaponize malaria and spread it to millions?

If you try hard enough, you can tell a story where any effort to accomplish X somehow turns out to accomplish ~X, but one must distinguish possibility from the balance of probability.

Comment author: wedrifid 30 May 2013 08:16:04AM 1 point [-]

AMF's impact is very likely to be net positive for the world under all reasonable hypotheses.

That seems like a bizarre belief to hold. Or perhaps just overwhelmingly shortsighted. There are certainly reasonable hypotheses in which more people alive right now result in worse outcomes a single generation down the line, without even considering extinction level threats and opportunities. The world isn't nearly easy enough to model and optimize for us to be that certain a disruptive influence on that scale will be a net positive under all reasonable hypotheses.

Comment author: elharo 30 May 2013 10:27:34AM 0 points [-]

Would you care to cite any such reasonable hypotheses? I.e. under what assumptions do you think that saving a random poor person's life is likely to be a net negative? Sum over the number of lives saved and even if one person grows up to be a serial killer, the total is still way positive. Can you really defend a situation in which it is preferable to have living people today die from malaria?

The problem with MIRI-hypothesized AI (beyond its implausibility) is that we don't get to sum over all possible results. We get one result. Even if the chance of a good result is 80%, the chance of a disastrous result is still way too high for comfort.

Comment author: wedrifid 31 May 2013 09:24:20AM *  2 points [-]

Would you care to cite any such reasonable hypotheses? I.e. under what assumptions do you think that saving a random poor person's life is likely to be a net negative? Sum over the number of lives saved and even if one person grows up to be a serial killer, the total is still way positive.

Most obviously, it could cause an increase in world GDP without a commensurate acceleration in various risk-prevention mechanisms. Species can evolve themselves to extinction, and in a similar way humans could easily develop themselves to extinction if they are not careful or lucky. Messing around with various aspects of the human population would influence this... in one direction or another. It's damn hard to predict.

Having a heuristic of "short-term lives saved == good" is useful. It massively simplifies calculations, and if you have no information either way about side effects of the influence then it works well enough. But it would be a significant epistemic error to confuse this heuristic for operating under uncertainty with actual confidence about the unpredictable (or difficult-to-predict) system in which you are operating.

Can you really defend a situation in which it is preferable to have living people today die from malaria?

What is socially defensible is not the same thing as what is accurate. But that isn't the point here. All else being equal I would prefer AMF to have an extra million dollars to spend than to not have that extra million dollars. The expected value is positive. What I criticise is "very likely under all reasonable hypotheses" which is just way off. I do not have the epistemic resources to arrive at that confidence and I believe that you are arriving at that conclusion in error, not because of additional knowledge or probabilistic computational resources.

Comment author: Kawoomba 30 May 2013 08:27:03AM *  0 points [-]

In fact, I'd expect AMF to have a net-negative impact (and a large one at that) a few decades down the line, unless there are unrealistic, unprecedented, imperialistic-in-scope, gigantic efforts to educate and provide for the dozen then-adult children (and their dozen children) that a saved-from-malaria child can typically have.

Here's Tom Friedman in his recent "Tell Me How This Ends" column:

I’ve been traveling to Yemen, Syria and Turkey to film a documentary on how environmental stresses contributed to the Arab awakening. As I looked back on the trip, it occurred to me that three of our main characters — the leaders of the two Yemeni [different countries, same dynamic] villages that have been fighting over a single water well and the leader of the Free Syrian Army in Raqqa Province, whose cotton farm was wiped out by drought — have 36 children among them: 10, 10 and 16.

It is why you can’t come away from a journey like this without wondering not just who will rule in these countries but how will anyone rule in these countries?

Comment author: elharo 30 May 2013 10:37:32AM 2 points [-]

Do you really want to propose that it is better to let children in poor countries die of disease now than to save them, because they might have more children later? My prior on this is that you're trolling, but if you really believe that and are willing to state it that baldly, then it might be worth having a serious conversation about population.

Comment author: Kawoomba 30 May 2013 11:00:39AM 2 points [-]

I'm not trolling. It's a very touchy subject for sure. I would certainly highly prefer a world in which AMF succeeds if it is coupled with the necessary, massive changes to deal with the consequences of AMF succeeding.

A world in which just AMF succeeds, but in which the changes to deal with the 5 or 6 additional persons for every child surviving malaria do not happen, is heading towards even greater disaster. The birth rate is not a "might have more children"; it's a probabilistic certainty, without the aforementioned new pseudo-imperialism.

However, the task of nation-building and uplifting civil-war ravaged tribal societies is a task that dwarfs AMF (plenty of recent examples), or even the worldwide charity budget. Yet without it, what's gonna happen, other than mass famines and other catastrophes?

I'm not talking about general Malthusian dynamics, but about countries whose population far exceeds the natural resources to support it, and which often do not offer the political environment, the infrastructure or the skills to exploit and develop what resources they have, other than trade them to the Chinese to prop up the ruling classes.

I'd expect a world in which AMF succeeds, leading to predictable tragedies on a more massive scale down the line, to be worse off than a world without AMF, with tragedies on a smaller scale. (To reiterate: a world with AMF succeeding and a long-term perspective for the survivors would be much better still.)

I'd rather contribute not to charities which promise short-term benefits with probable long-term calamities, but to, e.g., education projects and the development of stable civil institutions in such countries. (The picture gets fuzzier because eliminating certain disruptive diseases also has such positive externalities, but to a smaller degree.)

Comment author: [deleted] 30 May 2013 11:18:12AM 3 points [-]

This ignores the social-scientific consensus that reducing infant mortality leads to reductions in family sizes. The moral dilemma you're worried about doesn't exist.

Comment author: Kawoomba 30 May 2013 11:29:26AM *  2 points [-]

Citations needed. The relevant time horizons here are only 2-3 generations, do you suggest that societal norms will adapt faster than that (Edit: without accompanying larger efforts to build civil institutions)? The population explosion in, say, Bangladesh (1951: 42 million, 2011: 142 million) seems to suggest otherwise.

Comment author: satt 30 May 2013 11:54:07PM *  4 points [-]

Citations needed.

The phenomenon HaydnB refers to is the demographic transition, the theory of which is perhaps the best-established theory in the field of demography. Here are two highly-cited reviews of the topic.

The relevant time horizons here are only 2-3 generations, do you suggest that societal norms will adapt faster than that? The population explosion in, say, Bangladesh (1951: 42 million, 2011: 142 million) seems to suggest otherwise.

HaydnB's referring to family size, you're referring to population, and it's quite possible for the second to increase even as the first drops. This appears to be what happened in Bangladesh. I have not found any data stretching back to 1951 for completed family size in Bangladesh, but here is a paper that plots the total fertility rate from 1963 to 1996: it dropped from just under 8 to about 3½. I did find family size data going back to 1951 for neighbouring India: it fell from 6.0 in 1951 to 3.3 in 1997, with a concurrent decrease in infant mortality.

So I'm not HaydnB, but I have to answer your question with a "yes": fertility norms can change, and have changed, greatly in the course of 2-3 generations. Bangladesh's population, incidentally, is due to top out in about 40 years at ~200 million, only 40% higher than its current population.

Comment author: blogospheroid 07 June 2013 07:52:18AM 1 point [-]

If development of newer institutions is what you are interested in, you could choose to contribute to charter cities or seasteading. That would be an intermediate risk-reward option between a low-risk option like AMF and a high-risk, high-reward one like MIRI/FHI.

Comment author: nshepperd 30 May 2013 02:25:21PM 0 points [-]

I'll grant that MIRI could accelerate the creation of AGI, if their efforts to educate people about UFAI risks are particularly ineffective. But as far as UFAI creation at all is concerned, there are any number of very smart idiots in the world who would love to be on the news as "the first person to program an artificial general intelligence". Or to be the first person to use a general AI to beat the stock market, as soon as enough parts of the puzzle have been worked out to make one by pasting together published math results. (Maybe a slightly more self-aware variation of AIXI-mc would do the trick.)

In my view, AGI is more or less inevitable, and MIRI is seemingly the only group publicly interested in making it safe.