Over the past six months I've been repeatedly going back and forth on my attitude toward the value of short-term and/or exclusive focus on existential risk. Here I'll offer some reasons why a utilitarian who recognizes the upside of preventing human extinction may refrain from a direct focus on existential risk reduction. I remain undecided on my attitude toward short-term and/or exclusive focus on existential risk - this article is not rhetorical in intent; I'm just throwing some relevant issues out there.
1. On the subject of FAI research, Prase stated that:
The whole business is based on future predictions of several tens or possibly hundreds of years in advance, which is historically a very unsuccessful discipline. And I can't help but include it in that reference class.
The same can be said of much of the speculation concerning existential risk in general: not so much existential risk due to asteroid strikes or Venusian global warming, but rather the higher-probability but much more amorphous existential risks connected with advanced technologies (general artificial intelligence, whole brain emulation, nanoweapons, genetically engineered viruses, etc.).
A principle widely held by many highly educated people is that it's virtually impossible to predict the future more than a few decades out. Now, one can attempt to quantify "virtually impossible" as a small probability that one's model of the future is correct, and multiply that probability by the numbers that emerge as outputs of one's model in a Fermi calculation; but the multiplier corresponding to "virtually impossible" may be considerably smaller than one might naively suppose...
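To illustrate how much the bottom line of such a calculation hinges on that multiplier, here is a toy sketch in Python with entirely made-up numbers (both the payoff figure and the candidate probabilities are hypothetical, chosen only for illustration):

```python
# Toy illustration with made-up numbers: the expected value produced by a
# Fermi calculation scales linearly with one's guess for "virtually impossible",
# so a guess that is 100x too generous inflates the bottom line by 100x.

value_if_model_correct = 1e12  # hypothetical payoff if one's model of the future is right

for p_model_correct in (1e-1, 1e-2, 1e-4):  # candidate readings of "virtually impossible"
    expected_value = p_model_correct * value_if_model_correct
    print(f"p(model correct) = {p_model_correct:.0e}  ->  expected value = {expected_value:.0e}")
```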
2. As AnnaSalamon said in Goals for which Less Wrong does (and doesn't) help,
conjunctions are unlikely
Assuming that A and B are independent events, the probability of their conjunction is p(A)p(B). So, for example, an event that's the conjunction of n independent events, each with probability 0.1, occurs with probability 10^-n. As humans are systematically biased toward believing that conjunctions are more likely than their conjuncts (at least in certain settings), there's a strong possibility of exponentially overestimating probabilities in the course of Fermi calculations. This is true both of the probability that one's model is correct (given the amount of uncertainty involved in the future as reflected by historical precedent) and of the individual probabilities involved assuming that one's model is correct.
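To make the compounding concrete, here is a minimal sketch in Python, again with made-up numbers: if each of five independent steps in a conjunction is overestimated by a factor of two, the conjunction as a whole is overestimated by a factor of 2^5 = 32.

```python
# Toy illustration with made-up numbers: overestimating each factor of a
# conjunction by 2x compounds into a 2**n overestimate of the whole.

def conjunction(probs):
    """Probability that all independent events occur: the product p(A)*p(B)*..."""
    result = 1.0
    for p in probs:
        result *= p
    return result

true_probs = [0.1] * 5       # hypothetical "true" probabilities of five independent steps
estimated_probs = [0.2] * 5  # each step overestimated by a factor of two

true_p = conjunction(true_probs)            # 0.1**5 = 1e-05
estimated_p = conjunction(estimated_probs)  # 0.2**5 = 3.2e-04

print(f"true conjunction probability:      {true_p:.1e}")
print(f"estimated conjunction probability: {estimated_p:.1e}")
print(f"overestimate factor:               {estimated_p / true_p:.0f}x")  # 32x
```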
Note that I'm not casting doubt on the utility of Fermi calculations as a general matter - Carl Shulman has been writing an interesting series of posts arguing that one can use Fermi calculations to draw reasonable conclusions about political advocacy as philanthropy. However, Carl's posts have been data-driven in a much stronger sense than Fermi calculations about the probabilities of technologically driven existential risks have been.
3. While the efficient market hypothesis may not hold in the context of philanthropy, it's arguable that the philanthropic world is efficient given the human resources and social institutions that are on the table. Majoritarianism is epistemically wrong, but society is quite rigid, and whether successful advocacy of a particular cause is tenable depends in some measure on whether society is ready for it. In Public Choice and the Altruist's Burden, Roko wrote:
I personally have suffered, as have many, from low-level punishment from and worsening of relationships with my family, and social pressure from friends; being perceived as weird. I have also become more weird - spending one's time optimally for social status and personal growth is not at all like spending one's time in a way so as to reduce existential risks. Furthermore, thinking that the world is in grave danger but only you and a select group of people understand makes you feel like you are in a cult due to the huge cognitive dissonance it induces.
Even when epistemically justified in the abstract, focus on fringe causes may take too great a psychological toll on serious supporters for them to be effective in pursuing their goals. To the extent that focus on existential risk requires radical, self-sacrificing altruism, there are dangers of the type described in a comment by Carl Shulman:
Usually this doesn't work out well, as the explicit reasoning about principles and ideals is gradually overridden by other mental processes, leading to exhaustion, burnout, or disillusionment. The situation winds up worse according to all of the person's motivations, even altruism. Burnout means less good gets done than would have been achieved by leading a more balanced life that paid due respect to all one's values. Even more self-defeatingly, if one actually does make severe sacrifices, it will tend to repel bystanders.
4. Because the upside of ensuring human survival is so huge, there's an implicit worldview among certain people on Less Wrong that, e.g., existential risk reduction charities offer the best opportunities for optimal philanthropy. I think that existential risk reduction charities may offer opportunities for optimal philanthropy, but that the premise that this is so largely independently of the quality of the work these charities are doing is essentially parallel to the premise behind Pascal's Wager. In Making your explicit reasoning trustworthy, Anna Salamon wrote:
I find I hesitate when pondering Pascal’s wager, infinite ethics, the Simulation argument, and whether I’m a Boltzmann brain... because I’m afraid of losing my bearings, and believing mistaken things. [...] examples abound of folks whose theories and theorizing (as contrasted with their habits, wordless intuitions, and unarticulated responses to social pressures or their own emotions) made significant chunks of their actions worse.
Use raw motivation, emotion, and behavior to determine at least part of your priorities.
I'm not able to offer a strong logical argument against the use of Pascal's wager or infinite ethics, but I nevertheless feel right in rejecting them as absurd. Similarly, though I'm unable to offer a strong logical argument for doing so (although I've listed some of the relevant intuitions above), I feel right in restricting support to existential risk reduction opportunities that meet some minimal standard of being "sufficiently well-conceived and compelling" - a standard well above that of multiplying the value of ensuring human survival by a crude guess as to the probability that a given intervention will succeed.
Intuitively, the position "it doesn't matter how well executed charity X's activities are; since charity X is an existential risk reduction charity, charity X trumps non-existential-risk charities" is for me a reductio ad absurdum of adopting a conscious, explicit, single-minded focus on existential risk reduction.
Disclaimer: I do not intend for my comments about the necessity of meeting a minimal standard to apply specifically to any existential risk reduction charity on the table. I have huge uncertainties as to the significance of most of the points that I make in this post. Depending on one's assessment of their significance, one could end up either in favor of or against short-term and/or explicit focus on existential risk reduction.
I am interested in the trade-off between directing funds/energy towards explicitly addressing existential risk and directing funds/energy towards education. In anything but the very near term, the number of altruistic, intelligent rationalists appears to be an extremely important determinant of prosperity, chance of survival, etc. There also appears to be a lot of low-hanging fruit, both in improving the rationality of exceptionally intelligent individuals and in increasing the number of moderately intelligent individuals who become exceptionally intelligent.
Right now, investment (especially of intelligent rationalists' time) in education seems much more valuable than direct investment in existential risk reduction.
Eliezer's assessment seems to be that the two projects have approximately balanced payoffs, so that spending time on either at the expense of the other is justified. Is this correct? How do other people here feel?
It seems to me that increasing the number of altruistic, intelligent rationalists via education is just a means of explicitly addressing existential risk, so your comment, while interesting, is not directly relevant to multifoliaterose's post.