[LINK] John Baez Interview with astrophysicist Gregory Benford
The content of John Baez's This Week's Finds: Week 310:
Includes
- Discussion of global warming and geoengineering.
- A reference to a paper by David Wolpert and Gregory Benford on Newcomb's paradox
Note: The upcoming This Week's Finds: Week 311 is an interview with Eliezer Yudkowsky by John Baez.
Some Considerations Against Short-Term and/or Explicit Focus on Existential Risk Reduction
Over the past six months I've been repeatedly going back and forth on my attitude toward the value of short-term and/or exclusive focus on existential risk. Here I'll offer some reasons why a utilitarian who recognizes the upside of preventing human extinction may refrain from a direct focus on existential risk reduction. I remain undecided on my attitude toward short-term and/or exclusive focus on existential risk - this article is not rhetorical in intent; I'm just throwing some relevant issues out there.
1. On the subject of FAI research, Prase stated that:
The whole business is based on predictions of the future several tens or possibly hundreds of years in advance, which is historically a very unsuccessful discipline. And I can't help but include it in that reference class.
The same can be said of much of the speculation concerning existential risk in general: not so much existential risk due to asteroid strikes or Venus-style runaway warming, but rather the higher-probability yet much more amorphous existential risks connected with advanced technologies (artificial general intelligence, whole brain emulation, nanoweapons, genetically engineered viruses, etc.).
A principle widely held by many highly educated people is that it's virtually impossible to predict the future more than a few decades out. Now, one can attempt to quantify "virtually impossible" as a small probability that one's model of the future is correct and multiply it by the numbers that emerge as outputs of one's model of the future in Fermi calculations, but the multiplier corresponding to "virtually impossible" may be considerably smaller than one might naively suppose...
2. As AnnaSalamon said in Goals for which Less Wrong does (and doesn't) help,
conjunctions are unlikely
Assuming that A and B are independent events, the probability of their conjunction is p(A)p(B). So, for example, an event that is the conjunction of n independent events, each with probability 0.1, occurs with probability 10^(-n). Since humans are systematically biased toward believing that conjunctions are more likely than their conjuncts (at least in certain settings), there's a strong possibility of exponentially overestimating probabilities in the course of Fermi calculations. This is true both of the probability that one's model is correct (given the amount of uncertainty about the future reflected in historical precedent) and of the individual probabilities within the model, assuming that the model is correct.
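The exponential compounding described above is easy to see numerically. Here is a minimal sketch; the probabilities are purely illustrative, not estimates from any real model:

```python
def conjunction_probability(probs):
    """Probability that all of a list of independent events occur:
    the product of the individual probabilities."""
    result = 1.0
    for p in probs:
        result *= p
    return result

# Five independent steps, each judged "plausible" at 10%,
# yield a conjunction with probability about 10^(-5):
print(conjunction_probability([0.1] * 5))

# A modest upward bias on each conjunct compounds exponentially.
# Doubling each estimate from 0.1 to 0.2 inflates the final
# probability by a factor of 2^5 = 32:
biased = conjunction_probability([0.2] * 5)
honest = conjunction_probability([0.1] * 5)
print(biased / honest)
```

The point is that small, uniform optimism at each step of a Fermi calculation does not produce small, uniform error in the conclusion; the error grows with the number of conjuncts.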
Note that I'm not casting doubt on the utility of Fermi calculations as a general matter - Carl Shulman has been writing an interesting series of posts arguing that one can use Fermi calculations to draw reasonable conclusions about political advocacy as philanthropy. However, Carl's posts have been data-driven in a much stronger sense than Fermi calculations about the probabilities of technologically driven existential risks have been.
3. While the efficient market hypothesis may not hold in the context of philanthropy, it's arguable that the philanthropic world is efficient given the human resources and social institutions that are on the table. Majoritarianism is epistemically wrong, but society is quite rigid and whether or not successful advocacy of a particular cause is tenable depends in some measure on whether society is ready for it. In Public Choice and the Altruist's Burden Roko wrote
I personally have suffered, as have many, from low-level punishment from and worsening of relationships with my family, and social pressure from friends; being perceived as weird. I have also become more weird - spending one's time optimally for social status and personal growth is not at all like spending one's time in a way so as to reduce existential risks. Furthermore, thinking that the world is in grave danger but only you and a select group of people understand makes you feel like you are in a cult due to the huge cognitive dissonance it induces.
Even when focus on fringe causes is epistemically justified in the abstract, it may take too great a psychological toll on serious supporters for them to pursue their goals effectively. To the extent that focus on existential risk requires radical self-sacrificing altruism, there are dangers of the type described in a comment by Carl Shulman:
Usually this doesn't work out well, as the explicit reasoning about principles and ideals is gradually overridden by other mental processes, leading to exhaustion, burnout, or disillusionment. The situation winds up worse according to all of the person's motivations, even altruism. Burnout means less good gets done than would have been achieved by leading a more balanced life that paid due respect to all one's values. Even more self-defeatingly, if one actually does make severe sacrifices, it will tend to repel bystanders.
4. Because the upside of ensuring human survival is so huge, there's an implicit worldview among certain people on Less Wrong that existential risk reduction charities offer opportunities for optimal philanthropy. I think that existential risk reduction charities may offer opportunities for optimal philanthropy, but the premise that this is so largely independently of the quality of the work these charities are doing is essentially parallel to the premise behind Pascal's Wager. In Making your explicit reasoning trustworthy, Anna Salamon wrote
I find I hesitate when pondering Pascal’s wager, infinite ethics, the Simulation argument, and whether I’m a Boltzmann brain... because I’m afraid of losing my bearings, and believing mistaken things. [...] examples abound of folks whose theories and theorizing (as contrasted with their habits, wordless intuitions, and unarticulated responses to social pressures or their own emotions) made significant chunks of their actions worse.
Use raw motivation, emotion, and behavior to determine at least part of your priorities.
I'm not able to offer a strong logical argument against Pascal's wager or infinite ethics, but I nevertheless feel right to reject them as absurd. Similarly, though I'm unable to offer a strong logical argument for doing so (although I've listed some of the relevant intuitions above), I feel right to restrict support to existential risk reduction opportunities that meet some minimal standard of being "sufficiently well-conceived and compelling", a standard well above that of multiplying the value of ensuring human survival by a crude guess as to the probability that a given intervention will succeed.
Intuitively, the position "it doesn't matter how well executed charity X's activities are; since charity X is an existential risk reduction charity, charity X trumps non-existential-risk charities" is for me a reductio ad absurdum of adopting a conscious, explicit, single-minded focus on existential risk reduction.
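The structural problem with the "value times crude guess" calculation can be made concrete. In the sketch below, every number is a hypothetical placeholder, not an estimate of any real charity or intervention:

```python
# A naive Fermi-style expected-value calculation:
# (value of averting extinction) x (guessed probability the intervention helps).
# All figures are hypothetical placeholders in arbitrary units.

VALUE_OF_SURVIVAL = 1e16  # astronomically large payoff, chosen for illustration

def expected_value(success_probability):
    """Naive expected value of an intervention under the crude model."""
    return VALUE_OF_SURVIVAL * success_probability

# The conclusion is driven almost entirely by the guessed probability:
for p in (1e-6, 1e-9, 1e-12):
    print(p, expected_value(p))

# Moving the guess by three orders of magnitude -- well within the
# uncertainty of "virtually impossible" -- moves the conclusion by
# the same factor, while the huge payoff term never changes.
```

Because the payoff term is fixed and enormous, the ranking of interventions under this model is determined entirely by probability guesses that may have little evidential grounding, which is the sense in which the calculation parallels Pascal's Wager.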
Disclaimer: I do not intend for my comments about the necessity of meeting a minimal standard to apply specifically to any existential risk reduction charity on the table. I have huge uncertainties as to the significance of most of the points that I make in this post. Depending on one's assessment of their significance, one could end up either in favor of or against short-term and/or explicit focus on existential risk reduction.
Friendly AI Research and Taskification
Eliezer has written a great deal about the concept of Friendly AI, for example in a document from 2001 titled Creating Friendly AI 1.0. The new SIAI overview states that:
SIAI's primary approach to reducing AI risks has thus been to promote the development of AI with benevolent motivations which are reliably stable under self-improvement, what we call “Friendly AI” [22].
The SIAI Research Program lists under its Research Areas:
Mathematical Formalization of the "Friendly AI" Concept. Proving theorems about the ethics of AI systems, an important research goal, is predicated on the possession of an appropriate formalization of the notion of ethical behavior on the part of an AI. And, this formalization is a difficult research question unto itself.
Despite the enormous value that the construction of a Friendly AI would have, at present I'm not convinced that researching the Friendly AI concept is a cost-effective way of reducing existential risk. My main reason for doubt is that, as far as I can tell, the problem of building a Friendly AI has not been taskified finely enough for it to be possible to make systematic progress toward a solution. I'm open-minded on this point and quite willing to change my position in light of incoming evidence.
Efficient Charity
I wrote this article in response to Roko's request for an article about efficient charity. As a disclosure of a possible conflict of interest I'll note that I have served as a volunteer for GiveWell. Last edited 12/06/10.
Charitable giving is widely considered to be virtuous and admirable. If statistical behavior is any guide, most people regard charitable donations as worthwhile expenditures. In 2001 a full 89% of American households donated money to charity, and during 2009 Americans donated $303.75 billion to charity [1].
A heartbreaking fact about modern human experience is that there's little connection between such generosity and positive social impact. Humans evolved charitable tendencies because such tendencies served as a marker to nearby humans that a given individual is a dependable ally. Those who expend their resources to help others are more likely than others to care about people in general, and are therefore more likely to care about their companions. But one can tell that people care based exclusively on their willingness to make sacrifices, independently of whether those sacrifices actually help anybody.
Modern human society is very far removed from our ancestral environment. Technological and social innovations have made it possible for us to influence people on the other side of the globe and potentially to have a profound impact on the long term survival of the human race. The current population of New York is ten times the human population of the entire world in our ancestral environment. In view of these radical changes it should be no surprise that the impact of a typical charitable donation falls staggeringly short of the impact of donation optimized to help people as much as possible.
While this may not be a problem for donors who are unconcerned about their donations helping people, it's a huge problem for donors who want their donations to help people as much as possible and it's a huge problem for the people who lose out on assistance because of inefficiency in the philanthropic world. Picking out charities that have high positive impact per dollar is a task no less difficult than picking good financial investments and one that requires heavy use of critical and quantitative reasoning. Donors who wish for their donations to help people as much as possible should engage in such reasoning and/or rely on the recommendations of trusted parties who have done so.
DRAFT: Three Intellectual Temperaments: Birds, Frogs and Beavers
Here is a draft of a potential top-level post which I'd welcome feedback on. I would appreciate any suggestions, corrections, additional examples, qualifications, or refinements.
Monetary Incentives and Performance
I've been thinking about incorporating my Vanity and Ambition in Mathematics into a top-level posting. If possible I would like to situate my remarks and the quotations that I cite with respect to the existing experimental psychology literature. When I've discussed the material in the aforementioned article with people in psychology, they've sometimes made reference to recent findings that monetary incentives reduce performance on certain kinds of tasks, perhaps suggesting that intrinsic rather than extrinsic motivation is key for performance on such tasks.
I'll do my own research, but does anybody know of any relevant studies?
Beauty in Mathematics
Serious mathematicians are often drawn toward the subject and motivated by a powerful aesthetic response to mathematical stimuli. In his essay on Mathematical Creation, Henri Poincare wrote
It may be surprising to see emotional sensibility invoked à propos of mathematical demonstrations which, it would seem, can interest only the intellect. This would be to forget the feeling of mathematical beauty, of the harmony of numbers and forms, of geometric elegance. This is a true aesthetic feeling that all real mathematicians know, and surely it belongs to emotional sensibility.
The prevalence and extent of the feeling of mathematical beauty among mathematicians are not well known. In this article I'll describe some of the reasons for this and give examples of the phenomenon. I've drawn many of the quotations in this article from the extensive collection of quotations compiled by my colleague Laurens Gunnarsen.
Vanity and Ambition in Mathematics
In my time in the mathematical community I've formed the subjective impression that mathematicians of the highest caliber engage in status games noticeably less often than members of the general population do. This impression is consistent with the modesty that comes across in the writings of such mathematicians. I record some relevant quotations below and then discuss interpretations of the situation.
Acknowledgment - I learned of the Hironaka interview quoted below from my colleague Laurens Gunnarsen.
Edited 10/12/10 to remove the first portion of the Hironaka quote which didn't capture the phenomenon that I'm trying to get at here.
Great Mathematicians on Math Competitions and "Genius"
As I mentioned in Fields Medalists on School Mathematics, school mathematics usually gives a heavily distorted picture of mathematical practice. It's common for bright young people to participate in math competitions, an activity closer to genuine mathematical practice. Unfortunately, while math competitions may be more representative of mathematical practice than school mathematics is, they are themselves greatly misleading. Furthermore, they've become tied to a misleading mythological conception of "genius." I've collected relevant quotations below.
Acknowledgment - I obtained some of these quotations from a collection of mathematician quotations compiled by my colleague Laurens Gunnarsen.
Fields Medalists on School Mathematics
Most people form their impressions of math from their school mathematics courses. The vast majority of these courses distort the nature of mathematical practice, leading to widespread misconceptions about it. There's a long history of high-caliber mathematicians finding their experiences with school mathematics alienating or irrelevant. I think this should be better known. Here I've collected some relevant quotes.
I'd like to write some Less Wrong articles dispelling common misconceptions about mathematical practice but am not sure how to frame these hypothetical articles. I'd welcome any suggestions.
Acknowledgment - I obtained some of these quotations from a collection of mathematician quotations compiled by my colleague Laurens Gunnarsen.