Last chance to donate for 2011

4 multifoliaterose 30 December 2011 06:25PM

Many LW readers choose to direct their charitable donations to SingInst with a view toward reducing existential risk. Others do not, whether because they feel they lack an understanding of the relevant issues, because they value present-day humans more than future humans, or because they are concerned about the incentive effects that donating to SingInst at present would create. I personally feel that there's a strong case for saving money to donate later, on account of better information being available in the future.

However, I feel some cognitive dissonance attached to saving to donate later rather than donating now. If you are in this camp, you might consider donating to GiveWell's top-ranked charities. Also note that spreading the word about GiveWell promotes a culture of effective philanthropy, which is likely to have the spin-off effect of interesting people in x-risk reduction, thereby reducing x-risk.

See Holden's article on last-minute donations, http://blog.givewell.org/2011/12/30/last-minute-donations/ :

"Of the money moved to our top charities through our website in 2010, 25% was on December 31st alone. We know that lots of people will be looking to make last-minute donations.

If you only have five minutes but you want to take advantage of the thousands of hours of work we put into finding the best giving opportunities, consider giving to our top charities. They have strong track records, accomplish a lot of good per dollar spent, and have good concrete plans for how to use additional donations.

A couple of things to keep in mind:

  • After you give, spread the word. This is the perfect time to remind people (via Facebook sharing, tweeting, etc.) to give before the year ends. And people making last-minute gifts are likely to be receptive to suggestions.
  • If you have any questions, we’re here to help. We should be available by phone for most of the day, and responding to email when we’re not. (See our contact page). Our research FAQ may also be a good resource."

[Link] New arm of GiveWell Research: GiveWell Labs

14 multifoliaterose 08 September 2011 09:23PM

Announcing GiveWell Labs

We’re now launching a new initiative within GiveWell that will not be subject to either of these constraints. We plan to invest about 25% of our research time in what we’re calling GiveWell Labs: an arm of our research process that will be open to any giving opportunity, no matter what form and what sector.

Through GiveWell Labs, we will try to identify outstanding giving opportunities (whether they’re organizations or specific projects), publish rankings of these giving opportunities (separate from the top charities list we maintain using our existing research process) and try to raise money for these opportunities. Donors have pre-committed a minimum of $1 million to the GiveWell Labs initiative, meaning that we will have at least $1 million to commit to our choice of projects even if we are able to raise nothing else. (We expect to raise more if and when we find great giving opportunities; the $1 million has been committed based on donors’ trust in our ability to find such opportunities.)

Impact of India-Pakistan nuclear war on x-risk?

6 multifoliaterose 03 September 2011 05:14AM

Last month I was involved in a conversation thread about what the impact of a hypothetical nuclear war would be on existential risk.

There are many potential nuclear war scenarios, with varying impacts on existential risk. It's difficult to know where to start in gaining an understanding of the long-term effects of nuclear proliferation.

For concreteness, consider the case of an India-Pakistan nuclear war.

According to Local Nuclear War, Global Suffering by Robock and Toon,

India and Pakistan, long at odds, have more than 50 nuclear warheads apiece; if each country dropped that many bombs on cities and industrial areas, the smoke from fires would stunt agriculture worldwide for 10 years. 

[...]

1 billion people worldwide with marginal food supplies today could die of starvation because of ensuing agricultural collapse

Note that this would presumably cause some degree of chaos in the developed world.

I have not yet investigated the credibility of the paper's claims. Taking them at face value, however:

Suppose that an all-out nuclear war between India and Pakistan were to occur and were to result in climate change killing 1 billion people. Would the probability of a positive singularity increase or decrease, and why?

This question seems very difficult to answer; maybe altogether too difficult for humans to answer. I welcome responses raising relevant considerations even in the absence of a good way to weigh them against one another. Please read the linked conversation thread before commenting.

[LINK] Brief Discussion of Asteroid & Nuclear Risk from paper by Hellman

5 multifoliaterose 17 August 2011 08:07PM

From Risk Analysis of Nuclear Deterrence by Martin Hellman. See also http://nuclearrisk.org/

A full-scale nuclear war is not the only threat to humanity’s continued existence, and we should allocate resources commensurate with the various risks. A large asteroid colliding with the Earth could destroy humanity in the same way it is believed the dinosaurs disappeared 65 million years ago. Such NEO (near earth object) extinction events have a failure rate on the order of 10^-8 per year [Chapman & Morrison 1994].

During one century, that failure rate corresponds to one chance in a million of humanity being destroyed. While 10^-6 is a small probability, the associated cost is so high—infinite from our perspective—that some might argue that a century is too long a delay before working to reduce the threat. Fortunately, significant threat reduction has recently occurred. Over the last 20 years, NASA’s Spaceguard effort is believed to have found all such potentially hazardous large asteroids, and none is predicted to strike Earth within the next century. With a hundred-year safety window in place, resolution of later potential impacts can be deferred for a few decades until our technology is significantly enhanced. Comets also pose a threat, and their more eccentric orbits make them harder to catalog, but their lower frequency of Earth impact makes the associated risk acceptable for a limited period of time.

[...]

While much less accurate than the in-depth studies proposed herein, it is instructive to estimate the failure rate of deterrence due to just one failure mechanism, a Cuban Missile Type Crisis (CMTC). Because it neglects other trigger mechanisms such as command-and-control malfunctions and nuclear terrorism, this appendix underestimates the threat. This simplified analysis uses the time-invariant model described in footnote 3. It also assumes that the experience of the first 50 years of deterrence can be extended into the future.

[...]

Since conditional probabilities were used, they can be multiplied, yielding an estimated range of (2•10^-4, 5•10^-3) for [...] the failure rate of deterrence based on just this one failure mechanism. The upper limit 5•10^-3 is within a factor of two of my estimate that the failure rate of deterrence from all sources is on the order of one percent per year, and even the lower limit is well above the level that any engineering design review would find acceptable.
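As a quick sanity check on the excerpt's figures, here is a minimal sketch (in Python; the function name is mine, not the paper's) of the constant-annual-rate conversion the appendix relies on: a fixed per-year failure probability p gives probability 1 - (1 - p)^t of at least one failure over t years, which is approximately p·t when p·t is small.

```python
# Convert a constant annual failure probability into the probability of
# at least one failure over t years (the time-invariant model the
# excerpt refers to). For small p*t this is approximately p*t.
def prob_of_failure(p_annual, years):
    return 1 - (1 - p_annual) ** years

print(prob_of_failure(1e-8, 100))  # asteroid: ~1e-6, "one chance in a million"
print(prob_of_failure(5e-3, 100))  # upper CMTC estimate: ~0.39 per century
print(prob_of_failure(1e-2, 100))  # ~1%/year from all sources: ~0.63 per century
```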

Decimal digit computations as a testing ground for reasoning about probabilities

4 multifoliaterose 15 July 2011 04:19AM

Points in this article emerged from a conversation with Anna Salamon.

I think that thinking about decimal expansions of real numbers provides a good testing ground for one's intuition about probabilities. The context of computation is very different from most of the contexts that humans deal with; in particular it's much cleaner. As such, this testing ground should not be used in isolation; the understanding that one reaps from it needs to be integrated with knowledge from other contexts. Despite its limitations I think that it has something to add.

Given a computable real number x, a priori the probability that any string of n decimal digits comprises the first n decimal digits of x is 10^-n. For concreteness, we'll take x to be pi. It has long been conjectured that pi is a normal number. This is consistent with the notion that the digits of pi are "random" in some sense, and in this respect pi contrasts with (say) rational numbers and Liouville's constant.

According to the Northwestern University homepage, pi has been computed to five trillion digits. So, to the extent that one trusts the result of the computation, there exists a statement which had an a priori probability of 10^-n, with n > 5•10^12, of being true, and which we now know to be true with high confidence. How much should we trust the computation? Well, I don't know whether it's been verified independently, and there are a variety of relevant issues about which I know almost nothing (coding issues; hardware issues; the degree of rigor with which the algorithm used has been proven to be correct, etc.). One would have more confidence if one knew that several independent teams had succeeded in verifying the result using different algorithms & hardware. One would have still more confidence if one were personally involved in such a team and became convinced of the solidity of the methods used. Regardless:

(a) As early as 1962, mathematicians had computed pi to 10^5 digits. Presumably since then this computation has been checked many times over by a diversity of people and methods. Trusting a single source is still problematic, as there may have been a typo or whatever, but it seems uncontroversial to think that if one uses the nearest apparently reliable computational package (say, Wolfram Alpha), then the chance that the output is correct is > 10%. Thus we see how an initial probability estimate of 10^-100000 can rise to a probability over 10^-1 in practice.

(b) If one were determined, one could probably develop ~90% confidence in the accuracy of the first billion digits of pi. I say this because computational power and algorithms have permitted such a vast computation for over 20 years; presumably, by studying, testing and tweaking all programs written since then, one could do many checks on the accuracy of each of the first billion digits. Assuming that this is possible, an initial probability estimate of 10^-1000000000 can in practice grow to > 0.9. (A toy version of the kind of independent cross-check at issue is sketched below.)
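To make the idea of independent verification concrete, here is a minimal sketch, assuming the mpmath Python library; the helper names are mine. It compares digits of pi from mpmath's built-in constant against a from-scratch computation via Machin's formula, pi/4 = 4·arctan(1/5) - arctan(1/239), carried out in pure integer arithmetic. Agreement between two methods sharing no code is exactly the kind of evidence that pushes the probability of correctness up.

```python
from mpmath import mp

def arctan_inv(x, scale):
    """Return arctan(1/x) * scale, truncated, via the alternating Taylor
    series, using only integer arithmetic."""
    total = 0
    power = scale // x          # scale / x^(2k+1), truncated
    k = 0
    while power:
        term = power // (2 * k + 1)
        total += -term if k % 2 else term
        power //= x * x
        k += 1
    return total

def pi_digits_machin(n):
    """First n decimal digits of pi (including the leading 3) via
    Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)."""
    guard = 10                  # extra digits to absorb truncation error
    scale = 10 ** (n + guard)
    pi_scaled = 16 * arctan_inv(5, scale) - 4 * arctan_inv(239, scale)
    return str(pi_scaled)[:n]

def pi_digits_mpmath(n):
    """First n decimal digits of pi from mpmath's built-in constant."""
    mp.dps = n + 10             # working precision, in decimal digits
    return mp.nstr(mp.pi, n + 5).replace(".", "")[:n]

# Two independent computations agreeing on the first 100 digits:
assert pi_digits_machin(100) == pi_digits_mpmath(100)
```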

This shows that probabilities which are apparently very small can rapidly become quite large with the influx of new information. There's more that I could say about this, but I think that the chunk I've written so far is enough to warrant posting, and that the rest of my thoughts are sufficiently ill-formed that I shouldn't try to say more right now. I welcome thoughts and comments.

Efficient philanthropy: local vs. global approaches

8 multifoliaterose 16 June 2011 04:21AM

Edit: Carl Shulman made some remarks that have caused me to seriously question the soundness of the final section of this post. More on this at the end of the post.

Consider the following two approaches to philanthropy:

The “local” approach (associated with "satisficing") is to consider those philanthropic opportunities that are "close to oneself" in some sense (immediately salient, within one's own communities, in one's domains of expertise). The “global” approach (associated with "maximizing") is to survey the philanthropic landscape, search for the best philanthropic opportunities in absolute terms, and devote oneself to those.

In practice, nobody's approach to philanthropy is entirely global: one is necessarily limited both by the fact that the range of possibilities salient to oneself is smaller than the total range of possibilities, and by one's limited computational power. But there is nevertheless substantial variation in how global or local individuals' approaches to philanthropy are.

Here I'll compare the pros and cons of the local approach and the global approach and attempt to arrive at some sort of synthesis of the two.

Disclosure: I volunteer for GiveWell and may work for them in the future.

continue reading »

Model Uncertainty, Pascalian Reasoning and Utilitarianism

23 multifoliaterose 14 June 2011 03:19AM

Related to: Confidence levels inside and outside an argument, Making your explicit reasoning trustworthy

A mode of reasoning that sometimes comes up in discussion of existential risk is the following.


Person 1: According to model A (e.g. some Fermi calculation with probabilities coming from certain reference classes), pursuing course of action X will reduce existential risk by 10^-5; existential risk has an opportunity cost of 10^25 DALYs (*), therefore model A says the expected value of pursuing course of action X is 10^20 DALYs. Since course of action X requires 10^9 dollars, the number of DALYs saved per dollar invested in course of action X is 10^11. Hence course of action X is 10^10 times as cost-effective as the most cost-effective health interventions in the developing world.

Person 2: I reject model A; I think that appropriate probabilities involved in the Fermi calculation may be much smaller than model A claims; I think that model A fails to incorporate many relevant hypotheticals which would drag the probability down still further.

Person 1: Sure, it may be that model A is totally wrong, but there's nothing obviously very wrong with it. Surely you'd assign at least a 10^-5 chance that it's on the mark? More confidence than this would seem to indicate overconfidence bias; after all, plenty of smart people believe in model A, and it can't be that likely that they're all wrong. So unless you think that the side-effects of pursuing course of action X are systematically negative, even your own implicit model gives a figure of at least 10^5 DALYs saved per dollar, and that's a far better investment than any other philanthropic effort that you know of, so you should fund course of action X even if you think that model A is probably wrong.

(*) As Jonathan Graehl mentions, DALY stands for Disability-adjusted life year.
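For concreteness, here is Person 1's arithmetic worked out in Python. Every figure is the hypothetical one from the dialogue above, not a real estimate, and the ~10 DALYs-per-dollar benchmark for top health interventions is simply what the dialogue's 10^10 ratio implies.

```python
# Person 1's Fermi calculation, using only the dialogue's hypothetical figures.
risk_reduction = 1e-5    # model A: reduction in existential risk from action X
stake = 1e25             # opportunity cost of existential catastrophe, in DALYs
cost = 1e9               # cost of course of action X, in dollars

ev_dalys = risk_reduction * stake        # 1e20 DALYs under model A
dalys_per_dollar = ev_dalys / cost       # 1e11 DALYs per dollar
best_health = 10.0                       # implied benchmark for top health interventions
print(dalys_per_dollar / best_health)    # 1e10 times as cost-effective, per Person 1

# Person 1's fallback step: discount by a 1e-5 credence that model A is
# right (treating it as worthless otherwise); the result still clears the
# "at least 1e5 DALYs per dollar" bar quoted in the dialogue.
p_model_a_correct = 1e-5
print(p_model_a_correct * dalys_per_dollar)  # 1e6 DALYs per dollar
```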

continue reading »

Generalizing From One Example & Evolutionary Game Theory

4 multifoliaterose 31 May 2011 11:23PM

Back in April 2010 Robin Hanson wrote a post titled Homo Hypocritus Signals. Hal Finney wrote a comment

This reasoning does offer an explanation for why big brains might have evolved, to help walk the line between acceptable and unacceptable behavior. Still it seems like the basic puzzle remains: why is this hypocrisy unconscious? Why do our conscious minds remain unaware of our subconscious signaling?

to which Vladimir M responded. This post is a short addendum to the discussion.

In Generalizing From One Example Yvain wrote

There's some evidence that the usual method of interacting with people involves something sorta like emulating them within our own brain. We think about how we would react, adjust for the other person's differences, and then assume the other person would react that way.

It's plausible that the evolutionary pathway to developing an internal model of other people's minds involved bootstrapping from one's awareness of one's own mind. This would work well to the extent that there was psychological unity of humankind. In our evolutionary environment, the people who interacted with each other were more similar to one another than people are today.

I don't understand many of the decision theory posts on Less Wrong, but my impression is that the settings in which one is better off with timeless decision theory or updateless decision theory than with causal decision theory are situations in which the other agents have a good model of one's own internal wiring.

This, together with one's model of others being based on one's model of one's own mind, and with the psychological unity of humankind, would push in the direction of the conscious mind adapting to something like timeless/updateless decision theory, based around cooperating with others. But the unconscious mind would then have free rein to push in the direction of defection (say, in one-shot prisoner's dilemma situations), because others would not have conscious access to their own tendency toward defection, and consequently would not properly emulate this tendency in their model of the other person. (A toy model of this dynamic is sketched below.)
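Here is that toy model, with all the obvious caveats: the agents, payoff numbers, and decision rules below are illustrative inventions of mine, not anything from the decision theory literature. Each agent predicts its counterpart by running its own conscious decision rule, so a hidden unconscious bias toward defection never shows up in anyone's emulation, and its bearer exploits a one-shot prisoner's dilemma.

```python
class Agent:
    def __init__(self, hidden_defection_bias=False):
        self.hidden_defection_bias = hidden_defection_bias

    def conscious_rule(self, predicted_other):
        # Conscious, cooperation-oriented policy: cooperate iff the other
        # is predicted to cooperate.
        return "C" if predicted_other == "C" else "D"

    def predict_other(self):
        # Emulate the other by running one's OWN conscious rule -- the
        # "generalizing from one example" shortcut. The recursion is cut
        # off by assuming cooperation at the base; crucially, hidden
        # biases (one's own or the other's) never enter the emulation.
        return self.conscious_rule(predicted_other="C")

    def move(self):
        chosen = self.conscious_rule(self.predict_other())
        # The unconscious bias silently overrides the conscious choice.
        return "D" if self.hidden_defection_bias else chosen

# Standard one-shot prisoner's dilemma payoffs: (row player, column player).
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

honest, hypocrite = Agent(), Agent(hidden_defection_bias=True)
moves = (honest.move(), hypocrite.move())
print(moves, PAYOFFS[moves])  # ('C', 'D') (0, 5): the hypocrite exploits
```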

The analysis given here is overly simplistic; for example, quoting myself:

The conscious vs. unconscious division is not binary but gradualist. There are aspects of one's thinking that one is very aware of, aspects that one is somewhat aware of, aspects that one is obliquely aware of, aspects that one could be aware of if one was willing to pay attention to them, and aspects that one has no access to.

but it rings true to me in some measure.

[Reference request] Article by scientist giving lower and upper bounds on the probability of superintelligence

2 multifoliaterose 08 May 2011 10:16PM

A few months back somebody posted an article by a scientist giving lower and upper bounds on the probability of superintelligence. He broke the calculation up into a Fermi calculation with three parts (EDIT: see LocustBeanGum's answer). Does anybody remember this article, and if so, can you provide a link?

John Baez Interviews with Eliezer (Parts 2 and 3)

7 multifoliaterose 29 March 2011 05:36PM

John Baez's This Week's Finds (Week 311) [Part 1; added for convenience following Nancy Lebovitz's comment]

John Baez's This Week's Finds (Week 312)

John Baez's This Week's Finds (Week 313)

I really like Eliezer's response to John Baez's last question in Week 313 about environmentalism vs. AI risks. I think it satisfactorily deflects much of the concern that I had when I wrote The Importance of Self-Doubt.

Eliezer says

Anyway: In terms of expected utility maximization, even large probabilities of jumping the interval between a universe-history in which 95% of existing biological species survive Earth’s 21st century, versus a universe-history where 80% of species survive, are just about impossible to trade off against tiny probabilities of jumping the interval between interesting universe-histories, versus boring ones where intelligent life goes extinct, or the wrong sort of AI self-improves.

This is true as stated, but it ignores an important issue: there is feedback between more mundane current events and the eventual potential extinction of the human race. For example, the United States' involvement in Libya has a (small) influence on existential risk (I don't have an opinion as to what sort). Any impact on human society due to global warming has some influence on existential risk.

Eliezer's points about comparative advantage and about existential risk in principle dominating all other considerations are valid, important, and well made, but passing from principle to practice is very murky in the complex human world that we live in.

Note also the points that I make in Friendly AI Research and Taskification.
