Douglas_Knight comments on Why We Can't Take Expected Value Estimates Literally (Even When They're Unbiased) - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
This seems so vague and abstract.
Let me suggest a concrete example: the existential risk of asteroid impacts. It is pretty easy to estimate the distribution of time till the next impact big enough to kill all humans. Astronomy is pretty well understood, so it is pretty easy to estimate the cost of searching the sky for dangerous objects. If you imagine this as an ongoing project, there is the problem of building lasting organizations. In the unlikely event that you find an object that will strike in a year, or in 30, there is the more difficult problem of estimating the chance it will be dealt with.
It would be good to see your take on this example, partly to clarify this article and partly to isolate some objections from others.
This was, in fact, the first example I ever brought to Holden. IMHO he never really engaged with it, but he did find it interesting and maintained a correspondence which eventually brought him to the FAI point. (All this was long before I was formally involved with SIAI.)
Sort of. The possibility of mirror matter objects makes this pretty difficult. There's even a reasonable-if-implausible paper arguing that a mirror object caused the Tunguska event, along with many other allegedly anomalous impacts over the last century. There are a lot of astronomical reasons to take the idea seriously, e.g. IIRC the moon has three times too many craters. There are quite a few solid-looking academic papers on the subject, though many of them are by a single author, Foot. My refined impression was p = .05 for mirror matter existing in a way that is decision-theoretically significant (e.g. mirror meteors), lower than my original impression because mirror matter in general has attracted weirdly little academic interest. But so do a lot of interesting things.
By "mirror matter", I assume you mean what is more commonly known as "anti-matter"?
No, mirror matter: what you get if parity isn't actually broken. See http://scholar.google.com/scholar?hl=en&q=mirror+matter&btnG=Search&as_sdt=0%2C5&as_ylo=&as_vis=0 and http://en.wikipedia.org/wiki/Mirror_matter
Huh. Glad I asked.
My initial impression is that the low interaction rate with ordinary matter makes this a poor explanation for anomalous impacts. But I obviously haven't examined it in anywhere near enough detail.
See elsewhere in the thread. E.g. http://arxiv.org/abs/hep-ph/0107132
I did see those replies. Thanks.
Yes, you should compute the danger multiple ways, counting asteroids, craters, and extinction events. If there are 3x too many craters, then it may be that 2/3 of impacts are caused by objects that we can't detect. Giving up on solving the whole or even most of the problem may sound bad, but it just reduces the expected value by a factor of 3, which is pretty small in this context.
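The factor-of-3 point can be made explicit with a toy calculation. A minimal sketch, where the 2/3-undetectable fraction comes from the comment above and every other number is hypothetical, chosen only for illustration:

```python
# Hypothetical illustration: if 2/3 of impactors are undetectable (e.g. mirror
# matter), a detection program still addresses 1/3 of the total risk.
# All numbers below are made up for illustration.

p_impact_per_century = 1e-4      # hypothetical chance of an extinction-level impact
value_of_averting = 1.0          # normalize the value of averting one such impact
fraction_detectable = 1.0 / 3.0  # if craters suggest 3x as many impactors as we can see

ev_full_coverage = p_impact_per_century * value_of_averting
ev_partial_coverage = ev_full_coverage * fraction_detectable

print(ev_full_coverage / ev_partial_coverage)  # ratio is a factor of ~3
```

The point being that a factor of 3 barely moves a cost-effectiveness estimate whose inputs are uncertain across orders of magnitude.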
Reading the Wikipedia article, I don't really see how mirror matter would be dangerous. It describes it as being about as dangerous as neutrinos or something:
Read the papers; Wikipedia is Wikipedia. Kinetic mixing can be strong. The paper on Tunguska is really quite explanatory. (Sorry, I don't mean to be brusque; I'm just allergic to LW at the moment.) ETA: http://arxiv.org/abs/astro-ph/0309330 is the most cited one, I think. ETA2 (after gwern replied): that is the most cited paper about mirror matter implications in general, not about Tunguska. For Tunguska, see http://arxiv.org/abs/hep-ph/0107132
The part on Tunguska doesn't really explain it, though; it simply assumes a mirror matter object could do that, and then spends more time on how mirror matter explains the lack of observed fragments and how remaining mirror matter could be detected. The one relevant line seems to be:
It must be explained elsewhere, or else the implications of ε ~ 10^-8 – 10^-9 must be obvious to a physicist. How annoying...
Here you go: http://arxiv.org/abs/hep-ph/0107132
OK, I think that explains that - Wikipedia is making the first assumption identified below, rather than the other one that he prefers:
No, Wikipedia mentions kinetic mixing and then says that if it exists it must be weak; Wikipedia doesn't say it wouldn't exist (the evidence suggests it would). The Wikipedia article is just wrong. (ETA: I mean, it is just wrong to assume that the mixing is weak.) (Unless I'm misinterpreting what you mean by "the first assumption identified below"?)
What I meant was that both the paper and Wikipedia regard kinetic mixing as weak and relatively unimportant; they then differ about the next effect, the one that would be strong and would matter for Tunguska.
I studied particle physics for a couple of decades, and I would not worry much about "mirror matter objects". Mirror matter is just one of many possibilities that physicists have dreamt up: there's no good evidence that it exists. Yes, maybe every known particle has an unseen "mirror partner" that only interacts gravitationally with the stuff we see. Should we worry about this? If so, we should also worry about CERN creating black holes or strangelets, more theoretical possibilities not backed up by any good evidence. True, mirror matter is one of many speculative hypotheses that people have invoked to explain some peculiarities of the Tunguska event, but I'd say a comet is a lot more plausible.
Asteroid collisions, on the other hand, are known to have happened and to have caused devastating effects. NASA currently rates the chances of the asteroid Apophis colliding with the Earth in 2036 at 4.3 out of a million. They estimate that the energy of such a collision would be comparable with a 510-megatonne thermonuclear bomb. This is ten times larger than the largest bomb actually exploded, the Tsar Bomba. The Tsar Bomba, in turn, was ten times larger than all the explosives used in World War II.
On the bright side, even if it hits us, Apophis will probably just cause local damage. The asteroid that hit the Earth at Chicxulub and killed off the dinosaurs released an energy comparable to a 240,000-megatonne bomb. That's the kind of thing that really ruins everyone's day.
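The energy ratios in these two comments are easy to verify with back-of-the-envelope arithmetic, using only the figures quoted above (Tsar Bomba ~50 Mt, Apophis ~510 Mt, Chicxulub ~240,000 Mt):

```python
# Impact/bomb energies as quoted in the comments, in megatonnes of TNT.
tsar_bomba = 50.0        # largest bomb actually exploded
apophis = 510.0          # estimated energy of a hypothetical Apophis impact
chicxulub = 240_000.0    # estimated energy of the dinosaur-killing impact

print(apophis / tsar_bomba)   # ~10x Tsar Bomba, as stated
print(chicxulub / apophis)    # Chicxulub is ~470x a worst-case Apophis strike
```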
Mirror matter is indeed very speculative, but surely not less than 4.3 out of a million speculative, no? Mirror matter is significantly more worrisome than Apophis. I have no idea whether it's more or less worrisome than the entire set of normal-matter Apophis-like risks; does anyone have a link to a good (non-alarmist) analysis of impact risks for the next century? Snippets of Global Catastrophic Risks seem to indicate that they're not a big concern relatively speaking.
ETA: argh, anthropics messes up everything
Reality check: mirror matter would still have a gravitational signature, so we know some 99% of non-stellar matter in the solar system is not mirror matter; otherwise we would see that signature. So we can ignore it with only a minor error.
Dark matter.
There evidently aren't many "clumps" of that in the solar system - so we don't have to worry very much about hypothetical collisions with it.
Sure. According to the Bayesian adjustment framework described in the article, in principle the thing to do would be to construct a 95% confidence interval for the impact of an asteroid strike prevention effort, use it to obtain a variance for the distribution of that impact, and then perform the Bayesian regression. As you comment, some of the numbers going into the cost-effectiveness calculation are tightly constrained in value on account of being well understood, and others are not. The bulk of the variance would come from the numbers which are not tightly constrained.
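The procedure just described can be sketched in a few lines. This is a minimal normal-prior, normal-error version of the adjustment from Holden's post; all the numbers (the prior, the estimate, the confidence interval, even the units) are hypothetical:

```python
# A minimal sketch of the Bayesian adjustment: regress an explicit
# cost-effectiveness estimate toward a prior, weighting by precision.
# All numbers are hypothetical; say the units are "lives saved per $1000".

def variance_from_95ci(lower, upper):
    """Infer the variance of a normally distributed estimate from its 95% CI."""
    sd = (upper - lower) / (2 * 1.96)
    return sd ** 2

def bayesian_adjust(prior_mean, prior_var, est_mean, est_var):
    """Precision-weighted average of prior and estimate (normal-normal update)."""
    w_prior = 1.0 / prior_var
    w_est = 1.0 / est_var
    post_mean = (w_prior * prior_mean + w_est * est_mean) / (w_prior + w_est)
    post_var = 1.0 / (w_prior + w_est)
    return post_mean, post_var

# A high estimate with a very wide 95% confidence interval...
est_var = variance_from_95ci(0.1, 40.0)
adjusted_mean, _ = bayesian_adjust(prior_mean=1.0, prior_var=4.0,
                                   est_mean=20.0, est_var=est_var)
print(adjusted_mean)  # ...gets pulled most of the way back toward the prior
```

This is the article's central point in miniature: the wider the interval around the estimate, the more the posterior sits near the prior rather than near the estimate.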
But as Holden suggests in the final section of the post, titled "Generalizing the Bayesian approach", a purely formal analysis should probably be augmented by heuristics.
Saying "Yes, I can apply this framework to concrete examples," does not actually make anything more concrete.
Did Holden ever do the calculation or endorse someone else's calculation? What heuristic did he use to reject the calculation? "Never pursue a small chance of a large effect"? "Weird charities don't work"?
If you calculate that this is ineffective or use heuristics to reject the calculation, I'd like to see this explicitly. Which heuristics?
Which calculation are you referring to? In order to do a calculation one needs to have in mind a specific intervention, not just "asteroid risk prevention" as a cause.
Before worrying about specific interventions, you can compute an idealized version as in, say, the Copenhagen Consensus. There are existing asteroid detection programs. I don't know if any of them take donations, but this does allow assessments of realistic organizations. At some level of cost-effectiveness, you have to consider other interventions, like starting your own organization or promoting the cause. Not having a list of interventions is no excuse for not computing the value of intervening.
I would guess that it's fairly straightforward to compute the cost-effectiveness of an asteroid strike reduction program to within an order of magnitude in either direction.
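One way to see why such an estimate is "fairly straightforward": it only involves a few quantities, each knowable to within an order of magnitude or so. A Fermi-style sketch, with every input hypothetical and chosen only for illustration:

```python
# A Fermi-style sketch of asteroid-program cost-effectiveness.
# Every input below is hypothetical, for illustration only.
p_large_impact_per_year = 1e-8   # chance per year of an extinction-class impact
deaths_if_impact = 7e9           # rough world population
program_cost_per_year = 1e8      # dollars per year for detection/deflection
p_program_averts = 0.5           # chance the program prevents a given impact

expected_deaths_averted = (p_large_impact_per_year * deaths_if_impact
                           * p_program_averts)
cost_per_expected_life = program_cost_per_year / expected_deaths_averted
print(f"${cost_per_expected_life:,.0f} per expected life saved")
```

Each input could easily be off by a factor of a few, which is how the output ends up uncertain by roughly an order of magnitude in either direction, as suggested above.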
The situation becomes much more complicated with assessing the cost-effectiveness of something like a "Friendly AI program" where the relevant issues are so much more murky than the issues relevant to asteroid strike prevention.
GiveWell is funded by a committed base of donors. It's not clear to me that these donors are sufficiently interested in x-risk reduction so that they would fund GiveWell if GiveWell were to focus on finding x-risk reduction charities.
I think that it's sensible for GiveWell to have started by investigating the cause of international health. This has allowed them to gain experience, credibility and empirical feedback which has strengthened the organization.
Despite the above three points I share your feeling that at present it would be desirable for GiveWell to put more time into studying x-risks and x-risk reduction charities. I think that they're now sufficiently established so that at the margin they could do more x-risk related research while simultaneously satisfying their existing constituents.
Concerning the issue of asteroid strike risk in particular, it presently looks to me as though there are likely to be more cost-effective x-risk reduction efforts, largely because it seems as though people are already taking care of the asteroid strike issue. See Hellman's article on nuclear risk and this article from Pan-STARRS (HT wallowinmaya). I'm currently investigating the issue of x-risk precipitated by nuclear war and which organizations are working on nuclear nonproliferation.
Sure, but my comment is not about what GiveWell or anyone should do in general, but in the context of this article: Holden is engaging with x-risk and trying to clarify disagreement, so let's not worry if or when he should (and he has made many other comments about it over the years). I think it would be better to do so concretely, rather than claiming that vague abstract principles lead to unspecified disagreements with unnamed people. I think he would better convey the principles by applying them. I'm not asking for 300 hours of asteroid research, just as much time as it took to write this article. I could be wrong, but I think a very sloppy treatment of asteroids would be useful.
The article has relevance to thinking about effective philanthropy independently of whether one is considering x-risk reduction charities. I doubt that it was written exclusively with x-risk in mind.
I can't speak for Holden here but I would guess that to the extent that he wrote the article with x-risk in mind, he did so to present a detailed account of an important relevant point which he can refer to in the future so as to streamline subsequent discussions without sacrificing detail and clarity.
So he could have written a concrete account of the disagreement with Deworm the World. The only concrete section was on BeerAdvocate and that was the only useful section. Pointing to the other sections in the future is a sacrifice of detail and clarity.