In summary, you could say that I'm in this field less because of what you could do with a quantum computer, than because of what the possibility of quantum computers already does to our conception of the world. Either practical quantum computers can be built, and the limits of the knowable are not what we thought they are; or they can't be built, and the principles of quantum mechanics themselves need revision; or there's a yet-undreamt method to simulate quantum mechanics efficiently using a conventional computer. All three of these possibilities sound like crackpot speculations, but at least one of them is right!
Ideally, you put yourself in a scenario where verifying any possibility has a huge payoff.
Are you classifying 10% as a Pascal-level probability? How big does a probability have to get before you don't think Pascal-type considerations apply to it?
Are you suggesting that if there was (for example) a ten percent probability of an asteroid hitting the Earth in 2025, we should devote fewer resources to asteroid prediction/deflection than simple expected utility calculations would predict?
I don't think it counts as "Pascalian" until it starts to scrape below the threshold of probabilities you can meaningfully assert about propositions. If we were basically assured of a bright astronomical future so long as person X doesn't win the lottery, I wouldn't say that worrying that X might win the lottery was a Pascalian risk.
I'm usually fine with dropping a one-time probability of 0.1% from my calculations. 10% is much too high to drop from a major strategic calculation, but even so I'd be uncomfortable building my life around one. If this were a very well-defined number, as in the asteroid calculation, then it would be more tempting to build a big reference class of risks like that one and work on stopping them collectively. If an asteroid were genuinely en route, large enough to wipe out humanity, possibly stoppable, and nobody was doing anything about this 10% probability, I would still be working on FAI but I would be screaming pretty loudly about the asteroid on the side. If the asteroid is just going to wipe out a country, I'll make sure I'm not in that country and then keep working on x-risk.
What probability are you assigning to cryonics working that makes you think it's a good idea? I was under the impression that the standard LW argument for signing up was (tiny probability of success)*(monumental heap of utility if it works)=(a good investment). If that's not your argument, what is?
I was under the impression that the standard LW argument for signing up was (tiny probability of success)*(monumental heap of utility if it works)=(a good investment). If that's not your argument, what is?
The standard LW argument is that cryonics has a non-tiny probability of success. I did my own estimate, and roughly speaking, P(success) is at least P(A)P(B|A)P(C|A,B)P(D|A,B,C), where
And my honest estimates were, roughly, P(A) > .95, P(B|A) > .8, P(C|A,B) > .3, and P(D|A,B,C) > .2, giving an overall lower-bound estimate of about 5% (with a lot of metauncertainty, obviously); then I tried to estimate how much waking up in the future would really be worth to me in terms of my current values comp...
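For concreteness, here is a minimal sketch of the arithmetic behind that lower bound, using only the point estimates stated above (the labels are just the A-D from the comment; nothing beyond those numbers is assumed):

```python
# Conjunctive lower bound for P(success), using the point estimates above.
# Each factor is a conditional probability: P(A), P(B|A), P(C|A,B), P(D|A,B,C).
factors = {
    "P(A)": 0.95,
    "P(B|A)": 0.80,
    "P(C|A,B)": 0.30,
    "P(D|A,B,C)": 0.20,
}

p_success = 1.0
for name, p in factors.items():
    p_success *= p

print(f"Lower-bound estimate: {p_success:.3f}")  # ~0.046, i.e. roughly 5%
```

Note that every additional conjunct (such as the E proposed in the reply below) scales the product down multiplicatively, which is exactly why the later comments insist that disjunctions be listed alongside the conjunctions.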
E (should go between A and B given your chronological ordering scheme): You die in such a way that high-quality vitrification/plastination is possible. (This variable gets overlooked way too frequently in these calculations).
Expanding conjunctive probabilities without expanding disjunctive probabilities is another classic form of one-sided rationality. If I wanted to make cryonics look more probable than this, I would individually list out many different things that could go right.
Do you mean as an alternate to D that, say, a new cryo provider takes over the abandoned preserved heads before they thaw?
Sure. That happened already once in history (though there was, even earlier, a loss-thaw). It's why all modern cryo organizations are very strict about demanding advance payment, despite their compassionate hearts screaming at them not to let their friends die because of mere money. Sucks to be them, but they've got no choice.
Or as an alternate to C, that even though the cost is high, they go ahead and do it anyway?
Yep. I'd think FAI scenarios would tend to yield that.
Basically I always sigh sadly when somebody's discussing a future possibility and they throw up some random conjunction of conditional probabilities, many steps of which are actually pretty darned high when I look at them, with no corresponding disjunctions listed. This is the sort of thinking that would've led Fermi to cleverly assign a probability way lower than 10% to having an impact, by the time he was done expanding all the clever steps of the form "And then we can actually persuade the military to pay attention to us..." If you're going to be silly about driving down all impact probabilities to something small via this sort of conjunctive cleverness, you'd better also be silly and multiply the resulting small probability by a large payoff, so you won't actually ignore all possible important issues.
My understanding of neurobiology (BS in biology, current Plant Biology grad student) leads me to believe that the mind is not stored strictly statically in relationships between neurons, but also in the subcellular states of several proteins. These states are unlikely to be preserved in time for cryopreservation. They probably will be disrupted by the freezing process even if a living brain were to be preserved.
I need to write my "You appear to be making an argument against the technical feasibility of cryonics as a comment on a blog post" blog post. I've already blogged all the pieces, but I need to write the one piece that ties it all together.
You can rephrase it as a small probability of revival vs a small probability of REALLY needing that money.
I suspect he's getting downvoted because he didn't answer the question, not even with "I don't think it has a low probability of success" or some other simple response.
But Fermi is also the one who said that nuclear energy was fifty years off in the unlikely event it could be done at all, two years (IIRC) before Fermi himself oversaw the construction of the first nuclear pile.
For something in the same(-ish) reference class where the pessimists turned out to be right, commercially viable power generation from nuclear fusion has been “30 years in the future” ever since the mid-20th century.
Everyone tells this story; I'd like to see a cite. Fusion advocates tell a different story: that fusion was always some large number of dollars away, but the dollars weren't there until relatively recently. Once the dollars arrived, a roadmap was set out and has AFAICT basically hit all its deadlines, with JET, ITER and next DEMO proceeding as planned.
Fusion advocates tell a different story: that fusion was always some large number of dollars away, but the dollars weren't there until relatively recently.
Could you link to them?
I didn't keep links when I read these things, so this is the result of a quick Google search for 'fusion "years away" "dollars away"':
The actual reason is mainly funding. People always use the "twenty/thirty/fifty years away" comment as an insult, a way of showing how fusion (or science in general) is unreliable. The reality is that those predictions were first made in the 1970s, in the wake of the Oil Crisis. What happened during the Oil Crisis? We freaked out (rightly so) and planned to allocate a huge amount of money towards fusion research. What happened after the Oil Crisis ended? That money disappeared. Essentially, scientists were promised X billions of dollars to make fusion work, and said they could do it in a couple decades. Then that money was taken away, and people expected them to stay on schedule. Of course, fusion power turned out to be a lot more complicated than we expected. But the real reason is we simply aren't paying for it. It's not "30 years away", it's more like $80 billion away. http://imgur.com/sjH5r
It is predictable that in retrospect I will think this was a waste of time compared to working on a trust criterion between a probability distribution and an improved probability distribution. (Apologies, this is a technical thingy I'm currently working on which has no good English description.)
Cool. Are you or one of your minions likely to write it up in an informal technical way at some point in the not-excessively-distant future?
In the course of any workday, on the now very rare occasions I find myself thinking about such meta-level junk instead of the math at hand, I remind myself that it is a wasted motion - where a 'wasted motion' is any thought which will, in retrospect if the problem is in fact solved, not have contributed to having solved the problem.
If you rule out doing anything except X, then you won't get much out of accurately evaluating the plausibility of X. The point of considering likelihood of success is that there are always other options, including cutting one's losses. But to rule out all competing options requires some assessment of their plausibility relative to X.
A lot of meta-level fretting has the property of being one-sided - it's about a single option considered in isolation, not about two alternatives. If there's a concrete alternative that's supposed to help humanity more and has a decent chance of being actually correct vs. the sort of thing one dutifully ought to consider, I am usually totally happy to consider it. (You've seen me ask 'Can we have a concrete policy implication, please?' or 'Is there an option on the table for what we should be doing instead, if that's true?' at a number of discussions, right? This is often what my 'wasted motion' heuristic looks like when it fires.)
Quoted in 'The Making of the Atomic Bomb' by Richard Rhodes
Specifically, on page 280 of the 25th Anniversary Edition of the book.
Since I just posted to announce a meetup featuring Michael Vassar, I suppose I was primed to recall his take on the Fermi episode:
...1 in 10 is not such a bad estimate. The problem was not that Fermi was stupid or that he was bad at making estimates; he was probably much better at making estimates than almost everyone. The problem is that he was adhering to a set of rules for what you should be thinking about or talking about that is flat-out insane, frankly. A set of rules that says you shouldn't think about anything until you're ready to do experiments
Because ordinary matter is stable, and the Earth (and, for more anthropically stable evidence, the other planets) hadn't gone up in a nuclear chain reaction already?
Without using hindsight, one might presume that a universe in which nuclear chain reactions were possible would be one in which they happened to ordinary matter under normal conditions, or else only to totally unstable elements, not one in which they barely worked in highly concentrated forms of particular not-very-radioactive isotopes. This also explains his presumption that even if it worked, it would be highly impractical: given the orders of magnitude of uncertainty, "chain reactions don't naturally occur but they're possible to engineer on practical scales" seemed to be represented by only a narrow band of the possible parameters.
I admit that I don't know what evidence Fermi did and didn't have at the time, but I'd be surprised if Szilard's conclusions were as straightforward an implication of current knowledge as nanotech seems to be of today's current knowledge.
Strictly speaking, chain reactions do naturally occur; they're just so rare that we never found one until decades after we knew exactly what we were looking for, so Fermi certainly didn't have that evidence available.
Also, although I like your argument... wouldn't it apply as well to fire as it does to fission? In fact we do have a world filled with material that doesn't burn, material that oxidizes so rapidly that we never see the unoxidized chemical in nature, and material that burns only when concentrated enough to make an ignition self-sustaining. If forests and grasslands were as rare as uranium, would we have been justified in asserting that wildfires are likely impossible?
One reason why neither your argument nor my analogy turned out to be correct: even if one material is out of a narrow band of possible parameters, there are many other materials that could be in it. If our atmosphere were low-oxygen enough to make wood noncombustible, we might see more plants safely accumulating more volatile tissues instead. If other laws of physics made uranium too stable to use in technology, perhaps in that universe fermium would no longer be too unstable to survive in nature.
Consider also the nature of the first heap: Purified uranium and a graphite moderator in such large quantities that the neutron multiplication factor was driven just over one. Elements which were less stable than uranium decayed earlier in Earth's history; elements more stable than this would not be suitable for fission. But the heap produced plutonium by its internal reactions, which could be purified chemically and then fizzed. All this was a difficult condition to obtain, but predictable that human intelligence would seek out such points in possibility-space selectively and create them - that humans would create exotic intermediate conditions not existing in nature, by which the remaining sorts of materials would fizz for the first time, and that such conditions indeed might be expected to exist, because among some of the materials not eliminated by 5 billion years, there would be some unstable enough to decay in 50 billion years, and these would be just-barely-non-fizzing and could be pushed along a little further by human intervention, with a wide space of possibilities for which elements you could try. Or to then simplify this conclusion: "Of course it wouldn't exi...
because among some of the materials not eliminated by 5 billion years, there would be some unstable enough to decay in 50 billion years, and these would be just-barely-non-fizzing and could be pushed along a little further by human intervention
Except there aren't any that are not eliminated by, say, 10 billion years. And even 40 million years eliminates everything you can make a nuke out of except U-235. This is because, besides fizzling, unstable nuclei undergo this highly asymmetric spontaneous fission known as alpha decay.
I spot two holes.
First the elephant in the living room: The sun.
Matter usually ends up as a fusion-powered, flaming hell. (If you look really closely it is not all like that; there are scattered little lumps in orbit, such as the Earth and Mars.)
Second, a world view with a free parameter, adjusted to explain away vulcanism.
Before the discovery of radioactivity, the source of the Earth's internal heat was a puzzle. Kelvin had calculated that the heat from Earth's gravitational collapse, from dispersed matter to planet, was nowhere near enough to keep the Earth's internal fires going for the timescales which geologists were arguing for.
Enter radioactivity. But nobody actually knows the internal composition of the Earth. The amount of radioactive material is a free parameter. You know how much heat you need and you infer the amount of thorium and uranium that "must" be there. If there is extra heat due to chain reactions you just revise the estimate downwards to suit.
Sticking to the theme of being less wrong, how does one see the elephant in the room? How does one avoid missing the existence of spontaneous nuclear fusion on a sunny day? Pass.
The vulcanism point is more promisi...
A clever argument!
I'm correcting a potential factual error:
They had not been demonstrated experimentally, to be sure; but they were still the default projection from what was already known.
What I am guessing happened (you're welcome to research the topic): first you can learn that uranium can be fissioned by neutrons (which you make, if I recall correctly, by irradiating lithium with alpha particles). Then, you may learn that fission produces neutrons, because, it so happens, you don't just see all of that in a microscope; you see particle tracks in photographic emulsion or a cloud chamber or the like, and neutrons, being neutral, are hard to detect. (edit: And this is how I read the quote, anyway, on the first reading. I just parse it as low probability of neutrons, high probability of chain reaction if there are enough neutrons.)
So at first you do not know if fission produces neutrons without very precise and difficult analysis of the conservation of momentum or a big enough experiment to actually be able to count them, or something likewise clever and subtle. To think about it, chronologically, you may happen to first acquire weak evidence that fission does not produce p...
Can someone point to MIRI's estimates (with justifications) of various x-risks and the odds of mitigating them? Just wondering how, in MIRI's view, the FAI work stacks up against other disaster prevention efforts. I can't seem to find this information on their site.
Enrico Fermi said:
Well… there is the remote possibility that neutrons may be emitted in the fission of uranium
and then of course perhaps a chain reaction can be made.
The way I interpret it, he gave a remote possibility to enough neutrons being emitted in the fission of uranium (I guess from the tendency of other things to happen to excess neutrons in the nuclei, such as beta decay), and high probability ("of course") to the chain reaction conditional on the above.
...I haven't written enough about what I've begun thinking of as 'heroic epistemo
From your reference:
...Fermi was not misleading Szilard. It was easy to estimate the explosive force of a quantity of uranium, as Fermi would do standing at his office window overlooking Manhattan, if fission proceeded automatically from mere assembly of the material; even journalists had managed that simple calculation. But such obviously was not the case for uranium in its natural form, or the substance would long ago have ceased to exist on earth. However energetically interesting a reaction, fission by itself was merely a laboratory curiosity. Only if it released secondary neutrons, and those in sufficient quantity to initiate and sustain a chain reaction, would it serve for anything more. "Nothing known then," writes Herbert Anderson, Fermi's young partner in experiment, "guaranteed the emission of neutrons. Neutron emission had to be observed experimentally and measured quantitatively." No such work had yet been done. It was, in fact, the new work Fermi had proposed to Anderson immediately upon returning from Washington. Which meant to Fermi that talk of developing fission into a weapon of war was absurdly premature.
Many years later Szilard succinctly summe
But there ought to be some unstable elements that hadn't fizzed by themselves in natural aggregations and purities, and many such, and these might be manipulated by humans. If something doesn't happen naturally, are you in a situation where you're likely to be learning about a randomly placed lower bound that's probably randomly far above you, or in a case where you're learning about a nearby lower bound that probably has some things right above it?
This doesn't actually work...
There are only 3 isotopes to choose from: Th-232, U-238, U-235. Evidence that fission occurs probably came from U-238 being fissioned by fast neutrons (or could just as well have). You can't make a bomb out of U-238, though, because it doesn't get fissioned by slow neutrons, and neutrons slow down quite rapidly, before they fission it enough. You need a nucleus so unstable that it fissions when it captures a neutron. It must also fission immediately, not with a delay (not by the mechanism where it captures the neutron and transmutes into something unstable that fissions later; and because neutrons do not leave tracks, you don't immediately know that this is not what is going on).
There's prec...
And finally, I once again state that I abjure, refute, and disclaim all forms of Pascalian reasoning and multiplying tiny probabilities by large impacts when it comes to existential risk.
So is this a bad reason to give $100 to MIRI:
"MIRI reduces existential risks by a non-tiny probability. My contribution of $100 would increase the chance of MIRI's success, however, by only a tiny probability. Still, multiplying this tiny probability increase by the good that would occur if my $100 did end up making the difference justifies my giving $100 to MIRI."
On a very large scale, if you think FAI stands a serious chance of saving the world, then humanity should dump a bunch of effort into it, and if nobody's dumping effort into it then you should dump more effort than currently into it. Calculations of marginal impact in POKO/dollar (probability of an OK outcome per dollar) are sensible for comparing two x-risk mitigation efforts in demand of money, but in this case each marginal added dollar is rightly going to account for a very tiny slice of probability, and this is not Pascal's Wager. Large efforts with a success-or-failure criterion are rightly, justly, and unavoidably going to end up with small marginal probabilities per added unit effort. It would only be Pascal's Wager if the whole route-to-humanity-being-OK were assigned a tiny probability, and then a large payoff used to shut down further discussion of whether the next unit of effort should go there or to a different x-risk.
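As a rough illustrative sketch of the marginal-impact comparison described above (every number below is a hypothetical placeholder, not an estimate anyone has endorsed), the relevant quantity is the marginal change in the probability of an OK outcome per added dollar, not the total probability of the whole project:

```python
# Hypothetical comparison of two x-risk mitigation efforts by marginal impact.
# delta_p_per_dollar: assumed increase in P(OK outcome) from one added dollar.
# These figures are illustrative placeholders only.
projects = {
    "project_1": 3e-12,  # marginal P(OK outcome) per dollar (hypothetical)
    "project_2": 1e-12,  # (hypothetical)
}

donation = 100.0  # dollars

for name, delta_p_per_dollar in projects.items():
    marginal_delta_p = delta_p_per_dollar * donation
    print(f"{name}: marginal increase in P(OK outcome) ~ {marginal_delta_p:.2e}")

# The comparison between projects is made on these tiny marginal slices.
# That the slices are tiny does not make this Pascal's Wager, because the
# whole route-to-an-OK-outcome is not itself assigned a tiny probability.
```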
For so long as I can remember, I have rejected Pascal's Wager in all its forms on sheerly practical grounds: anyone who tries to plan out their life by chasing a 1 in 10,000 chance of a huge payoff is almost certainly doomed in practice.
Almost certainly doomed, yes. You might even say doomed 9,999 out of 10,000 times.
If I have pneumonia and the doctor tells me that there is a remote possibility that I might die, and it's ten percent, I get excited about it.
Did “excited” mean something different back then? (If so, I may have misinterpreted a certain line in “All Along the Watchtower” by Bob Dylan.)
I retain an eight-volume dictionary from 1899 to answer this kind of question. Meaning four is:
To arouse the emotions of; agitate or perturb mentally; move: as, he was greatly excited by the news.
One real-life example is
The news of the fall of Calcutta reached Madras, and excited the fiercest and bitterest resentment
Today "exciting" is often contrasted with "boring" and has a positive connotation. (eg "We hoped the football game would be exciting and were disappointed when it was boring.") My old dictionary seems evenly balanced with "excited" being bad and good by turns.
Laughing at Fermi for 10% is uncharitable.
It sounds like his heuristic for deciding what avenue of research to follow rejected chain reactions. If, as Eliezer claims, >10% should have been obvious to Fermi if he really thought about it, then we can conclude that he didn't feel a need to think about it, for whatever reason.
I do wish senior/brilliant thinkers wouldn't discourage anyone based on their take of something they haven't really thought about, but that probably doesn't stop the really bold upstarts.
I'd like to understand better why really bright ...
EY: I don't multiply tiny probabilities by huge impacts. I also don't get tiny probabilities by putting myself into inescapable reference classes, for this is the sort of reasoning that would screw over planets that actually were in trouble if everyone thought like that.
But isn't the latter exactly what you are doing with Pascal's Wager? Underestimating the probability of God's existence so that you may retreat back to 'tiny probability'?
Isn't Fermi the guy who insisted that a nuclear explosion could set the atmosphere on fire in a massive chain reaction?
I'm having trouble making sense of the quoted section. It makes a lot more sense if that's what they're talking about, especially the "if it means that we may die of it," rather than the possibility of a nuclear reaction in general.
I always wondered if Szilard's slightly outcast status (if my recollection of Rhodes's book is correct) helped him see things establishment scientists ignored.
"Found"? Didn't you write that post, Dmytry? Why wouldn't you just say so?
Oh wait, you're that other person with a bunch of different monikers: metaphysicist, srdiamond, etc. Sorry.
For so long as I can remember, I have rejected Pascal's Wager in all its forms on sheerly practical grounds: anyone who tries to plan out their life by chasing a 1 in 10,000 chance of a huge payoff is almost certainly doomed in practice. This kind of clever reasoning never pays off in real life...
...unless you have also underestimated the allegedly tiny chance of the large impact.
For example. At one critical junction in history, Leo Szilard, the first physicist to see the possibility of fission chain reactions and hence practical nuclear weapons, was trying to persuade Enrico Fermi to take the issue seriously, in the company of a more prestigious friend, Isidor Rabi:
I said to him: "Did you talk to Fermi?" Rabi said, "Yes, I did." I said, "What did Fermi say?" Rabi said, "Fermi said 'Nuts!'" So I said, "Why did he say 'Nuts!'?" and Rabi said, "Well, I don't know, but he is in and we can ask him." So we went over to Fermi's office, and Rabi said to Fermi, "Look, Fermi, I told you what Szilard thought and you said 'Nuts!' and Szilard wants to know why you said 'Nuts!'" So Fermi said, "Well… there is the remote possibility that neutrons may be emitted in the fission of uranium and then of course perhaps a chain reaction can be made." Rabi said, "What do you mean by 'remote possibility'?" and Fermi said, "Well, ten per cent." Rabi said, "Ten per cent is not a remote possibility if it means that we may die of it. If I have pneumonia and the doctor tells me that there is a remote possibility that I might die, and it's ten percent, I get excited about it." (Quoted in 'The Making of the Atomic Bomb' by Richard Rhodes.)
This might look at first like a successful application of "multiplying a low probability by a high impact", but I would reject that this was really going on. Where the heck did Fermi get that 10% figure for his 'remote possibility', especially considering that fission chain reactions did in fact turn out to be possible? If some sort of reasoning had told us that a fission chain reaction was improbable, then after it turned out to be reality, good procedure would have us go back and check our reasoning to see what went wrong, and figure out how to adjust our way of thinking so as to not make the same mistake again. So far as I know, there was no physical reason whatsoever to think a fission chain reaction was only a ten percent probability. They had not been demonstrated experimentally, to be sure; but they were still the default projection from what was already known. If you'd been told in the 1930s that fission chain reactions were impossible, you would've been told something that implied new physical facts unknown to current science (and indeed, no such facts existed). After reading enough historical instances of famous scientists dismissing things as impossible when there was no physical logic to say that it was even improbable, one cynically suspects that some prestigious scientists perhaps came to conceive of themselves as senior people who ought to be skeptical about things, and that Fermi was just reacting emotionally. The lesson I draw from this historical case is not that it's a good idea to go around multiplying ten percent probabilities by large impacts, but that Fermi should not have pulled out a number as low as ten percent.
Having seen enough conversations involving made-up probabilities to become cynical, I also strongly suspect that if Fermi had foreseen how Rabi would reply, Fermi would've said "One percent". If Fermi had expected Rabi to say "One percent is not small if..." then Fermi would've said "One in ten thousand" or "Too small to consider" - whatever he thought would get him off the hook. Perhaps I am being too unkind to Fermi, who was a famously great estimator; Fermi may well have performed some sort of lawful probability estimate on the spot. But Fermi is also the one who said that nuclear energy was fifty years off in the unlikely event it could be done at all, two years (IIRC) before Fermi himself oversaw the construction of the first nuclear pile. Where did Fermi get that fifty-year number from? This sort of thing does make me more likely to believe that Fermi, in playing the role of the solemn doubter, was just Making Things Up; and this is no less a sin when you make up skeptical things. And if this cynicism is right, then we cannot learn the lesson that it is wise to multiply small probabilities by large impacts because this is what saved Fermi - if Fermi had known the rule, if he had seen it coming, he would have just Made Up an even smaller probability to get himself off the hook. It would have been so very easy and convenient to say, "One in ten thousand, there's no experimental proof and most ideas like that are wrong! Think of all the conjunctive probabilities that have to be true before we actually get nuclear weapons and our own efforts actually made a difference in that!" followed shortly by "But it's not practical to be worried about such tiny probabilities!" Or maybe Fermi would've known better, but even so I have never been a fan of trying to have two mistakes cancel each other out.
I mention all this because it is dangerous to be half a rationalist, and only stop making one of the two mistakes. If you are going to reject impractical 'clever arguments' that would never work in real life, and henceforth not try to multiply tiny probabilities by huge payoffs, then you had also better reject all the clever arguments that would've led Fermi or Szilard to assign probabilities much smaller than ten percent. (Listing out a group of conjunctive probabilities leading up to taking an important action, and not listing any disjunctive probabilities, is one widely popular way of driving down the apparent probability of just about anything.) Or if you would've tried to put fission chain reactions into a reference class of 'amazing new energy sources' and then assigned it a tiny probability, or put Szilard into the reference class of 'people who think the fate of the world depends on them', or pontificated about the lack of any positive experimental evidence proving that a chain reaction was possible, blah blah blah etcetera - then your error here can perhaps be compensated for by the opposite error of then trying to multiply the resulting tiny probability by a large impact. I don't like making clever mistakes that cancel each other out - I consider that idea to also be clever - but making clever mistakes that don't cancel out is worse.
On the other hand, if you want a general heuristic that could've led Fermi to do better, I would suggest reasoning that previous-historical experimental proof of a chain reaction would not be strongly expected even in worlds where it was possible, and that to discover a chain reaction to be impossible would imply learning some new fact of physical science which was not already known. And this is not just 20-20 hindsight; Szilard and Rabi saw the logic in advance of the fact, not just afterward - though not in those exact terms; they just saw the physical logic, and then didn't adjust it downward for 'absurdity' or with more complicated rationalizations. But then if you are going to take this sort of reasoning at face value, without adjusting it downward, then it's probably not a good idea to panic every time you assign a 0.01% probability to something big - you'll probably run into dozens of things like that, at least, and panicking over them would leave no room to wait until you found something whose face-value probability was large.
I don't believe in multiplying tiny probabilities by huge impacts. But I also believe that Fermi could have done better than saying ten percent, and that it wasn't just random luck mixed with overconfidence that led Szilard and Rabi to assign higher probabilities than that. Or to name a modern issue which is still open, Michael Shermer should not have dismissed the possibility of molecular nanotechnology, and Eric Drexler will not have been randomly lucky when it turns out to work: taking current physical models at face value implies that molecular nanotechnology ought to work, and if it doesn't work we've learned some new fact unknown to present physics, etcetera. Taking the physical logic at face value is fine, and there's no need to adjust it downward for any particular reason; if you say that Eric Drexler should 'adjust' this probability downward for whatever reason, then I think you're giving him rules that predictably give him the wrong answer. Sometimes surface appearances are misleading, but most of the time they're not.
A key test I apply to any supposed rule of reasoning about high-impact scenarios is, "Does this rule screw over the planet if Reality actually hands us a high-impact scenario?" and if the answer is yes, I discard it and move on. The point of rationality is to figure out which world we actually live in and adapt accordingly, not to rule out certain sorts of worlds in advance.
There's a doubly-clever form of the argument wherein everyone in a plausibly high-impact position modestly attributes only a tiny potential possibility that their face-value view of the world is sane, and then they multiply this tiny probability by the large impact, and so they act anyway and on average worlds in trouble are saved. I don't think this works in real life - I don't think I would have wanted Leo Szilard to think like that. I think that if your brain really actually thinks that fission chain reactions have only a tiny probability of being important, you will go off and try to invent better refrigerators or something else that might make you money. And if your brain does not really feel that fission chain reactions have a tiny probability, then your beliefs and aliefs are out of sync and that is not something I want to see in people trying to handle the delicate issue of nuclear weapons. But in any case, I deny the original premise: I do not think the world's niches for heroism must be populated by heroes who are incapable in principle of reasonably distinguishing themselves from a population of crackpots, all of whom have no choice but to continue on the tiny off-chance that they are not crackpots.
I haven't written enough about what I've begun thinking of as 'heroic epistemology' - why, how can you possibly be so overconfident as to dare even try to have a huge positive impact when most people in that reference class blah blah blah - but on reflection, it seems to me that an awful lot of my answer boils down to not trying to be clever about it. I don't multiply tiny probabilities by huge impacts. I also don't get tiny probabilities by putting myself into inescapable reference classes, for this is the sort of reasoning that would screw over planets that actually were in trouble if everyone thought like that. In the course of any workday, on the now very rare occasions I find myself thinking about such meta-level junk instead of the math at hand, I remind myself that it is a wasted motion - where a 'wasted motion' is any thought which will, in retrospect if the problem is in fact solved, not have contributed to having solved the problem. If someday Friendly AI is built, will it have been terribly important that someone have spent a month fretting about what reference class they're in? No. Will it, in retrospect, have been an important step along the pathway to understanding stable self-modification, if we spend time trying to solve the Lobian obstacle? Possibly. So one of these cognitive avenues is predictably a wasted motion in retrospect, and one of them is not. The same would hold if I spent a lot of time trying to convince myself that I was allowed to believe that I could affect anything large, or any other form of angsting about meta. It is predictable that in retrospect I will think this was a waste of time compared to working on a trust criterion between a probability distribution and an improved probability distribution. (Apologies, this is a technical thingy I'm currently working on which has no good English description.)
But if you must apply clever adjustments to things, then for Belldandy's sake don't be one-sidedly clever and have all your cleverness be on the side of arguments for inaction. I think you're better off without all the complicated fretting - but you're definitely not better off eliminating only half of it.
And finally, I once again state that I abjure, refute, and disclaim all forms of Pascalian reasoning and multiplying tiny probabilities by large impacts when it comes to existential risk. We live on a planet with upcoming prospects of, among other things, human intelligence enhancement, molecular nanotechnology, sufficiently advanced biotechnology, brain-computer interfaces, and of course Artificial Intelligence in several guises. If something has only a tiny chance of impacting the fate of the world, there should be something with a larger probability of an equally huge impact to worry about instead. You cannot justifiably trade off tiny probabilities of x-risk improvement against efforts that do not effectuate a happy intergalactic civilization, but there is nonetheless no need to go on tracking tiny probabilities when you'd expect there to be medium-sized probabilities of x-risk reduction. Nonetheless I try to avoid coming up with clever reasons to do stupid things, and one example of a stupid thing would be not working on Friendly AI when it's in blatant need of work. Elaborate complicated reasoning which says we should let the Friendly AI issue just stay on fire and burn merrily away, well, any complicated reasoning which returns an output this silly is automatically suspect.
If, however, you are unlucky enough to have been cleverly argued into obeying rules that make it a priori unreachable-in-practice for anyone to end up in an epistemic state where they try to do something about a planet which appears to be on fire - so that there are no more plausible x-risk reduction efforts to fall back on, because you're adjusting all the high-impact probabilities downward from what the surface state of the world suggests...
Well, that would only be a good idea if Reality were not allowed to hand you a planet that was in fact on fire. Or if, given a planet on fire, Reality was prohibited from handing you a chance to put it out. There is no reason to think that Reality must a priori obey such a constraint.
EDIT: To clarify, "Don't multiply tiny probabilities by large impacts" is something that I apply to large-scale projects and lines of historical probability. On a very large scale, if you think FAI stands a serious chance of saving the world, then humanity should dump a bunch of effort into it, and if nobody's dumping effort into it then you should dump more effort than currently into it. On a smaller scale, to compare two x-risk mitigation projects in demand of money, you need to estimate something about marginal impacts of the next added effort (where the common currency of utilons should probably not be lives saved, but "probability of an ok outcome", i.e., the probability of ending up with a happy intergalactic civilization). In this case the average marginal added dollar can only account for a very tiny slice of probability, but this is not Pascal's Wager. Large efforts with a success-or-failure criterion are rightly, justly, and unavoidably going to end up with small marginally increased probabilities of success per added small unit of effort. It would only be Pascal's Wager if the whole route-to-an-OK-outcome were assigned a tiny probability, and then a large payoff used to shut down further discussion of whether the next unit of effort should go there or to a different x-risk.