For as long as I can remember, I have rejected Pascal's Wager in all its forms on sheerly practical grounds: anyone who tries to plan out their life by chasing a 1 in 10,000 chance of a huge payoff is almost certainly doomed in practice. This kind of clever reasoning never pays off in real life...
...unless you have also underestimated the allegedly tiny chance of the large impact.
For example: at one critical juncture in history, Leo Szilard, the first physicist to see the possibility of fission chain reactions and hence practical nuclear weapons, was trying to persuade Enrico Fermi to take the issue seriously, in the company of a more prestigious friend, Isidor Rabi:
I said to him: "Did you talk to Fermi?" Rabi said, "Yes, I did." I said, "What did Fermi say?" Rabi said, "Fermi said 'Nuts!'" So I said, "Why did he say 'Nuts!'?" and Rabi said, "Well, I don't know, but he is in and we can ask him." So we went over to Fermi's office, and Rabi said to Fermi, "Look, Fermi, I told you what Szilard thought and you said 'Nuts!' and Szilard wants to know why you said 'Nuts!'" So Fermi said, "Well… there is the remote possibility that neutrons may be emitted in the fission of uranium and then of course perhaps a chain reaction can be made." Rabi said, "What do you mean by 'remote possibility'?" and Fermi said, "Well, ten per cent." Rabi said, "Ten per cent is not a remote possibility if it means that we may die of it. If I have pneumonia and the doctor tells me that there is a remote possibility that I might die, and it's ten percent, I get excited about it." (Quoted in 'The Making of the Atomic Bomb' by Richard Rhodes.)
This might look at first like a successful application of "multiplying a low probability by a high impact", but I would deny that this is really what was going on. Where the heck did Fermi get that 10% figure for his 'remote possibility', especially considering that fission chain reactions did in fact turn out to be possible? If some sort of reasoning had told us that a fission chain reaction was improbable, then after it turned out to be reality, good procedure would have us go back and check our reasoning to see what went wrong, and figure out how to adjust our way of thinking so as to not make the same mistake again. So far as I know, there was no physical reason whatsoever to think a fission chain reaction had only a ten percent probability. Chain reactions had not been demonstrated experimentally, to be sure; but they were still the default projection from what was already known. If you'd been told in the 1930s that fission chain reactions were impossible, the claim would have implied new physical facts unknown to the science of the day (and indeed, no such facts existed). After reading enough historical instances of famous scientists dismissing things as impossible when there was no physical logic to say they were even improbable, one cynically suspects that some prestigious scientists perhaps came to conceive of themselves as senior people who ought to be skeptical about things, and that Fermi was just reacting emotionally. The lesson I draw from this historical case is not that it's a good idea to go around multiplying ten percent probabilities by large impacts, but that Fermi should not have pulled out a number as low as ten percent.
Having seen enough conversations involving made-up probabilities to become cynical, I also strongly suspect that if Fermi had foreseen how Rabi would reply, Fermi would've said "One percent". If Fermi had expected Rabi to say "One percent is not small if..." then Fermi would've said "One in ten thousand" or "Too small to consider" - whatever he thought would get him off the hook. Perhaps I am being too unkind to Fermi, who was a famously great estimator; Fermi may well have performed some sort of lawful probability estimate on the spot. But Fermi is also the one who said that nuclear energy was fifty years off in the unlikely event it could be done at all, two years (IIRC) before he himself oversaw the construction of the first nuclear pile. Where did Fermi get that fifty-year number from? This sort of thing does make me more likely to believe that Fermi, in playing the role of the solemn doubter, was just Making Things Up; and this is no less a sin when you make up skeptical things. And if this cynicism is right, then we cannot learn the lesson that it is wise to multiply small probabilities by large impacts because this is what saved Fermi - if Fermi had known the rule, if he had seen it coming, he would have just Made Up an even smaller probability to get himself off the hook. It would have been so very easy and convenient to say, "One in ten thousand; there's no experimental proof, and most ideas like that are wrong! Think of all the conjunctive probabilities that have to be true before we actually get nuclear weapons and before our own efforts actually make a difference!" followed shortly by "But it's not practical to be worried about such tiny probabilities!" Or maybe Fermi would've known better, but even so, I have never been a fan of trying to have two mistakes cancel each other out.
I mention all this because it is dangerous to be half a rationalist, and only stop making one of the two mistakes. If you are going to reject impractical 'clever arguments' that would never work in real life, and henceforth not try to multiply tiny probabilities by huge payoffs, then you had also better reject all the clever arguments that would've led Fermi or Szilard to assign probabilities much smaller than ten percent. (Listing out a group of conjunctive probabilities leading up to taking an important action, and not listing any disjunctive probabilities, is one widely popular way of driving down the apparent probability of just about anything.) Or if you would've tried to put fission chain reactions into a reference class of 'amazing new energy sources' and then assigned them a tiny probability, or put Szilard into the reference class of 'people who think the fate of the world depends on them', or pontificated about the lack of any positive experimental evidence proving that a chain reaction was possible, blah blah blah etcetera - then your error here can perhaps be compensated for by the opposite error of trying to multiply the resulting tiny probability by a large impact. I don't like making clever mistakes that cancel each other out - I consider that idea to also be clever - but making clever mistakes that don't cancel out is worse.
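To make the parenthetical arithmetic concrete, here is a minimal sketch (in Python, with every probability invented purely for illustration, not drawn from anyone's actual estimates) of how listing only conjunctive requirements drives an estimate down, while admitting even a few disjunctive routes pulls it back up:

```python
# Illustrative arithmetic only: every probability below is made up.

# Conjunctive framing: name ten steps that must ALL succeed. Even if each
# step looks 90% likely on its own, the product comes out looking small.
conjunctive_steps = [0.9] * 10
p_all_succeed = 1.0
for p in conjunctive_steps:
    p_all_succeed *= p
print(f"Ten 90% conjuncts multiply out to about {p_all_succeed:.2f}")  # ~0.35

# Disjunctive framing: the same outcome might be reached by any of several
# independent routes. The chance that at least one works is one minus the
# chance that every route fails.
disjunctive_routes = [0.2, 0.15, 0.1, 0.1]
p_all_fail = 1.0
for p in disjunctive_routes:
    p_all_fail *= (1.0 - p)
print(f"Four weak disjuncts still combine to about {1.0 - p_all_fail:.2f}")  # ~0.45

# Listing only the conjunctions, and none of the disjunctions, is how the
# apparent probability of almost anything can be argued down as far as you like.
```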
On the other hand, if you want a general heuristic that could've led Fermi to do better, I would suggest reasoning that prior historical experimental proof of a chain reaction would not be strongly expected even in worlds where it was possible, and that to discover a chain reaction to be impossible would imply learning some new fact of physical science which was not already known. And this is not just 20-20 hindsight; Szilard and Rabi saw the logic in advance of the fact, not just afterward - though not in those exact terms; they just saw the physical logic, and didn't adjust it downward for 'absurdity' or with more complicated rationalizations. But then if you are going to take this sort of reasoning at face value, without adjusting it downward, then it's probably not a good idea to panic every time you assign a 0.01% probability to something big - you'll probably run into dozens of things like that, at least, and panicking over all of them would leave you no room to wait for something whose face-value probability was large.
I don't believe in multiplying tiny probabilities by huge impacts. But I also believe that Fermi could have done better than saying ten percent, and that it wasn't just random luck mixed with overconfidence that led Szilard and Rabi to assign higher probabilities than that. Or to name a modern issue which is still open, Michael Shermer should not have dismissed the possibility of molecular nanotechnology, and Eric Drexler will not have been randomly lucky when it turns out to work: taking current physical models at face value implies that molecular nanotechnology ought to work, and if it doesn't work we've learned some new fact unknown to present physics, etcetera. Taking the physical logic at face value is fine, and there's no need to adjust it downward for any particular reason; if you say that Eric Drexler should 'adjust' this probability downward for whatever reason, then I think you're giving him rules that predictably give him the wrong answer. Sometimes surface appearances are misleading, but most of the time they're not.
A key test I apply to any supposed rule of reasoning about high-impact scenarios is, "Does this rule screw over the planet if Reality actually hands us a high-impact scenario?" and if the answer is yes, I discard it and move on. The point of rationality is to figure out which world we actually live in and adapt accordingly, not to rule out certain sorts of worlds in advance.
There's a doubly-clever form of the argument wherein everyone in a plausibly high-impact position modestly assigns only a tiny probability to their face-value view of the world being sane, then multiplies this tiny probability by the large impact, and so they act anyway, and on average worlds in trouble are saved. I don't think this works in real life - I don't think I would have wanted Leo Szilard to think like that. I think that if your brain really actually thinks that fission chain reactions have only a tiny probability of being important, you will go off and try to invent better refrigerators or something else that might make you money. And if your brain does not really feel that fission chain reactions have a tiny probability, then your beliefs and aliefs are out of sync, and that is not something I want to see in people trying to handle the delicate issue of nuclear weapons. But in any case, I deny the original premise: I do not think the world's niches for heroism must be populated by heroes who are incapable in principle of reasonably distinguishing themselves from a population of crackpots, all of whom have no choice but to continue on the tiny off-chance that they are not crackpots.
I haven't written enough about what I've begun thinking of as 'heroic epistemology' - why, how can you possibly be so overconfident as to dare even try to have a huge positive impact when most people in that reference class blah blah blah - but on reflection, it seems to me that an awful lot of my answer boils down to not trying to be clever about it. I don't multiply tiny probabilities by huge impacts. I also don't get tiny probabilities by putting myself into inescapable reference classes, for this is the sort of reasoning that would screw over planets that actually were in trouble if everyone thought like that. In the course of any workday, on the now very rare occasions I find myself thinking about such meta-level junk instead of the math at hand, I remind myself that it is a wasted motion - where a 'wasted motion' is any thought which will, in retrospect if the problem is in fact solved, not have contributed to having solved the problem. If someday Friendly AI is built, will it have been terribly important that someone spent a month fretting about what reference class they're in? No. Will it, in retrospect, have been an important step along the pathway to understanding stable self-modification, if we spend time trying to solve the Löbian obstacle? Possibly. So one of these cognitive avenues is predictably a wasted motion in retrospect, and one of them is not. The same would hold if I spent a lot of time trying to convince myself that I was allowed to believe that I could affect anything large, or engaged in any other form of angsting about meta. It is predictable that in retrospect I will think this was a waste of time compared to working on a trust criterion between a probability distribution and an improved probability distribution. (Apologies, this is a technical thingy I'm currently working on which has no good English description.)
But if you must apply clever adjustments to things, then for Belldandy's sake don't be one-sidedly clever and have all your cleverness be on the side of arguments for inaction. I think you're better off without all the complicated fretting - but you're definitely not better off eliminating only half of it.
And finally, I once again state that I abjure, refute, and disclaim all forms of Pascalian reasoning and multiplying tiny probabilities by large impacts when it comes to existential risk. We live on a planet with upcoming prospects of, among other things, human intelligence enhancement, molecular nanotechnology, sufficiently advanced biotechnology, brain-computer interfaces, and of course Artificial Intelligence in several guises. If something has only a tiny chance of impacting the fate of the world, there should be something with a larger probability of an equally huge impact to worry about instead. You cannot justifiably trade off tiny probabilities of x-risk improvement against efforts that do not effectuate a happy intergalactic civilization; but there is no need to go on tracking tiny probabilities when you'd expect there to be medium-sized probabilities of x-risk reduction. Nonetheless I try to avoid coming up with clever reasons to do stupid things, and one example of a stupid thing would be not working on Friendly AI when it's in blatant need of work. As for elaborate complicated reasoning which says we should let the Friendly AI issue just stay on fire and burn merrily away: any complicated reasoning which returns an output this silly is automatically suspect.
If, however, you are unlucky enough to have been cleverly argued into obeying rules that make it a priori unreachable-in-practice for anyone to end up in an epistemic state where they try to do something about a planet which appears to be on fire - so that there are no more plausible x-risk reduction efforts to fall back on, because you're adjusting all the high-impact probabilities downward from what the surface state of the world suggests...
Well, that would only be a good idea if Reality were not allowed to hand you a planet that was in fact on fire. Or if, given a planet on fire, Reality were prohibited from handing you a chance to put it out. There is no reason to think that Reality must a priori obey such a constraint.
EDIT: To clarify, "Don't multiply tiny probabilities by large impacts" is something that I apply to large-scale projects and lines of historical probability. On a very large scale, if you think FAI stands a serious chance of saving the world, then humanity should dump a bunch of effort into it, and if nobody's dumping effort into it then you should dump more effort than currently into it. On a smaller scale, to compare two x-risk mitigation projects in demand of money, you need to estimate something about the marginal impact of the next added unit of effort (where the common currency of utilons should probably not be lives saved, but "probability of an ok outcome", i.e., the probability of ending up with a happy intergalactic civilization). In this case the average marginal added dollar can only account for a very tiny slice of probability, but this is not Pascal's Wager. Large efforts with a success-or-failure criterion are rightly, justly, and unavoidably going to end up with small marginally increased probabilities of success per added small unit of effort. It would only be Pascal's Wager if the whole route-to-an-OK-outcome were assigned a tiny probability, and then a large payoff were used to shut down further discussion of whether the next unit of effort should go there or to a different x-risk.
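As an entirely hypothetical sketch of the kind of marginal comparison this means: the projects, curves, and dollar figures below are all invented for illustration, with "probability of an ok outcome" crudely modeled as a diminishing-returns function of total funding, and the quantity being compared is the change per next dollar rather than the size of the payoff.

```python
import math

# Hypothetical sketch of comparing two x-risk projects at the margin.
# All curves and numbers are invented; only the shape of the comparison matters.

def p_ok_project_a(funding):
    """Made-up diminishing-returns curve: approaches a 10% ceiling as funding grows."""
    return 0.10 * (1 - math.exp(-funding / 5e6))

def p_ok_project_b(funding):
    """Made-up curve: lower ceiling (3%), but steeper while still underfunded."""
    return 0.03 * (1 - math.exp(-funding / 5e5))

def marginal_per_dollar(p_ok, current_funding, delta=1000.0):
    """Finite-difference estimate of added probability of an ok outcome per added dollar."""
    return (p_ok(current_funding + delta) - p_ok(current_funding)) / delta

# Suppose project A already has $10M of funding and project B has $200k.
a = marginal_per_dollar(p_ok_project_a, 10e6)
b = marginal_per_dollar(p_ok_project_b, 2e5)
print(f"Project A: {a:.2e} probability per marginal dollar")
print(f"Project B: {b:.2e} probability per marginal dollar")

# Each marginal dollar buys only a tiny slice of probability in either case;
# that is unavoidable for any large success-or-failure effort, and it is not
# Pascal's Wager, because neither overall route was ever assigned a tiny probability.
```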
(Even if he had an elegant technical argument, that doesn't mean he would have been right. Heisenberg had a short elegant argument for why the uranium critical mass would be 1 ton, but it was actually on the order of tens of kilograms.)
For others' reference: this begins on page 280 of 'The Making of the Atomic Bomb'.