komponisto comments on Being Half-Rational About Pascal's Wager is Even Worse - Less Wrong

Post author: Eliezer_Yudkowsky 18 April 2013 05:20AM

Comment author: komponisto 18 April 2013 06:19:47AM 5 points

Since I just posted to announce a meetup featuring Michael Vassar, I suppose I was primed to recall his take on the Fermi episode:

...1 in 10 is not such a bad estimate. The problem was not that Fermi was stupid or that he was bad at making estimates; he was probably much better at making estimates than almost everyone. The problem is that he was adhering to a set of rules for what you should be thinking about or talking about that is flat-out insane, frankly. A set of rules that says you shouldn't think about anything until you're ready to do experiments with more-or-less established experimental techniques.

From this perspective -- which assumes that Fermi arrived at his estimate through an honest, non-motivated calculation -- what Fermi should have done was believe his own estimate, instead of applying the heuristic of "if it's not established, experimentally-tested science, it doesn't exist". Because a 10% probability of the scenario in question is indeed approximately 100%: that is, more than enough to take seriously.

Comment author: Eliezer_Yudkowsky 18 April 2013 06:23:56AM 2 points

Why assign a 90% probability to chain reactions being impossible or unfeasible? How should Fermi have known that, especially when it was false?

EDIT: Be careful with your arguments that Fermi should have assigned the false fact 'chain reactions are impossible' an even more extreme probability than 90%. You are training your brain to assign higher and more extreme probabilities to things that are false. You should be looking for potential heuristics that should have fired in the opposite direction. There's such a thing as overfitting, but there's also such a thing as being cleverly contrarian about reasons why nobody could possibly have figured out X and thus training your brain in the opposite direction of each example.

Comment author: private_messaging 18 April 2013 04:34:32PM 6 points

Because they didn't know whether fission produced enough prompt neutrons, which is clear from the quoted passage, and probably also because Fermi had estimated that there were on the order of 10 other propositions about the results of fission which he, if presented with them by an equally enthusiastic proponent, would have found comparably plausible. I'm thinking that in the alternate realities where fission does something other than produce a sufficient number of neutrons (about 3 on average), you'd assign a likewise high number to them by hindsight, with a sum greater than 1 (so stop calling it a probability already).

Comment author: orthonormal 18 April 2013 02:18:21PM 13 points

Because ordinary matter is stable, and the Earth (and, for more anthropically stable evidence, the other planets) hadn't gone up in a nuclear chain reaction already?

Without using hindsight, one might presume that a universe in which nuclear chain reactions were possible would be one in which they happened to ordinary matter under normal conditions, or else only to totally unstable elements, not one in which they barely worked in highly concentrated forms of particular not-very-radioactive isotopes. This also explains his presumption that even if it worked, it would be highly impractical: given the orders of magnitude of uncertainty, it seemed like "chain reactions don't naturally occur but they're possible to engineer on practical scales" would be represented by only a narrow band of the possible parameters.

I admit that I don't know what evidence Fermi did and didn't have at the time, but I'd be surprised if Szilard's conclusions were as straightforward an implication of then-current knowledge as nanotech seems to be of today's.

Comment author: roystgnr 18 April 2013 04:43:37PM 14 points

Strictly speaking, chain reactions do naturally occur; they're just so rare that we never found one until decades after we knew exactly what we were looking for, so Fermi certainly didn't have that evidence available.

Also, although I like your argument... wouldn't it apply as well to fire as it does to fission? In fact we do have a world filled with material that doesn't burn, material that oxidizes so rapidly that we never see the unoxidized chemical in nature, and material that burns only when concentrated enough to make an ignition self-sustaining. If forests and grasslands were as rare as uranium, would we have been justified in asserting that wildfires are likely impossible?

One reason why neither your argument nor my analogy turned out to be correct: even if one material is outside a narrow band of possible parameters, there are many other materials that could be in it. If our atmosphere were low-oxygen enough to make wood noncombustible, we might see more plants safely accumulating more volatile tissues instead. If other laws of physics made uranium too stable to use in technology, perhaps in that universe fermium would no longer be too unstable to survive in nature.

Comment author: orthonormal 18 April 2013 09:17:51PM 3 points

Good counter-analogy, and awesome Wikipedia article. Thanks!

Comment author: Eliezer_Yudkowsky 18 April 2013 05:37:21PM 6 points

Consider also the nature of the first heap: purified uranium and a graphite moderator in such large quantities that the neutron multiplication factor was driven just over one. Elements less stable than uranium decayed earlier in Earth's history; elements more stable than this would not be suitable for fission. But the heap produced plutonium by its internal reactions, which could be purified chemically and then fizzed. All this was a difficult condition to obtain, but it was predictable that human intelligence would seek out such points in possibility-space selectively and create them - that humans would create exotic intermediate conditions not existing in nature, by which the remaining sorts of materials would fizz for the first time. And such conditions might indeed be expected to exist, because among some of the materials not eliminated by 5 billion years, there would be some unstable enough to decay in 50 billion years, and these would be just-barely-non-fizzing and could be pushed along a little further by human intervention, with a wide space of possibilities for which elements you could try. Or to simplify this conclusion: "Of course it wouldn't exist in nature! Those bombs went off a long time ago; we'll have to build a slightly different sort! We're not restricted to bombs that grow on trees." By such reasoning, if you had attended to it, you might have correctly agreed with Szilard, and been correctly skeptical of Fermi's hypothetical counterargument.
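[The "multiplication factor just over one" point can be illustrated with a toy branching-process simulation. This is an editorial sketch, not anything from the thread: it assumes each neutron triggers a fission with probability k/ν, releasing ν ≈ 3 fresh neutrons, so the mean multiplication per generation is k.]

```python
import random

NU = 3  # neutrons released per fission, roughly the average quoted upthread

def next_generation(n, k, rng):
    # Each of n neutrons triggers a fission with probability k/NU,
    # releasing NU fresh neutrons, so the mean multiplication is k.
    fissions = sum(1 for _ in range(n) if rng.random() < k / NU)
    return fissions * NU

def final_population(k, start=10_000, generations=30, seed=0):
    rng = random.Random(seed)
    n = start
    for _ in range(generations):
        n = next_generation(n, k, rng)
        if n == 0:
            break
    return n

subcritical = final_population(0.95)    # k just under 1: the population decays
supercritical = final_population(1.05)  # k just over 1: the population grows
```

[On average the population scales as k per generation, so a value of k held just over 1 diverges from one held just under 1 exponentially fast - which is why "driven just over one" is the whole engineering problem.]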

Not taking into account that engineering intelligence will be applied to overcome the first hypothetical difficulty is, indeed, a source of systematic directional pessimistic bias in long-term technological forecasts. Though in this case it was only a decade. I think if Fermi had said that things were 30 years off and Szilard had said 10, I would've been a tad more sympathetic toward Fermi because of the obvious larger reference class - though I would still be trying not to update my brain in the opposite direction from the training example.

Comment author: private_messaging 19 April 2013 09:43:39PM 4 points

because among some of the materials not eliminated by 5 billion years, there would be some unstable enough to decay in 50 billion years, and these would be just-barely-non-fizzing and could be pushed along a little further by human intervention

Except there aren't any that are not eliminated by, say, 10 billion years. And even 40 million years eliminates everything you can make a nuke out of except U235. This is because, besides fizzling, unstable nuclei undergo a highly asymmetric spontaneous fission known as alpha decay.
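[The half-life arithmetic behind this exchange is easy to check: the fraction of a nuclide surviving after time t is 0.5^(t/T½). The half-life figures below (U-238 ≈ 4.47 Gyr, U-235 ≈ 704 Myr, Pu-239 ≈ 24,100 yr) are standard values supplied editorially, not taken from the comment.]

```python
def fraction_left(t_years, half_life_years):
    # Radioactive decay: N(t)/N(0) = 0.5 ** (t / half_life)
    return 0.5 ** (t_years / half_life_years)

EARTH_AGE = 4.5e9  # years

u238 = fraction_left(EARTH_AGE, 4.47e9)   # ~0.50: half remains, but not usefully fissile
u235 = fraction_left(EARTH_AGE, 7.04e8)   # ~0.012: barely survives to the present
pu239 = fraction_left(EARTH_AGE, 2.41e4)  # underflows to 0.0: entirely gone from nature
```

[So of the fissile candidates, only U-235 threads the needle: long-lived enough that about 1% of the primordial stock survives, short-lived enough to be usable - which is the narrow band the thread is arguing about.]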

Comment author: komponisto 18 April 2013 06:33:52AM 4 points

If you view the 90% number as an upper bound, with a few bits' worth of error bars, it doesn't look like such a strong claim. If Szilard and Fermi both agreed that the probability of the bad scenario was 10% or more, then it may well have been dumb luck that Szilard's estimate was higher. Most of the epistemic work would have been in promoting the hypothesis to the 10% "attention level" in the first place.

(Of course, maybe Fermi didn't actually do that work himself, in which case it might be argued that this doesn't really apply; but even if he was anchoring on the fact that others brought it to his attention, that was still the right move.)
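[The "few bits' worth of error bars" can be made concrete by working in log-odds, where a bit is a doubling of the odds. Taking "a few bits" to mean ±2 bits around a 10% point estimate is an editorial assumption, not the comment's:]

```python
import math

def prob_to_bits(p):
    # Log-odds in bits: log2(p / (1 - p))
    return math.log2(p / (1 - p))

def bits_to_prob(b):
    # Inverse transform: odds = 2**b, p = odds / (1 + odds)
    odds = 2.0 ** b
    return odds / (1.0 + odds)

center = prob_to_bits(0.10)     # log2(1/9), about -3.17 bits
low = bits_to_prob(center - 2)  # about 0.027
high = bits_to_prob(center + 2) # about 0.31
```

[On this reading, an estimate of "10%, give or take two bits" spans roughly 3% to 31% - wide enough that Fermi's 10% and a somewhat higher Szilard estimate sit inside the same band, which is komponisto's point about the real work being promotion to the attention level at all.]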

Comment author: Eliezer_Yudkowsky 18 April 2013 05:14:09PM -2 points

I suppose if we postulate that Szilard and Rabi did better by correlated dumb luck, then we can avoid learning anything from this example, yes.