Justifiable Erroneous Scientific Pessimism
In an erratum to my previous post on Pascalian wagers, it has been plausibly argued to me that all the roads to nuclear weapons, including plutonium production from U-238, may have bottlenecked through the presence of significant amounts of Earthly U-235 (apparently even the giant heap of unrefined uranium bricks in Chicago Pile 1 was, functionally, empty space with a scattering of U-235 dust). If this is the case then Fermi's estimate of a "ten percent" probability of nuclear weapons may have actually been justifiable because nuclear weapons were almost impossible (at least without particle accelerators) - though it's not totally clear to me why "10%" instead of "2%" or "50%" but then I'm not Fermi.
We're all familiar with examples of correct scientific skepticism, such as about Uri Geller and hydrino theory. We also know many famous examples of scientists just completely making up their pessimism, for example about the impossibility of human heavier-than-air flight. Before this occasion I could only think offhand of one other famous example of erroneous scientific pessimism that was not in defiance of the default extrapolation of existing models, namely Lord Kelvin's careful estimate from multiple sources that the Sun was around sixty million years of age. This was wrong, but because of new physics - though you could make a case that new physics might well be expected in this case - and there was some degree of contrary evidence from geology, as I understand it - and that's not exactly the same as technological skepticism - but still. Where there are sort of two, there may be more. Can anyone name a third example of erroneous scientific pessimism whose error was, to the same degree, not something a smarter scientist could've seen coming?
I ask this with some degree of trepidation, since by most standards of reasoning essentially anything is "justifiable" if you try hard enough to find excuses and then not question them further, so I'll phrase it more carefully this way: I am looking for a case of erroneous scientific pessimism, preferably about technological impossibility or extreme difficulty, where it seems clear that the inverse case for possibility would've been weaker if carried out strictly with contemporary knowledge, after exploring points and counterpoints. (So that relaxed standards for "justifiability" will just produce even more justifiable cases for the technological possibility.) We probably should also not accept as "erroneous" any prediction of technological impossibility where it required more than, say, seventy years to get the technology.
Comments (116)
"Continental drift" is usually the go-to example. For one, the mechanism originally proposed was complete nonsense...
They didn't have a mechanism at all until subduction, and hence plate tectonics, was discovered. The expanding-Earth theory was actually considered not implausible by geologists for quite a while - it didn't have anything like a plausible mechanism, but neither did continental drift. I was surprised to discover how recent this was.
There was a pretty solid basis for believing that 2-dimensional crystals were thermodynamically unstable and thus couldn't exist. Then in 2004 Geim and Novoselov did it (isolated graphene for the first time) and people had to re-scrutinize the theory, since it was obviously wrong somehow. It turns out that the previous theory was correct for 2D crystals of essentially infinite size, but it seems not to apply to finite crystals. At least that is how it was explained to me once by a theorist on the subject.
The opening paragraph of this paper cites the relevant literature: http://cdn.intechopen.com/pdfs/40438/InTech-The_cherenkov_effect_in_graphene_like_structures.pdf
Single-layer graphene is really, really unstable: if you let it sit free, it readily scrolls up and is very hard to get unstuck. In this sense, Landau's impossibility proof is entirely correct.
And that's why we don't use free-standing graphene without a frame, for just about anything. The closest we get is graphene oxide dissolved in a liquid, or extremely extremely tiny platelets that don't really deserve to be called crystals.
The pessimism about non-usefulness of graphene lay entirely in forgetting that you could put it on a backing or stretch it out (or thinking that it would lose its interesting properties if you did the former), and that was not justifiable at all.
Lord Kelvin was wrong but was he pessimistic? He wasn't saying we could never know the answer, or visit the sun, or anything like that. Yes, he guessed wrongly, and too low, but it doesn't seem to be the case that 'underestimating a quantity' is pessimism. If nothing else, the quantity might be 'number of babies killed'.
It was pessimistic in the sense that under his estimate the sun was steadily cooling, and so we'd all freeze to death long before the real sun would present us any trouble.
Did he give an estimate of when we'd all freeze to death?
He estimated the sun was no more than 20 million years old, and presumably did not expect it to last for more than a few tens of millions of years more.
Not that I know of. Gravitational collapse is a really lousy, short-term source of energy, which is why he gave such a short estimate. Still on the scale of millions of years, I think.
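For concreteness, here's a back-of-the-envelope Kelvin-Helmholtz estimate with modern constants (my own illustration; Kelvin worked with cruder numbers, and the true prefactor depends on the Sun's density profile):

```python
# Rough gravitational-contraction timescale: binding energy / luminosity, ~ G*M^2 / (R*L)
G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
M = 1.989e30     # solar mass, kg
R = 6.957e8      # solar radius, m
L = 3.828e26     # solar luminosity, W

t_years = (G * M**2 / (R * L)) / 3.156e7   # convert seconds to years
print(f"{t_years:.1e} years")              # ~3e7: tens of millions of years, nowhere near billions
```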
Off the top of my head, how about the Landau pole? A famous and usually-right genius calculated that the gauge theories of quantum fields are a dead end, and set Soviet, and to some degree Western, physics back a few years, if I recall correctly. His calculation was not wrong; he simply missed the alternate possibilities.
EDIT: hmm, I'm having trouble locating any links discussing the negative effects of the Landau pole discovery on the QED research.
The claim that the Sun revolves around the Earth. If the Earth revolved around the Sun, there would have been a parallax in the observations of stars from different positions in the orbit. There was no observable parallax, so Earth probably didn't revolve around the Sun.
*there would have been a parallax given assumptions at the time regarding the distance of the stars.
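A rough illustration of why no parallax was detectable before telescopes (the distance below is the modern value for the nearest star system, which nobody knew at the time; only the order of magnitude matters):

```python
import math

AU = 1.496e11        # Earth-Sun distance, metres
d_nearest = 4.1e16   # ~4.3 light-years to Alpha Centauri, metres (modern value)

parallax_arcsec = math.degrees(AU / d_nearest) * 3600
print(f"{parallax_arcsec:.2f} arcsec")   # ~0.75 arcsec

# The best pre-telescopic measurements (Tycho Brahe's) were good to about 1 arcminute,
# nearly two orders of magnitude too coarse. So "no observable parallax" only counted
# against heliocentrism if you assumed the stars were vastly closer than they really are.
```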
I've wondered though: if there were no planets besides Earth would we have persisted as geocentrists until the 19th century?
If there were no celestial bodies but Earth and the sun, we would have been just as correct as heliocentrists.
I don't think that's right.
The center of mass for the Earth-sun system is inside the sun; so, yeah, the heliocentrists wouldn't be "just as correct".
If the two masses were equal, then Earth and Sun would orbit a point that was equidistant to them; and in that scenario heliocentrists and geocentrists would be equally wrong....
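(For scale, with modern values - my numbers, not from the comment above - the Earth-Sun barycenter sits only about

$$ a\,\frac{M_\oplus}{M_\odot + M_\oplus} \approx 1.5\times10^{8}\,\mathrm{km} \times 3\times10^{-6} \approx 450\ \mathrm{km} $$

from the Sun's center, against a solar radius of roughly 700,000 km - "inside the sun" by a very wide margin.)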
Why privilege the center of mass as the reference point? Do we need to find the densest concentration of mass in the known universe to determine what we call the punctum fixum and what we call the punctum mobile?
As far as I can tell, most of the local universe revolves around me. That may be a common human misconception, seeing as I'm not a black hole, if we only go by centers of mass. But do we have to?
(Also, "densest concentration of mass" would probably be in the bible belt.)
I think the center of mass thing is a bit of a red herring here. While velocity and position are all relative, rotation is absolute. You can determine if you're spinning without reference to the outside world. For example, imagine a space station you spin for "gravity". You can tell how fast it's spinning without looking outside by measuring how much gravity there is.
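As a worked example (numbers illustrative): the centripetal acceleration at the rim is $a = \omega^2 r$, so a station of radius 100 m showing a full 1 g on a bathroom scale must be spinning at $\omega = \sqrt{9.8/100} \approx 0.31$ rad/s, about 3 rpm - measured entirely from inside, with no window.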
You can work in Earth-stationary coordinates; there will just be some annoying odd terms in your math as a result (it's a non-inertial reference frame).
Technically, no you can't. Per EY's points on Mach's principle, spinning yourself around (with the resulting apparent movement of stars and feeling of centrifugal stresses) is observationally equivalent to the rest of the universe conspiring to rotate around you oppositely.
The c.g. of the earth/sun solar system would likewise lack a privileged position in such a world.
Is that correct? Spinning implies rotation implies acceleration, which I'd always thought could be detected without external reference points.
Without taking a stance on Mach's principle or that specific question of observational equivalence, what about a spinning body in an otherwise empty universe? As an extreme example, my own body could spin only so fast before tearing itself apart. Surely this holds even if I'm floating in an otherwise utterly empty universe?
This is addressed later in the article, very well IMHO. Let me just give the relevant excerpts:
I agree that it's at least quite plausible (as per your post, it's not proven to follow from GR) that if the universe spun around you, it might be exactly the same as if you were spinning. However, if there's no background at all, then I'm pretty sure the predictions of GR are unambiguous. If there's no preferred rotation, then what do you predict will happen when you spin Newton's buckets at different rates relative to each other?
EDIT: Also, although now I'm getting a bit out of my league, I believe that even in the massive-external-rotating-shell case, the effect is minuscule.
EDIT 2: See this comment.
Are you sure you linked the right comment? That's just someone talking about centripetal vs centrifugal.
I thought that parallax argument was applied to the stars, not the Sun?
Yeah, that's what I meant. (No parallax in star observations -> the Earth isn't moving -> the Sun is revolving around the Earth.)
That's a justifiable error, but I don't see how it's pessimistic.
"Pessimistic" is a loaded term and I'm not sure if it's all that useful in the context of this discussion in the first place.
It's crucial to the original point that Eliezer was making, which was differentiating technological pessimism from technological optimism.
This isn't technology, and though it makes a difference to the universe as a whole, it wouldn't be better or worse for us either way.
Here is another famous example: Chandrasekhar's limit. Eddington rejected the idea of black holes ("I think there should be a law of Nature to prevent a star from behaving in this absurd way!"). Says Wikipedia:
I guess this is not quite what you are asking for, since the math was on Chandrasekhar's side, and Eddington was pinning his hopes on "new physics". To be fair, recent discussions about horizon firewalls could be such new physics.
Eddington erroneously dismissed M(white dwarf) > Mlimit ⇒ "a black hole" , but didn't he correctly anticipate new physics?
Do event horizons (Finkelstein, 1958) not prevent nature from behaving in "that absurd way", so far as we can ever observe?
It's hard to know what Eddington meant by "absurd way". Presumably he meant that this hypothetical law would prevent matter from collapsing into nothing. Possibly if Chandrasekhar had figured out the strange properties of the event horizon back in 1935 and had emphasized that whatever weird stuff is happening beyond the final Chandrasekhar limit is hidden from view, Eddington would not have reacted as harshly. But that took another 20-30 years, even though the relevant calculations require at most 3rd year college math. Besides, Chandrasekhar's strength was in mathematics, not physics, and he could not compete with Eddington in physics intuition (which happened to be quite wrong in this particular case).
I'm not sure if this is justifiable or just an old-fashioned blunder...
-- Auguste Comte, 1835
I'm leaning towards "blunder" myself...
Yeah, blunder. Wikipedia says:
It wasn't until the 1850s that Ångström discovered that elements both emit and absorb light at characteristic wavelengths, which is what spectroscopic analysis of stars is based on, so I'm leaning toward justifiable.
Well, the first half seems approximately correct. The second sentence should have begun with "And by clever application of this means we shall...".
Even if you interpret “visual” as ‘mediated by photons’, there's such a thing as neutrino astronomy.
This has interesting repercussions for Fermi's paradox.
Yes, particularly in the context that you and I discussed earlier that intelligent life arising earlier might have had an easier time wiping itself out. Although the consensus there seemed to be that it wouldn't be a large enough difference to matter for serious filtration issues.
I posted the following in a quotes page a few months back. I don't know how justifiable these were, and these are only questionably pessimism, but there may be some interesting examples in this. In particular, my light knowledge of the subject suggests that there really were extremely compelling reasons to disregard Feynman's formulation of QED for many years after it was first introduced.
[Footnote to: "This was a most disturbing result. Niels Bohr (not for the first time) was ready to abandon the law of conservation of energy". The disturbing result refers to the observations of electron energies in beta-decay prior to hypothesizing the existence of neutrinos.]
-- David Griffiths, Introduction to Elementary Particles (2008), p. 24
Kuhn's Structure of Scientific Revolutions is all about how an old scientific approach is often more right than the new school -- fits the data better, at least in the areas widely acknowledged to be central. Only later does the new approach become refined enough to fit the data better.
To him (Kuhn), it isn't evidence that maintains the old paradigm's status quo, but persuasion: old fellows making remarks about the virtues of their theory. New folks in academia have to convince a good number of people to make the new theory relevant.
Yes, "Science advances one funeral at a time", but this, from Wikipedia, is a pretty good summary of a typical "scientific revolution":
"...Copernicus' model needed more cycles and epicycles than existed in the then-current Ptolemaic model, and due to a lack of accuracy in calculations, Copernicus's model did not appear to provide more accurate predictions than the Ptolemy model. Copernicus' contemporaries rejected his cosmology, and Kuhn asserts that they were quite right to do so: Copernicus' cosmology lacked credibility."
Thomas Malthus' view that in the long run we will always be stuck in (what we now call) the Malthusian trap. He would have been right if not for the sustained growth given to us by the industrial revolution.
Not clear his view is erroneous given suitable values for "long run".
How so? Last I checked, human populations could still, if they wanted to, pop out children faster than the ~2% average real global growth rate since the Industrial Revolution.
What's relevant to whether we are in a Malthusian trap is the actual birth rate, not what the birth rate would be if people wanted to have far more children.
I'll be more explicit then: the 'sustained growth' is almost irrelevant since per the usual Malthusian mechanisms it is quickly eliminated. What made Malthus wrong, what he was pessimistic about, was whether people would exercise "moral restraint" - in other words, he didn't think the demographic transition would happen. It did, and that's why we're wealthy.
But how do you know it's the "moral restraint" that averted the Malthusian catastrophe, rather than the innovations (by the additional humans) that amplified the effective carrying capacity of available resources? In fact, the moral restraint could be keeping us closer to the catastrophe than if we had been producing more humans.
Because population growth can outpace innovation growth. This is not a hard concept.
I know. But your post seemed to be taking the position in favor of population growth (change) as the relevant factor rather than innovation. I was asking why you (seemed to have) thought that.
Population growth and innovation are the two blades of a scissors: innovation drives potential per capita up, population growth drives it down. But the blade of population growth is far bigger than the blade of innovation, because everyone can pump out children and few can pump out innovation.
Hence, innovation can be seen as necessary - but it is not sufficient, in the absence of changes to reproductive patterns.
Okay, that's where I disagree: Each additional person is also another coin toss (albeit heavily stacked against us) in the search for innovators. The question then is whether the possible innovations, weighted by probability of a new person being an innovator (and to what extent) favors more or fewer people.
There's no reason why one effect is necessarily greater than the other and hence no reason for the presumption of one blade being larger.
There is no a priori reason, of course. We can imagine a world in which brains were highly efficient and people looked more like elephants, in which one could revolutionize physics every year or so but it takes a decade to push out a calf.
Yet, the world we actually live in doesn't look like that. A woman can (and historically, many have) spend her life in the kitchen making no such technological contributions but having 10 kids. (In fact, one of my great-grandmothers did just that.) It was not China or India which launched the Scientific and Industrial Revolutions.
I can't prove this, but I believe that in the United States and Western Europe we would still be rich (in the sense that calorie deprivation wouldn't pose a health risk to the vast majority of the population) if the birth rate had stayed the same since Malthus's time.
That makes no sense to argue: Malthus's time was part of the demographic transition. Of course I would agree that if the demographic transition continued post-Malthus - as it did - we would see higher per capita (as we did).
But look up the extremely high birth rates of some times and places (you can borrow some figures from http://www.marathon.uwc.edu/geography/demotrans/demtran.htm ), apply modern United States & Western Europe infant and child mortality rates, and tell me whether the population growth rate is merely much higher than the real economic growth rates of ~2% or extraordinarily higher. You may find it educational.
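To make that concrete (the fertility and generation-length figures below are illustrative choices of mine, not taken from that page):

```python
# Pre-transition fertility combined with modern (near-zero) child mortality:
surviving_children_per_woman = 5.9   # e.g. ~6 births with ~98% surviving to adulthood
generation_years = 27.0

per_generation = surviving_children_per_woman / 2            # ~2.95x population per generation
annual_growth = per_generation ** (1 / generation_years) - 1
print(f"{annual_growth:.1%} per year")                       # ~4.1%

# Against ~2% real economic growth, per-capita income shrinks ~2% a year,
# halving roughly every 35 years - "extraordinarily higher", not "merely much higher".
```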
But I believe that from the point of view of maximizing the per person wealth of the United States and Western Europe the population growth rate has been much, much too low since the industrial revolution. (I admittedly have no citations to back this up.)
Maybe. That's not the same thing as what you said initially, though.
We'll just evolve for restraint not to work any more.
Yes, that's the question: is the demographic transition temporary? I've brought it up before: http://lesswrong.com/lw/5dl/is_kiryas_joel_an_unhappy_place/
(Was there a SMBC comic or something about men evolving a condom-breaking mechanism in their penis?)
We're rapidly evolving a condom-not-putting-on mechanism in the brain.
I was always under the impression that what thwarted his hypothesis was the rise of effective and widespread birth control. I remember reading one of his works and noting that it was operating on the assumption that, to reduce birthrate to sustainable levels, sex would have to be reduced, and that was unlikely. It is unlikely, but it's also mostly decoupled from childbirth now, at least in the developed world.
Have I misinterpreted something here?
I believe he considered the possibility of birth control, referring to it as "immorality".
"Watch out for that cliff!"
"It looks pretty far off, and besides, we're turning left soon anyway."
"But we could keep accelerating!"
Your reply seems completely irrelevant to the Malthusian point that population growth can always exceed growth in total factor productivity, and so it is population growth - or lack of growth - which dominates and determines per capita.
The general success rate of breakthroughs is pretty damn low, and so I'd argue that most examples of "invalid" pessimism (excluding some stupid ones coming from scientists you never heard of before coming across a quote, and excluding things like PR campaigning by Edison), viewed in the context of almost all breakthroughs failing for some reason you can't anticipate, are not irrational but simply reflect absence of strong evidence in favour of success (and absence of strong evidence against unknown obstacles), at the time of assessment (and corresponding regression towards the mean rate of success). They're merely not as hindsight resistant as Fermi's example. You look back at history seeing things that succeeded. Go read archive of some old journals, and note the zillions of amazing breakthroughs that did not pan out.
If the bomb had not relied on the unusual U-235, Fermi would not have been irrational to assign a 10% probability to the emission of secondary neutrons from fission - it is something that most likely either happens for all fissions or doesn't happen for any fission, so the clever "there would be one" argument doesn't work irrespective of U-235. U-235 is not the most general valid objection; it's just the objection for which sources are easiest to find. No one did the silly task of writing out that production of secondary neutrons is not a statistically independent fact across different nuclei, and we're lucky that there's just one isotope in play so we don't have to, either.
I'm having trouble understanding your second paragraph. This is probably just due to missing background knowledge on my part, but would you mind explaining what you mean by:
and
Thanks!
There was a really silly argument about Fermi's 10% estimate, scattered over several threads (which the OP talks about). Yudkowsky had been arguing that Fermi's estimate was too low. He came up with the idea that surely there would have been one element (out of many) that would have worked, so the probability should have been higher. That was wrong because a) it's not as if some elements' fissions released neutrons and some didn't, and b) there was only one isotope to start from (U-235), not many.
Do all elements' fissions release neutrons?
Yes. The issue is that the argument "look at periodic table, it's so big, there would be at least one" requires that the fact of fission releasing neutrons would be assumed independent across nuclei.
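A toy version of why that assumption matters (the 10% is Fermi's figure; the isotope count is made up purely for illustration):

```python
p = 0.10   # probability that fission emits secondary neutrons
N = 30     # pretend there were 30 candidate heavy isotopes to try

# If the fact were an independent coin flip for each isotope:
print(1 - (1 - p) ** N)   # ~0.96 - "surely at least one would work"

# But if neutron emission is a single shared fact about fission in general
# (as it essentially is), listing more isotopes doesn't help: the chance that
# *any* of them works stays at ~p = 0.10.
```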
Gotcha, thanks.
This isn't what you asked for, but I might as well enumerate a few of these examples, for everyone's benefit. For the field of AI research:
George Pólya (1954), ch. 15 — a few decades before the probabilistic revolution in AI.
Mortimer Taube (1960) — not long before computers began to regularly dominate amateur and then expert chess players. (Edit: this one seems wrong)
Satosi Watanabe (1974) — a couple decades before both supervised and unsupervised machine learning took off.
Also, Hubert Dreyfus mocked the capabilities of chess computers, and compared AI to alchemy, in Dreyfus (1965) — a mere two years before he was defeated by the chess computer Mac Hack.
Technically, he was correct.
I like the idea of football (soccer) played by quadrupeds.
Taube did not mean "Machines cannot be made to choose good chess moves" (a claim that has, indeed, been amply falsified). Here's a bit more context, from the linked paper.
Taube's point, if I'm not misunderstanding him grossly, is that part of what it means to play a game of chess is (not merely to choose moves repeatedly until the game is over, but) to have something like the same experience as a human player has: seeing the spatial relationships between the pieces, for example. He thinks that's something machines fundamentally cannot do, and that is why he thinks machines cannot play chess.
Now, for the avoidance of doubt, I think he was badly wrong about all that. Someone blind from birth can learn to play chess, and I hope Taube wouldn't really want to say that such a player isn't really playing chess because she isn't having the same visual/spatial experiences as a sighted player. And most likely one day computers (or some other artificially constructed machines) will be having experiences every bit as rich and authentic as humans have. (Taube wrote a book claiming this was impossible. I haven't seen it myself, but from what little I've read about it its arguments were very weak.)
But his main claim about machines here isn't one that's been nicely falsified by later events. We have machines that do a very good job of evaluating positions and choosing moves, but he never claimed that that was impossible. We don't yet have machines that play chess in the very strong sense he's demanding, or even the weaker sense of using anything closely analogous to human visual perception to play. (I suppose you might say that programs using a "bitboard" representation are doing something a little along those lines, but somehow I doubt Taube would have been convinced.)
... Also, Taube wasn't a scientist or a computer expert or a chess expert or even a philosopher. He was a librarian. A librarian is a fine thing to be, but it doesn't confer the kind of expertise that would make it surprising or even very interesting for Taube to have been wrong here.
You accuse lukeprog of being misleading in taking a quote from a mere "librarian", and as we all know, a librarian is a harmless drudge who just shelves books, hence
I accuse you of being highly misleading in at least two ways here:
Mortimer Taube turns out to be the kind of 'librarian' who exemplifies this; the little byline to his letter about "Documentation Incorporated" should have been an indicator that maybe he was more than just a random schoolhouse librarian stamping in kids' books, but because you did not see fit to add any background on what sort of 'librarian' Taube was, I will:
So to summarize: he was a trained philosopher and tech startup co-founder who invented new information technology and handled documentation tasks who was familiar with the cybernetics literature and traveled in the same circles as people like Vannevar Bush.
And you write
!
An upvote for correctly contextualizing what Taube wrote, and a mental downvote for being lazy or deceptive in your final paragraph.
I really can't think of a polite way to say this, so:
Bullshit.
I wasn't accusing Luke of anything; I was disagreeing with him. Disagreement is not accusation. When I want to make an accusation, I will make an accusation, like this one: You have mischaracterized what I wrote, and made totally false insinuations about my opinions and attitudes, and I have to say I'm pretty shocked to see someone as generally excellent as you behaving in such a way.
I do not think, and I did not say, and I had not the slightest intention of implying, that "a librarian is a harmless drudge who just shelves books".
Allow me to remind you how Luke's comment begins. The boldface emphasis is mine.
Taube was, despite his many excellent qualities, not a scientist as that term is generally understood, and he was, despite his many excellent qualities, not working in "the field of AI research".
(Yes, I know the Wikipedia page says he was "a true innovator in the field of science". Reading what it says he did, though, I really can't see that what he did was science. For the avoidance of doubt, and in the probably overoptimistic hope that saying this will stop you pulling the same what-a-snob-this-person-is move as you already did above, I don't think that "not science" is in any way the same sort of thing as "not valuable" or "not important" or "not difficult". What the creators of (say) the Firefox web browser did was important and valuable and difficult, but happens not to be science. What Beethoven did was important and valuable and difficult, but happens not to be science. What Martin Luther King did was important and valuable and difficult, but happens not to be science.)
Pointing this out doesn't mean I think there's anything wrong with being a librarian. When I said "a librarian is a fine thing to be", I meant it. (And, for the avoidance of doubt, it is my opinion both when "librarian" means "someone who shelves books in a library" and when it means "a world-class expert on organizing information in catalogues".)
Now, having said all that, I should add that you are quite right about one thing: when I said that Taube was neither a computer expert nor a philosopher, I was oversimplifying. (Not least because I hadn't looked deeply into Taube's career.) He was an important innovator in the use of punched cards for document indexing, which is quite a bit like being a computer expert; and he was a PhD in philosophy, which is quite a bit like being a philosopher. None the less, I stand by what I said: neither being a world-class expert in document indexing, nor knowing a lot about punched-card reading machinery, nor being a PhD in philosophy, seems to me to be the kind of expertise that makes it particularly startling if one's wrong about whether machines can play chess.
(And, once again, for the avoidance of doubt, I am not in the least trying to belittle his expertise and creativity. I just don't see that they were the kind of expertise and creativity that make it startling for someone to be wrong about the possibilities of computer chess-playing.)
[EDITED to clarify a bit of wording and add some emphasis. ... And again, later, to add a missing negative; oops. Also, while I'm here, two other remarks. 1: I regret the confrontational tone this exchange has taken; but I don't see any way I could have responded sufficiently forcefully to the accusations levelled at me without perpetuating it. 2: I see a lot of downvotes are flying around in this subthread. For the record, I haven't cast any.]
Thank you for your research. I was misled by the grandparent.
"Eliezer" should be "lukeprog".
Hah, whups. And so it goes - you correct Eliezer's lack of examples, gjm corrects your description of Taube, I correct gjm's description of Taube, and you correct my description of gjm's description...
Would a chess program that has a table of all the lines on the board that keeps track of whether they are empty or not and that uses that table as part of its move choosing algorithm qualify? If not, I think we might be into qualia territory when it comes to making sense of how exactly a human is recognizing the emptiness of a line and that program isn't.
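For concreteness, here is a minimal sketch of the kind of table I mean (my own illustration, not any real engine's code):

```python
from typing import Dict, List, Tuple

Square = Tuple[int, int]   # (file 0-7, rank 0-7)

def open_files(occupied: List[Square]) -> Dict[int, bool]:
    """For each file, record whether no piece stands on it."""
    occupied_files = {f for f, _ in occupied}
    return {f: f not in occupied_files for f in range(8)}

# A move-choosing heuristic might then prefer rook moves onto files marked True.
# The program "recognizes" an empty line only as a boolean entry in this table.
print(open_files([(4, 0), (4, 7), (3, 1)]))
```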
Yup. I strongly suspect that Taube was in fact "into qualia territory", or something along those lines, when he wrote that.
Here's an example of the 'opposite' - a case of unjustifiable correct optimism:
Columbus knew the Earth was round, but he should also have known the radius of the Earth and the size of Eurasia well enough to know that the westward voyage to Asia was simply impossible with the ships and supplies he went with. It seems to have turned out OK for him, though.
This is probably not a very useful example and I wouldn't be surprised to see that there were plenty more of these examples.
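For a sense of the numbers (the circumference is roughly Eratosthenes' correct classical value; the longitudes and latitude are modern approximations, and the comparison is mine):

```python
import math

circumference_km = 40_000                   # roughly Eratosthenes' value, and about right
lon_canaries_west, lon_japan_east = 16, 140
gap_degrees = 360 - (lon_canaries_west + lon_japan_east)   # ~204 degrees sailing west
latitude_deg = 28                                          # roughly the Canaries' latitude

westward_km = circumference_km * gap_degrees / 360 * math.cos(math.radians(latitude_deg))
print(f"{westward_km:,.0f} km")   # ~20,000 km of open ocean, absent the Americas

# Columbus's own estimate was on the order of 4,000 km - several times too small,
# and far beyond what his ships could realistically have provisioned for.
```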
"All" is such a strong word unless supplemented with qualifiers. I question whether the arguments are plausible enough to support that absolute. The route "wait for an extra century or two of particle physics research and spend a few trillion producing the initial seed stock" would still be available.
In context, Fermi was considering something rather more short-term: WW2.
That said, he may not have scoped his statement to such a small scale.
One of many suitable and sufficient qualifiers that could make the arguments plausible.
This blog post claims that only a few years before the Wright brothers' success, the consensus was that flying machines would necessarily have to be less dense than air (like hot air balloons).