"Continental drift" is usually the go-to example. For one, the mechanism originally proposed was complete nonsense...
There was a pretty solid basis for believing that 2-dimensional crystals were thermodynamically unstable and thus couldn't exist. Then in 2004 Geim and Novoselov did it (isolated graphene for the first time) and people had to re-scrutinize the theory, since it was obviously wrong somehow. It turns out that the previous theory was correct for 2D crystals of essentially infinite size, but does not apply to finite crystals. At least that is how it was explained to me once by a theorist on the subject.
The opening paragraph of this paper cites the relevant literature: http://cdn.intechopen.com/pdfs/40438/InTech-The_cherenkov_effect_in_graphene_like_structures.pdf
Single-layer graphene is really, really unstable: if you let it sit free, it readily scrolls up and is very hard to get unstuck. In this sense, Landau's impossibility proof is entirely correct.
And that's why we don't use free-standing graphene without a frame, for just about anything. The closest we get is graphene oxide dissolved in a liquid, or extremely extremely tiny platelets that don't really deserve to be called crystals.
The pessimism about the non-usefulness of graphene lay entirely in forgetting that you could put it on a backing or stretch it out (or in thinking that it would lose its interesting properties if you did the former), and that was not justifiable at all.
Lord Kelvin was wrong, but was he pessimistic? He wasn't saying we could never know the answer, or visit the Sun, or anything like that. Yes, he guessed wrongly, and too low, but 'underestimating a quantity' doesn't seem to be pessimism in itself. If nothing else, the quantity might be 'number of babies killed'.
The claim that the Sun revolves around the Earth. If the Earth revolved around the Sun, there would have been an observable parallax in the positions of stars seen from different points in the orbit. No parallax was observed, so the Earth probably didn't revolve around the Sun.
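For scale, here's a rough sketch of why no parallax was visible (using modern values, which of course weren't available to the ancients): even the nearest star's annual parallax is two orders of magnitude below naked-eye resolution.

```python
import math

# Annual parallax: the half-angle p of a star's apparent shift as Earth
# crosses its orbit satisfies tan(p) = a / d, with a = 1 AU and
# d = distance to the star.
AU_KM = 1.496e8      # 1 astronomical unit, km
PC_KM = 3.086e13     # 1 parsec, km

def parallax_arcsec(distance_pc):
    """Annual parallax half-angle, in arcseconds."""
    p_rad = math.atan(AU_KM / (distance_pc * PC_KM))
    return math.degrees(p_rad) * 3600

# Proxima Centauri, the nearest star, is about 1.30 pc away.
p = parallax_arcsec(1.30)
NAKED_EYE_ARCSEC = 60.0   # naked-eye resolution is roughly 1 arcminute

print(f"parallax of nearest star: {p:.2f} arcsec")
print(f"naked-eye limit is ~{NAKED_EYE_ARCSEC / p:.0f}x larger")
```

So the pre-telescopic observers were reasoning correctly from their instruments; the stars were simply much farther away than anyone suspected.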
Off the top of my head, how about the Landau Pole? A famous and usually right genius calculated that the gauge theories of quantum fields are a dead end, and set Soviet (and to some degree Western) physics back a few years, if I recall correctly. His calculation was not wrong; he simply missed the alternate possibilities.
EDIT: hmm, I'm having trouble locating any links discussing the negative effects of the Landau pole discovery on the QED research.
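For what it's worth, the pole itself is easy to reproduce from the textbook one-loop running of the fine-structure constant (a sketch with modern numerical inputs; Landau's own calculation was of course in different terms):

```python
import math

# One-loop running of the QED coupling:
#   1/alpha(Q) = 1/alpha - (2 / (3*pi)) * ln(Q / m_e)
# The denominator vanishes -- the "Landau pole" -- at
#   Q_pole = m_e * exp(3*pi / (2*alpha))
alpha = 1 / 137.036   # low-energy fine-structure constant
m_e_eV = 0.511e6      # electron mass-energy, eV

Q_pole = m_e_eV * math.exp(3 * math.pi / (2 * alpha))

# The pole sits absurdly far above even the Planck scale (~1e28 eV),
# so it signals a formal breakdown, not a practical obstacle to QED.
print(f"Landau pole at roughly 1e{math.log10(Q_pole):.0f} eV")
```

The irony is visible in the numbers: the inconsistency Landau found is real, but it only bites at energies so enormous that it never mattered for doing quantum field theory.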
We also know many famous examples of scientists just completely making up their pessimism, for example about the impossibility of human heavier-than-air flight.
This isn't what you asked for, but I might as well enumerate a few of these examples, for everyone's benefit. For the field of AI research:
"You can build a machine to draw [logical] conclusions for you, but I think you can never build a machine that will draw [probabilistic] inferences."
George Pólya (1954), ch. 15 — a few decades before the probabilistic revolution in AI.
[Machines] cannot play chess any more than they can play football.
Technically, he was correct.
Taube did not mean "Machines cannot be made to choose good chess moves" (a claim that has, indeed, been amply falsified). Here's a bit more context, from the linked paper.
[...] there are analog relationships in real chess -- such as the emptiness of a line [...] which cannot be directly handled by any digital machine. These analog relationships can be approximated digitally [...] in order to determine whether a given line is empty [...] such a set of calculations is not identical to the visual recognition that the space between two pieces is empty. A large part of the enjoyment of chess [...] derives from its deployment or topological character, which a machine cannot handle except by elimination. If game is used in the usual sense -- that is, as it was used before the word was redefined by computer enthusiasts with nothing more serious to do -- it is possible to state categorically that machines cannot play games. They cannot play chess any more than they can play football.
Taube's point, if I'm not misunderstanding him grossly, is that part of what it means to play a game of chess is (not merely to choose moves repeatedly until the game is over, but) to have somethin...
You accuse lukeprog of being misleading in taking a quote from a mere "librarian", and as we all know, a librarian is a harmless drudge who just shelves books, hence
it doesn't confer the kind of expertise that would make it surprising or even very interesting for Taube to have been wrong here.
I accuse you of being highly misleading in at least two ways here:
Mortimer Taube turns out to be the kind of 'librarian' who exemplifies this; the little byline to his letter about "Documentation Incorporated" should have been an indicator that maybe he was more than just a random schoolhouse librarian stamping in kids' books, but because you did not see fit to add any background on what sort of 'librarian' Taube was, I will:
...He is on the list of the 100 most important leaders in Library and I
Here is another famous example: Chandrasekhar's limit. Eddington rejected the idea of black holes ("I think there should be a law of Nature to prevent a star from behaving in this absurd way!"). Wikipedia says:
Chandra's discovery might well have transformed and accelerated developments in both physics and astrophysics in the 1930s. Instead, Eddington's heavy-handed intervention lent weighty support to the conservative community of astrophysicists, who steadfastly refused even to consider the idea that stars might collapse to nothing.
I guess this ...
The general success rate of breakthroughs is pretty damn low, and so I'd argue that most examples of "invalid" pessimism (excluding some stupid ones coming from scientists you never heard of before coming across a quote, and excluding things like PR campaigning by Edison), viewed in the context of almost all breakthroughs failing for some reason you can't anticipate, are not irrational but simply reflect absence of strong evidence in favour of success (and absence of strong evidence against unknown obstacles), at the time of assessment (and corre...
I'm not sure if this is justifiable or just an old-fashioned blunder...
On the subject of stars, all investigations which are not ultimately reducible to simple visual observations are…necessarily denied to us… We shall never be able by any means to study their chemical composition.
-- Auguste Comte, 1835
I'm leaning towards "blunder" myself...
Yeah, blunder. Wikipedia says:
In the 1820s both John Herschel and William H. F. Talbot made systematic observations of salts using flame spectroscopy. In 1835, Charles Wheatstone reported that different metals could be easily distinguished by the different bright lines in the emission spectra of their sparks, thereby introducing an alternative mechanism to flame spectroscopy.
it has been plausibly argued to me that all the roads to nuclear weapons, including plutonium production from U-238, may have bottlenecked through the presence of significant amounts of Earthly U235
This has interesting repercussions for Fermi's paradox.
I posted the following in a quotes page a few months back. I don't know how justifiable these were, and these are only questionably pessimism, but there may be some interesting examples in this. In particular, my light knowledge of the subject suggests that there really were extremely compelling reasons to disregard Feynman's formulation of QED for many years after it was first introduced.
...It is interesting to note that Bohr was an outspoken critic of Einstein's light quantum (prior to 1924), that he mercilessly denounced Schrodinger's equation, discourag
Here's an example of the 'opposite' - a case of unjustifiable correct optimism:
Columbus knew the Earth was round, but he should also have known the radius of the Earth and the size of Eurasia well enough to know that the westward voyage to Asia was simply impossible with the ships and supplies he went with. It seems to have turned out OK for him, though.
This is probably not a very useful example and I wouldn't be surprised to see that there were plenty more of these examples.
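A back-of-envelope supports this. The inputs below are my own rough assumptions (longitudes for the Canaries and Japan, a sailing latitude near 28°N, and one common reconstruction of Columbus's own figure, which historians' estimates vary on), but any reasonable inputs give the same picture: the real westward distance was several times what Columbus provisioned for.

```python
import math

EARTH_CIRCUMFERENCE_KM = 40_075   # modern equatorial value

def westward_km(lon_start_deg_w, lon_dest_deg_e, latitude_deg):
    """Distance sailing west along a parallel, from a start longitude
    given in degrees West to a destination in degrees East."""
    gap_deg = 360 - (lon_start_deg_w + lon_dest_deg_e)  # westward gap
    circumference_at_lat = (EARTH_CIRCUMFERENCE_KM
                            * math.cos(math.radians(latitude_deg)))
    return gap_deg / 360 * circumference_at_lat

# Assumed positions: Canary Islands ~16 W, Japan ~140 E, latitude ~28 N.
actual_km = westward_km(16, 140, 28)
columbus_estimate_km = 3_700   # one common reconstruction; estimates vary

print(f"actual westward distance: ~{actual_km:,.0f} km")
print(f"Columbus's estimate:      ~{columbus_estimate_km:,} km")
```

Roughly a factor of five, with numbers (Eratosthenes-style circumference estimates, Ptolemaic geography) that were arguably available to a careful contemporary.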
Kuhn's Structure of Scientific Revolutions is all about how an old scientific approach is often more right than the new school -- fits the data better, at least in the areas widely acknowledged to be central. Only later does the new approach become refined enough to fit the data better.
Thomas Malthus' view that in the long run we will always be stuck in (what we now call) the Malthusian trap. He would have been right if not for the sustained growth given to us by the industrial revolution.
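To make the trap concrete, here's a toy model of my own construction (purely illustrative, not anything Malthus wrote): output has diminishing returns to labor on fixed land, and population growth tracks income above subsistence but has a biological ceiling. Slow technological progress gets eaten by population; progress fast enough to outrun the ceiling lets income per head escape.

```python
SUBSISTENCE = 1.0       # income per head at which population just replaces itself
MAX_POP_GROWTH = 0.03   # biological ceiling on population growth per period

def simulate(periods, tech_growth, A=1.0, L=1.0, sensitivity=0.5):
    """Toy Malthusian economy: output Y = A * sqrt(L) (diminishing
    returns to labor L on fixed land); technology A grows exogenously;
    population growth responds to income per head y = Y / L."""
    for _ in range(periods):
        y = A * L**0.5 / L
        pop_growth = min(MAX_POP_GROWTH,
                         sensitivity * (y - SUBSISTENCE) / SUBSISTENCE)
        L *= 1 + pop_growth
        A *= 1 + tech_growth
    return A / L**0.5   # final income per capita

# Slow progress: population growth eats the gains, income stays
# pinned just above subsistence -- the Malthusian trap.
print(simulate(500, tech_growth=0.001))

# Progress fast enough to outrun the population ceiling: income
# per head compounds away from subsistence.
print(simulate(500, tech_growth=0.02))
```

In the model, Malthus is right for any growth rate slow enough that population can keep up, which covers essentially all of pre-industrial history; the industrial revolution's sustained growth is what breaks the assumption.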
it has been plausibly argued to me that all the roads to nuclear weapons, including plutonium production from U-238, may have bottlenecked through the presence of significant amounts of Earthly U235 (apparently even the giant heap of unrefined uranium bricks in Chicago Pile 1 was, functionally, empty space with a scattering of U235 dust).
All is such a strong word unless supplemented with qualifiers. I question the plausibility of the arguments supporting that absolute. The route "wait for an extra century or two of particle physics research and spend a few trillion producing the initial seed stock" would still be available.
In context, Fermi was considering something rather more short-term: WW2.
That said, he may not have scoped his statement to such a small scale.
This appears to me to be an instance of a common error: assuming that when someone says something, they intended every inference you find it natural to make from it.
It's a common error indeed, and one that is justifiable when enough other people make it. Yeah, Hitler said to kill all the Jews, but he really meant to kill the Jew inside, not real Jews. If I may quote your other comment:
(I agree that private_messaging's comment is extremely silly, and I regret the fact that what I wrote seems to have encouraged it.)
Indeed.
If you don't especially care what I actually think, then what the hell are you doing putting words into my mouth about how librarians are uninteresting low-status unintellectual drudges? (Which, just in case it needs saying again, in no way resembles my actual opinion.)
Right, because you just threw that in for no reason...
I'm sure you can find definitions according to which Taube's work was "science".
And I even gave several. Feel free to deal with the examples; do you think computer science and AI are not 'science'?
Here's an extreme example: Richard Dawkins is on record as accepting the term "cultural Christian" as applying to him. I would accordingly not say that RD cannot be construed as 'Christian' no matter how broadly defined -- but, none the less, for most purposes describing him as a Christian would be silly.
I don't see what's the least bit silly about describing him as a "cultural Christian", especially if he accepts the label. He was indeed raised in a Christian culture and implicitly accepts a lot of the background beliefs, like belief in guilt and sin (heck, I still think in those terms to some degree and say things like 'goddamn it'); even if we don't go quite as far as Moldbug in diagnosing Dawkins as holding to a puritanical secular Christianity, the influence is ineradicable. There is no view from nowhere.
Ian Bostridge has a doctorate in history, and spent some time as an academic historian. However, I would not now call him a historian but a singer. (Or, more specifically, a tenor.)
Wow, so not only is he a trained historian who has published & defended his doctorate of original research, you describe him as actually having been in academia post-graduate school, and you still won't describe him as a historian? Would I describe him as a historian? Heck yes. Because if I won't even grant that description to Bostridge, I don't know who the heck I would grant it to. You know, describing someone as a historian is not committing to describing him as a 'great historian' or a 'ground-breaking historian' or a 'famous historian'. You don't need to be Marvin Minsky to be called 'an AI researcher' and you don't need to be a pre-eminent figure to be described as a worker in a field. Even a bad programmer is still a 'programmer'; someone who has moved up into management is still a programmer even if they haven't written a large program in years.
Angela Merkel has a PhD in physics, but I wouldn't now call her a physicist but a politician (or, perhaps, some more august term along those lines).
From Wikipedia: "After being awarded a doctorate (Dr. rer. nat.) for her thesis on quantum chemistry,[17] she worked as a researcher and published several papers."
But no, all that is chopped liver because gjm doesn't think she's a physicist/chemist.
George Soros has a PhD in philosophy but I wouldn't call him a philosopher.
I imagine Soros would be disappointed to hear that; his Popperian philosophy grounds his 'reflexivity' on which he has written extensively and believes can significantly influence economics as it's currently practiced.
So: no, the fact that someone got a PhD in philosophy in 1935 is not sufficient reason to call them a philosopher in 1960.
It is more than sufficient. Taube had excellent training (the University of Chicago, especially in the 1930s thanks to Adler & Hutchins, was a philosophy powerhouse, and still is to some extent - ranked #24 in the Anglosphere by Leiter), received his PhD, kept up with the issues both as a practitioner and commenter, and was reportedly working on a philosophy book when he died. He was a philosopher. And your other examples were hardly better.
On flipping the bozo bit
Before you bother to read any of what follows, I would be grateful if you would answer the following question: Have you, in fact, bozo-bitted me? Because I've been proceeding on the assumption that it is in principle possible for us to have a reasoned discussion, but that's looking less and less true, and if I'm wasting my time here then I'd prefer to stop.
On librarians and librarianship
Unless I misunderstand you badly, you are arguing either that I have been lying constantly about this or that I am appallingly unaware of my own opi...
In an erratum to my previous post on Pascalian wagers, it has been plausibly argued to me that all the roads to nuclear weapons, including plutonium production from U-238, may have bottlenecked through the presence of significant amounts of Earthly U235 (apparently even the giant heap of unrefined uranium bricks in Chicago Pile 1 was, functionally, empty space with a scattering of U235 dust). If this is the case then Fermi's estimate of a "ten percent" probability of nuclear weapons may have actually been justifiable because nuclear weapons were almost impossible (at least without particle accelerators) - though it's not totally clear to me why "10%" instead of "2%" or "50%" but then I'm not Fermi.
We're all familiar with examples of correct scientific skepticism, such as about Uri Geller and hydrino theory. We also know many famous examples of scientists just completely making up their pessimism, for example about the impossibility of human heavier-than-air flight. Before this occasion I could only think offhand of one other famous example of erroneous scientific pessimism that was not in defiance of the default extrapolation of existing models, namely Lord Kelvin's careful estimate from multiple sources that the Sun was around sixty million years of age. This was wrong, but because of new physics - though you could make a case that new physics might well be expected in this case - and there was some degree of contrary evidence from geology, as I understand it - and that's not exactly the same as technological skepticism - but still. Where there are sort of two, there may be more. Can anyone name a third example of erroneous scientific pessimism whose error was, to the same degree, not something a smarter scientist could've seen coming?
I ask this with some degree of trepidation, since by most standards of reasoning essentially anything is "justifiable" if you try hard enough to find excuses and then not question them further, so I'll phrase it more carefully this way: I am looking for a case of erroneous scientific pessimism, preferably about technological impossibility or extreme difficulty, where it seems clear that the inverse case for possibility would've been weaker if carried out strictly with contemporary knowledge, after exploring points and counterpoints. (So that relaxed standards for "justifiability" will just produce even more justifiable cases for the technological possibility.) We probably should also not accept as "erroneous" any prediction of technological impossibility where it required more than, say, seventy years to get the technology.
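Incidentally, Kelvin's figure wasn't pulled from thin air: one of his inputs was the gravitational-contraction (Kelvin-Helmholtz) timescale, and a back-of-envelope with modern constants reproduces the same order of magnitude he got.

```python
# Kelvin-Helmholtz timescale: roughly how long the Sun could shine on
# gravitational contraction energy alone, t ~ G * M^2 / (R * L).
# This kind of estimate underlies Kelvin's "tens of millions of years";
# the constants below are modern values.
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30         # solar mass, kg
R_SUN = 6.957e8          # solar radius, m
L_SUN = 3.828e26         # solar luminosity, W
SECONDS_PER_YEAR = 3.156e7

t_seconds = G * M_SUN**2 / (R_SUN * L_SUN)
t_years = t_seconds / SECONDS_PER_YEAR

print(f"~{t_years / 1e6:.0f} million years")  # tens of millions, not billions
```

Given his physics, the calculation was sound; it took nuclear fusion, i.e. genuinely new physics, to stretch the answer to billions of years.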