As an erratum to my previous post on Pascalian wagers: it has been plausibly argued to me that all the roads to nuclear weapons, including plutonium production from U-238, may have bottlenecked through the presence of significant amounts of Earthly U-235 (apparently even the giant heap of unrefined uranium bricks in Chicago Pile-1 was, functionally, empty space with a scattering of U-235 dust). If this is the case, then Fermi's estimate of a "ten percent" probability of nuclear weapons may actually have been justifiable, because nuclear weapons were almost impossible (at least without particle accelerators) - though it's not totally clear to me why "10%" rather than "2%" or "50%", but then I'm not Fermi.
We're all familiar with examples of correct scientific skepticism, such as about Uri Geller and hydrino theory. We also know many famous examples of scientists just completely making up their pessimism, for example about the impossibility of human heavier-than-air flight. Before this occasion I could only think offhand of one other famous example of erroneous scientific pessimism that was not in defiance of the default extrapolation of existing models: Lord Kelvin's careful estimate, from multiple sources, that the Sun was around sixty million years old. This was wrong, but only because of new physics (nuclear fusion) - though you could make a case that new physics might well have been expected in this case, and there was some degree of contrary evidence from geology, as I understand it, and an estimate of the Sun's age is not exactly the same as technological skepticism - but still. Where there are sort of two, there may be more. Can anyone name a third example of erroneous scientific pessimism whose error was, to the same degree, not something a smarter scientist could have seen coming?
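(For a sense of where a figure in the tens of millions of years comes from: the gravitational-contraction timescale, written here in modern notation with modern solar values as a crude stand-in for Kelvin's own more careful calculation, is

$$t_{\mathrm{KH}} \sim \frac{G M_\odot^2}{R_\odot L_\odot} \approx \frac{(6.7\times 10^{-11})(2\times 10^{30})^2}{(7\times 10^8)(3.8\times 10^{26})}\ \mathrm{s} \approx 10^{15}\ \mathrm{s} \approx 3\times 10^{7}\ \mathrm{years},$$

the same order as Kelvin's answer: gravitational contraction alone simply cannot power the Sun for much longer than that.)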
I ask this with some trepidation, since by most standards of reasoning essentially anything is "justifiable" if you try hard enough to find excuses and then decline to question them further. So I'll phrase it more carefully this way: I am looking for a case of erroneous scientific pessimism, preferably about technological impossibility or extreme difficulty, where it seems clear that the opposite case for possibility, argued strictly from contemporary knowledge and after exploring points and counterpoints, would have been the weaker one. (That way, relaxed standards for "justifiability" will just produce even more justifiable cases for technological possibility.) We probably also should not count as "erroneous" any prediction of technological impossibility where the technology took more than, say, seventy years to arrive.
Mortimer Taube did not mean "Machines cannot be made to choose good chess moves" (a claim that has, indeed, been amply falsified). Here's a bit more context, from the linked paper.
Taube's point, if I'm not misunderstanding him grossly, is that part of what it means to play a game of chess is not merely to choose moves repeatedly until the game is over, but to have something like the same experience a human player has: seeing the spatial relationships between the pieces, for example. He thinks that is something machines fundamentally cannot do, and that is why he thinks machines cannot play chess.
Now, for the avoidance of doubt, I think he was badly wrong about all that. Someone blind from birth can learn to play chess, and I hope Taube wouldn't really want to say that such a player isn't really playing chess because she isn't having the same visual/spatial experiences as a sighted player. And most likely one day computers (or some other artificially constructed machines) will be having experiences every bit as rich and authentic as humans have. (Taube wrote a book claiming this was impossible. I haven't seen it myself, but from what little I've read about it, its arguments were very weak.)
But his main claim about machines here isn't one that's been nicely falsified by later events. We have machines that do a very good job of evaluating positions and choosing moves, but he never claimed that that was impossible. We don't yet have machines that play chess in the very strong sense he's demanding, or even the weaker sense of using anything closely analogous to human visual perception to play. (I suppose you might say that programs using a "bitboard" representation are doing something a little along those lines, but somehow I doubt Taube would have been convinced.)
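(For readers who haven't met the term: a "bitboard" packs board occupancy into a 64-bit integer, one bit per square, so that questions like "is this file empty?" reduce to bit masking. A minimal Python sketch, with names of my own invention, is below; note how little it resembles visual perception, which is rather the point.)

```python
# Minimal bitboard sketch: occupancy as a 64-bit integer,
# one bit per square (a1 = bit 0, b1 = bit 1, ..., h8 = bit 63).

FILE_A = 0x0101010101010101  # bits 0, 8, 16, ..., 56: the a-file squares

def file_mask(file_index: int) -> int:
    """Bitmask covering all eight squares of a file (0 = a-file .. 7 = h-file)."""
    return FILE_A << file_index

def file_is_empty(occupancy: int, file_index: int) -> bool:
    """True when no piece stands anywhere on the given file."""
    return occupancy & file_mask(file_index) == 0

# Example: a board whose only pieces stand on e2 (square 12) and g7 (square 54).
occupancy = (1 << 12) | (1 << 54)
print(file_is_empty(occupancy, 0))  # True  - the a-file is empty
print(file_is_empty(occupancy, 4))  # False - the e-file holds the e2 piece
```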
... Also, Taube wasn't a scientist or a computer expert or a chess expert or even a philosopher. He was a librarian. A librarian is a fine thing to be, but it doesn't confer the kind of expertise that would make it surprising or even very interesting for Taube to have been wrong here.
Would a chess program qualify if it kept a table of all the lines on the board, tracked which of them are empty, and used that table as part of its move-choosing algorithm? If not, I think we're into qualia territory when it comes to making sense of how exactly a human is recognizing the emptiness of a line and that program isn't.
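To make the hypothetical concrete, here is a toy Python sketch of such a program fragment (the function names and the counting heuristic are mine, purely for illustration): it enumerates every rank, file, and diagonal, keeps a table of which are currently empty, and feeds that table into a trivial move-scoring ingredient.

```python
from typing import List, Set, Tuple

Square = Tuple[int, int]  # (file 0..7, rank 0..7)

def all_lines() -> List[List[Square]]:
    """Every rank, file, diagonal, and anti-diagonal with at least two squares."""
    lines: List[List[Square]] = []
    for r in range(8):                      # ranks
        lines.append([(f, r) for f in range(8)])
    for f in range(8):                      # files
        lines.append([(f, r) for r in range(8)])
    for d in range(-6, 7):                  # diagonals: f - r = d
        lines.append([(f, f - d) for f in range(8) if 0 <= f - d < 8])
    for d in range(1, 14):                  # anti-diagonals: f + r = d
        lines.append([(f, d - f) for f in range(8) if 0 <= d - f < 8])
    return lines

def empty_line_table(occupied: Set[Square]) -> List[bool]:
    """The table in question: one flag per line, True when no piece lies on it."""
    return [all(sq not in occupied for sq in line) for line in all_lines()]

def open_lines_after(target: Square, occupied: Set[Square]) -> int:
    """Toy move-choosing ingredient: how many lines stay empty if we occupy target."""
    return sum(empty_line_table(occupied | {target}))
```

Whether a table like this counts as "recognizing the emptiness of a line" is, of course, exactly the question.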