When I hear scientists talk about Thomas Kuhn, he sounds very reasonable. Scientists have theories that guide their work. Sometimes they run into things their theories can’t explain. Then some genius develops a new theory, and scientists are guided by that one. So the cycle repeats, knowledge gained with every step.

When I hear philosophers talk about Thomas Kuhn, he sounds like a madman. There is no such thing as ground-level truth! Only theory! No objective sense-data! Only theory! No basis for accepting or rejecting any theory over any other! Only theory! No scientists! Only theories, wearing lab coats and fake beards, hoping nobody will notice the charade!

I decided to read Kuhn’s The Structure Of Scientific Revolutions in order to understand this better. Having finished, I have come to a conclusion: yup, I can see why this book causes so much confusion.

At first Kuhn’s thesis appears simple, maybe even obvious. I found myself worrying at times that he was knocking down a straw man, although of course we have to read the history of philosophy backwards and remember that Kuhn may already be in the water supply, so to speak. He argues against a simplistic view of science in which it is merely the gradual accumulation of facts. So Aristotle discovered a few true facts, Galileo added a few more on, then Newton discovered a few more, and now we have very many facts indeed.

In this model, good science cannot disagree with other good science. You’re either wrong – as various pseudoscientists and failed scientists have been throughout history, positing false ideas like “the brain is only there to cool the blood” or “the sun orbits the earth”. Or you’re right, your ideas are enshrined in the Sacristy Of Settled Science, and your facts join the accumulated store that passes through the ages.

Simple-version-of-Kuhn says this isn’t true. Science isn’t just facts. It’s paradigms – whole ways of looking at the world. Without a paradigm, scientists wouldn’t know what facts to gather, how to collect them, or what to do with them once they had them. With a paradigm, scientists gather and process facts in the ways the paradigm suggests (“normal science”). Eventually, this process runs into a hitch – apparent contradictions, or things that don’t quite fit predictions, or just a giant ugly mess of epicycles. Some genius develops a new paradigm (“paradigm shift” or “scientific revolution”). Then the process begins again. Facts can be accumulated within a paradigm. And many of the facts accumulated in one paradigm can survive, with only slight translation effort, into a new paradigm. But scientific progress is the story of one relatively-successful and genuinely-scientific effort giving way to a different and contradictory relatively-successful and genuinely-scientific effort. It’s the story of scientists constantly tossing out one another’s work and beginning anew.

This gets awkward because paradigms look a lot like facts. The atomic theory – the current paradigm in a lot of chemistry – looks a lot like the fact “everything is made of atoms and molecules”. But this is only the iceberg’s tip. Once you have atomic theory, chemistry starts looking a lot different. Your first question when confronted with an unknown chemical is “what is the molecular structure?” and you have pretty good ideas for how to figure this out. You are not particularly interested in the surface appearance of chemicals, since you know that iron and silver can look alike but are totally different elements; you may be much more interested in the weight ratio at which two chemicals react (which might seem to the uninitiated like a pretty random and silly thing to care about). If confronted with a gas, you might ask things like “which gas is it?” as opposed to thinking all gases are the same thing, or wondering what it would even mean for two gases to be different. You can even think things like “this is a mixture of two different types of gas” without agonizing about how a perfectly uniform substance can be a mixture of anything. If someone asks you “How noble and close to God would you say this chemical sample is?” you can tell them that this is not really a legitimate chemical question, unless you mean “noble” in the sense of the noble gases. If someone tells you a certain chemical is toxic because toxicity is a fundamental property of its essence, you can tell them that no, it probably has to do with some reaction it causes or fails to cause with chemicals in the body. And if someone tells you that a certain chemical has changed into a different chemical because it got colder, you can tell them that cold might have done something to it, it might even have caused it to react with the air or something, but chemicals don’t change into other chemicals in a fundamental way just because of the temperature. None of these things are obvious. All of them are hard-won discoveries.

A field without paradigms looks like the STEM supremacist’s stereotype of philosophy. There are all kinds of different schools – Kantians, Aristotelians, Lockeans – who all disagree with each other. There may be progress within a school – some Aristotelian may come up with a really cool new Aristotelian way to look at bioethics, and all the other Aristotelians may agree that it’s great – but the field as a whole does not progress. People will talk past one another; the Aristotelian can go on all day about the telos of the embryo, but the utilitarian is just going to ask what the hell a telos is, why anyone would think embryos have one, and how many utils the embryo is bringing people. “Debates” between the Aristotelian and the utilitarian may not be literally impossible, but they are going to have to go all the way to first principles, in a way that never works. Kuhn interestingly dismisses these areas as “the fields where people write books” – if you want to say anything, you might as well address it to a popular audience for all the good other people’s pre-existing knowledge will do you, and you may have to spend hundreds of pages explaining your entire system from the ground up. He throws all the social sciences in this bin – you may read Freud, Skinner, and Beck instead of Aristotle, Locke, and Kant, but it’s the same situation.

A real science is one where everyone agrees on a single paradigm. Newtonianism and Einsteinianism are the same kind of things as Aristotelianism and utilitarianism; but in 1850, everybody believed the former, and in 1950, the latter.

I got confused by this – is Aristotelian philosophy a science? Would it be one if the Aristotelians forced every non-Aristotelian philosopher out of the academy, so that 100% of philosophers fell in line behind Aristotle? I think Kuhn’s answer to this is that it’s telling that Aristotelians haven’t been able to do this (at least not lately); either Aristotle’s theories are too weak, or philosophy too intractable. But all physicists unite behind Einstein in a way that all philosophers cannot behind Aristotle. Because of this, all physicists mean more or less the same thing when they talk about “space” and “time”, and they can work together on explaining these concepts without constantly arguing to each other about what they mean or whether they’re the right way to think about things at all (and a Newtonian and Einsteinian would not be able to do this with each other, any more than an Aristotelian and utilitarian).

So how does science settle on a single paradigm when other fields can’t? Is this the part where we admit it’s because science has objective truth so you can just settle questions with experiments?

This is very much not that part. Kuhn doesn’t think it’s anywhere near that simple, for a few reasons.

First, there is rarely a single experiment that one paradigm fails and another passes. Rather, there are dozens of experiments. One paradigm does better on some, the other paradigm does better on others, and everyone argues over which ones should or shouldn’t count.

For example, one might try to test the Copernican vs. Ptolemaic worldviews by observing the parallax of the fixed stars over the course of a year. Copernicus predicts it should be visible; Ptolemy predicts it shouldn’t be. It isn’t, which means either the Earth is fixed and unmoving, or the stars are unutterably unimaginably immensely impossibly far away. Nobody expected the stars to be that far away, so advantage Ptolemy. Meanwhile, the Copernicans posit far-off stars in order to save their paradigm. What looked like a test to select one paradigm or the other has turned into a wedge pushing the two paradigms even further apart.
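To get a feel for the magnitudes involved, here is a rough back-of-the-envelope sketch (my numbers, not Kuhn’s; I’m assuming pre-telescopic instruments could resolve roughly one arcminute, which is about what Tycho Brahe managed):

```python
import math

ARCSEC_PER_RADIAN = 206_265  # also the number of AU in one parsec

def annual_parallax_arcsec(distance_au):
    """Angle subtended by the Earth-Sun distance (1 AU) as seen from the star."""
    return math.atan(1 / distance_au) * ARCSEC_PER_RADIAN

NAKED_EYE_LIMIT = 60  # ~1 arcminute, a generous guess at Tycho-era precision

for d_au in (10, 1_000, 3_500, 270_000):  # ~Saturn, "far", threshold, ~Alpha Centauri
    p = annual_parallax_arcsec(d_au)
    verdict = "visible" if p > NAKED_EYE_LIMIT else "invisible"
    print(f"star at {d_au:>9,} AU: parallax {p:9.2f} arcsec ({verdict})")
```

On those assumptions, the stars only need to sit a few thousand AU away – hundreds of times farther than Saturn – for the parallax to disappear, and that is roughly the bullet the Copernicans had to bite; the real distances turned out to be hundreds of thousands of AU.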

What looks like a decisive victory to one side may look like random noise to another. Did you know weird technologically advanced artifacts are sometimes found encased in rocks that our current understanding of geology says are millions of years old? Creationists have no trouble explaining those – the rocks are much younger, and the artifacts were probably planted by nephilim. Evolutionists have no idea how to explain those, and default to things like “the artifacts are hoaxes” or “the miners were really careless and a screw slipped from their pocket into the rock vein while they were mining”. I’m an evolutionist and I agree the artifacts are probably hoaxes or mistakes, even when there is no particular evidence that they are. Meanwhile, creationists probably say that some fossil or other incompatible with creationism is a hoax or a mistake. But that means the “find something predicted by one paradigm but not the other, and then the failed theory comes crashing down” oversimplification doesn’t work. Find something predicted by one paradigm but not the other, and often the proponents of the disadvantaged paradigm can – and should – just shrug and say “whatever”.

In the 19th century, flat-earther Samuel Rowbotham performed a series of experiments to show the Earth could not be a globe. In the most famous, he placed several flags miles apart along a perfectly straight canal. Then he looked through a telescope held just above the water and was able to see all of them in a row, even though the furthest should have been hidden by the Earth’s curvature. Having done so, he concluded the Earth was flat, and the spherical-earth paradigm debunked. Alfred Wallace (more famous for pre-empting Darwin on evolution) took up the challenge – in the form of a wager from one of Rowbotham’s followers – and showed that the bending of light rays by atmospheric refraction explained Rowbotham’s result: light skimming along cool water can be bent downward at a rate comparable to the curvature of the Earth’s surface, which is why Wallace raised his sight line well above the water and saw the curvature plainly. Luckily for Wallace, refraction was already a known phenomenon; if not, it would have been the same kind of wedge-between-paradigms as the Copernicans having to change the distance to the fixed stars.
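For a sense of the numbers (mine, using a standard surveyor’s approximation rather than anything from Wallace’s actual report): the sketch below estimates how much of a distant marker the Earth’s curvature should hide from an eye held just above the water, and how much an assumed refraction coefficient claws back. Ordinary refraction only shrinks the effect; it takes the strong ray-bending you can get in the air layer right above a cool canal to flatten the view the way Rowbotham saw it.

```python
import math

EARTH_RADIUS_M = 6_371_000

def hidden_height_m(eye_height_m, distance_m, k=0.0):
    """Height of a distant object hidden below the apparent horizon.
    k is the refraction coefficient: 0 = no refraction, ~0.13 is a typical
    surveying value, and k near 1 means rays curve almost with the Earth's
    surface (possible in strongly stratified air just above cool water)."""
    r_eff = EARTH_RADIUS_M / (1 - k)               # refraction ~ a bigger effective Earth
    horizon = math.sqrt(2 * r_eff * eye_height_m)  # distance to the apparent horizon
    return max(distance_m - horizon, 0.0) ** 2 / (2 * r_eff)

eye = 0.2      # roughly 8 inches above the water, as in Rowbotham's setup
flag = 9_656   # a flag about 6 miles away
for k in (0.0, 0.13, 0.9):
    print(f"k={k:4.2f}: about {hidden_height_m(eye, flag, k):4.1f} m of the flag hidden")
```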

It is all well and good to say “Sure, it looks like your paradigm is right, but once we adjust for this new idea about the distance to the stars / the refraction of light, the evidence actually supports my paradigm”. But the supporters of old paradigms can do that too! The Ptolemaics are rightly mocked for adding epicycle after epicycle until their system gave the right result. But to a hostile observer, positing refraction effects that exactly counterbalance the curvature of the Earth sure looks like adding epicycles. At some point a new paradigm will win out, and its “epicycles” will look like perfectly reasonable adjustments for reality’s surprising amount of detail. And the old paradigm will lose, and its “epicycles” will look like obvious kludges to cover up that it never really worked. Before that happens…well, good luck.

Second, two paradigms may not even address or care about the same questions.

Let’s go back to utilitarianism vs. Aristotelianism. Many people associate utilitarianism with the trolley problem, which is indeed a good way to think about some of the issues involved. It might be tempting for a utilitarian to think of Aristotelian ethics as having some different answer to the trolley problem. Maybe it does, I don’t know. But Aristotle doesn’t talk about how he would solve whatever the 4th-century BC equivalent of the trolley problem was. He talks more about “what is the true meaning of justice?” and stuff like that. While you can twist Aristotle into having an opinion on trolleys, he’s not really optimizing for that. And while you can make utilitarianism have some idea what the true meaning of justice is, it’s not really optimized for that either.

An Aristotelian can say their paradigm is best, because it does a great job explicating all the little types and subtypes of justice. A utilitarian can say their paradigm is best, because it does a great job telling you how to act in various contrived moral dilemmas.

It’s actually even worse than this. The closest thing I can think of to an ancient Greek moral dilemma is the story of Antigone. Antigone’s uncle declares that her traitorous dead brother may not be buried with the proper rites. Antigone is torn between her duty to obey her uncle, and her desire to honor her dead brother. Utilitarianism is…not really designed for this sort of moral dilemma. Is ignoring her family squabbles and trying to cure typhus an option? No?

But then utilitarianism’s problems are deeper than just “comes to a different conclusion than ancient Greek morals would have”. The utilitarian’s job isn’t to change the ancient Greek’s mind about the answer to a certain problem. It’s to convince him to stop caring about basically all the problems he cares about, and care about different problems instead.

Third, two paradigms may disagree on what kind of answers are allowed, or what counts as solving a problem.

Kuhn talks about the 17th century “dormitive potency” discourse. Aristotle tended to explain phenomena by appealing to essences; trees grew because it was “in their nature” to grow. Descartes gets a bad rap for inventing dualism, but this is undeserved – what he was really doing was inventing the concept of “matter” as we understand it, a what-you-see-is-what-you-get kind of stuff with no hidden essences that responds mechanically to forces (and once you have this idea, you naturally need some other kind of stuff to be the mind). With Cartesian matter firmly in place, everyone made fun of Aristotle for thinking he had “solved” the “why do trees grow?” question by answering “because it is in their nature”, and this climaxed with the playwright Moliere portraying a buffoonish doctor who claimed to have discovered how opium put people to sleep – it was because it had a dormitive potency!

In Aristotle’s view of matter, saying “because it’s their essence” successfully answers questions like “why do trees grow?”. The Cartesian paradigm forbade this kind of answer, and so many previously “solved” problems like why trees grow became mysterious again – a step backwards, sort of. For Descartes, you were only allowed to answer questions if you could explain how purely-mechanical matter smashing against other purely-mechanical matter in a billiard-ball-like way could produce an effect; a more virtuous and Descartes-aware doctor explained opium’s properties by saying opium corpuscles must have a sandpaper-like shape that smooths the neurons!

Then Newton discovered gravity and caused an uproar. Gravity posits no corpuscles jostling other corpuscles. It sounds almost Aristotelian: “It is the nature of matter to attract other matter”. Newton was denounced as trying to smuggle occultism into science. How much do you discount a theory for having occult elements? If some conception of quantum theory predicts the data beautifully, but says matter behaves differently depending on whether someone’s watching it or not, is that okay? What if it says that a certain electron has a 50% chance of being in a certain place, full stop, and there is no conceivable explanation for which of the two possibilities is realized, and you’re not even allowed to ask the question? What if my explanation for dark matter is “invisible gremlins”? How do you figure out when you need to relax your assumptions about what counts as science, versus when somebody is just cheating?

A less dramatic example: Lavoisier’s theory of combustion boasts an ability to explain why some substances gain weight when burned; they are absorbing oxygen from the air. A brilliant example of an anomaly explained, which proves the superiority of combustion theory to other paradigms that cannot account for the phenomenon? No – “things shouldn’t randomly gain weight” comes to us as a principle of the chemical revolution of which Lavoisier was a part:

In the seventeenth century, [an explanation of weight gain] seemed unnecessary to most chemists. If chemical reactions could alter the volume, color, and texture of the ingredients, why should they not alter weight as well? Weight was not always taken to be the measure of quantity of matter. Besides, weight-gain on roasting remained an isolated phenomenon. Most natural bodies (eg wood) lose weight on roasting as the phlogiston theory was later to say they should.

In previous paradigms, weight gain wasn’t even an anomaly to be explained. It was just a perfectly okay thing that might happen. It’s only within the constellation of new methods and rules we learned around Lavoisier’s time that Lavoisier’s theories solved anything at all.

So how do scientists ever switch paradigms?

Kuhn thinks it’s kind of an ugly process. It starts with exasperation; the old paradigm is clearly inadequate. Progress is stagnating.

Awareness [of the inadequacy of geocentric astronomy] did come. By the thirteenth century Alfonso X could proclaim that if God had consulted him when creating the universe, he would have received good advice. In the sixteenth century, Copernicus’ coworker, Domenico da Novara, held that no system so cumbersome and inaccurate as the Ptolemaic had become could possibly be true of nature. And Copernicus himself wrote in the Preface to the De Revolutionibus that the astronomical tradition he inherited had finally created only a monster.

Then someone proposes a new paradigm. In its original form, it is woefully underspecified, bad at matching reality, and only beats the old paradigm in a few test cases. For whatever reason, a few people jump on board. Sometimes the new paradigm is simply more mathematically elegant, more beautiful. Other times it’s petty things, like a Frenchman invented the old paradigm and a German the new one, and you’re German. Sometimes it’s just that there’s nothing better. These people gradually expand the new paradigm to cover more and more cases. At some point, the new paradigm explains things a little better than the old paradigm. Some of its predictions are spookily good. The old paradigm is never conclusively debunked. But the new paradigm now has enough advantages that more and more people hop on the bandwagon. Gradually the old paradigm becomes a laughingstock, people forget the context in which it ever made sense, and it is remembered only as a bunch of jokes about dormitive potency.

But now that it’s been adopted and expanded and reached the zenith of its power, this is the point at which we can admit it’s objectively better, right?

For a better treatment of this question than I can give, see Samzdat’s Science Cannot Count To Red. But my impression is that Kuhn is not really willing to say this. I think he is of the “all models are wrong, some are useful” camp, thinks of paradigms as models, and would be willing to admit a new paradigm may be more useful than an old one.

Can we separate the fact around which a paradigm is based (like “the Earth orbits the sun”) from the paradigm itself (being a collection of definitions of eg “planet” and “orbit”, ways of thinking, mathematical methods, and rules for what kind of science will and won’t be accepted)? And then say the earth factually orbits the sun, and the paradigm is just a useful tool that shouldn’t be judged objectively? I think Kuhn’s answer is that facts cannot be paradigm-independent. A medieval would not hear “the Earth orbits the sun” and hear the same claim we hear (albeit, in his view wrong). He would, for example, interpret it to mean the Earth was set in a slowly-turning crystal sphere with the sun at its center. Then he might ask – where does the sphere intersect the Earth? How come we can’t see it? Is Marco Polo going to try to travel to China and then hit a huge invisible wall halfway across the Himalayas? And what about gravity? My understanding is the Ptolemaics didn’t believe in gravity as we understand it at all. They believed objects had a natural tendency to seek the center of the universe. So if the sun is more central, why isn’t everything falling into the sun? To a medieval the statement “the Earth orbits the sun” has a bunch of common-sense disproofs everywhere you look. It’s only when attached to the rest of the Copernican paradigm that it starts to make sense.

This impresses me less than it impresses Kuhn. I would say “if you have many false beliefs, then true statements may be confusing in that they seem to imply false statements – but true statements are still objectively true”. Perhaps I am misunderstanding Kuhn’s argument here; the above is an amalgam of various things and not something Kuhn says outright in the book. But whatever his argument, Kuhn is not really willing to say that there are definite paradigm-independent objective facts, at least not without a lot of caveats.

So where is the point at which we admit some things are objectively true and that’s what this whole enterprise rests on?

Kuhn only barely touches on this, on the last page of the book:

Anyone who has followed the argument this far will nevertheless feel the need to ask why the evolutionary process should work. What must nature, including man, be like in order that science be possible at all? Why should scientific communities be able to reach a firm consensus unattainable in other fields? Why should consensus endure across one paradigm change after another? And why should paradigm change invariably produce an instrument more perfect in any sense than those known before? From one point of view those questions, excepting the first, have already been answered. But from another they are as open as they were when this essay began. It is not only the scientific community that must be special. The world of which that community is a part must also possess quite special characteristics, and we are no closer than we were at the start to knowing what these must be. That problem— What must the world be like in order that man may know it?— was not, however, created by this essay. On the contrary, it is as old as science itself, and it remains unanswered. But it need not be answered in this place.

At this point I lose patience. Kuhn is no longer being thought-provoking, he’s being disingenuous. IT’S BECAUSE THERE’S AN OBJECTIVE REALITY, TOM. YOU DON’T HAVE TO BE SO COY ABOUT IT. “OHHHHH, WHAT COULD POSSIBLY EXPLAIN WHY SCIENCE BEHAVES THE WAY IT WOULD IF OBJECTIVE REALITY EXISTS, NOBODY WILL EVER KNOW, LET’S JUST NEVER ANSWER IT”. Get a life.

Honestly this decreases my trust in some of what’s come before. Maybe he wrote all those sections about incommensurable paradigms because paradigms really are that incommensurable. Or maybe it’s because he thinks he’s playing some kind of ridiculous game where the first person to admit the existence of objective reality loses.

II.

A lot of the examples above are mine, not Kuhn’s. Some of them even come from philosophy or other nonscientific fields. Shouldn’t I have used the book’s own examples?

Yes. But one of my big complaints about this book is that, for a purported description of How Science Everywhere Is Always Practiced, it really just gives five examples. Ptolemy/Copernicus on astronomy. Alchemy/Dalton on chemistry. Phlogiston/Lavoisier on combustion. Aristotle/Galileo/Newton/Einstein on motion. And ???/Franklin/Coulomb on electricity.

It doesn’t explain any of the examples. If you don’t already know what Coulomb’s contribution to electricity is and what previous ideas he overturned, you’re out of luck. And don’t try looking it up in a book either. Kuhn says that all the books have been written by people so engrossed in the current paradigm that they unconsciously jam past scientists into it, removing all evidence of paradigm shift. This made parts of the book a little beyond my level, since my knowledge of Coulomb begins and ends with “one amp flowing for one second”.

Even saying Kuhn has five examples is giving him too much credit. He usually brings in one of his five per point he’s trying to make, meaning that you never get a really full view of how any of the five examples exactly fits into his system.

And all five examples are from physics. Kuhn says at the beginning that he wished he had time to talk about how his system fits biology, but he doesn’t. He’s unsure whether any of the social sciences are sciences at all, and nothing else even gets mentioned. This means we have to figure out how Kuhn’s theory fits everything from scattershot looks at the history of electricity and astronomy and a few other things. This is pretty hard. For example, consider three scientific papers I’ve looked at on this blog recently:

Cipriani, Ioannidis, et al perform a meta-analysis of antidepressant effect sizes and find that although almost all of them seem to work, amitriptyline works best.

Ceballos, Ehrlich, et al calculate whether more species have become extinct recently than would be expected based on historical background rates; after finding almost 500 extinctions since 1900, they conclude they definitely have.

Terrell et al examine contributions to open source projects and find that men are more likely to be accepted than women when adjusted for some measure of competence they believe is appropriate, suggesting a gender bias.

What paradigm is each of these working from?

You could argue that the antidepressant study is working off of the “biological psychiatry” paradigm, a venerable collection of assumptions that can be profitably contrasted with other paradigms like psychoanalysis. But couldn’t a Hippocratic four-humors physician of a thousand years ago have done the same thing? A meta-analysis of the effect sizes of various kinds of leeches for depression? Sure, leeches are different from antidepressants, but it doesn’t look like the belief in biological psychiatry is affecting anything about the research other than the topic. And although the topic is certainly important, Kuhn led me to expect something more profound than that. Maybe the paradigm is evidence-based-medicine itself, the practice of doing RCTs and meta-analyses on things? I think this is a stronger case, but a paradigm completely divorced from the content of what it’s studying is exactly the sort of weird thing that makes me wish Kuhn had included more than five examples.

As for the extinction paper, surely it can be attributed to some chain of thought starting with Cuvier’s catastrophism, passing through Lyell, and continuing on to the current day, based on the idea that the world has changed dramatically over its history and new species can arise and old ones disappear. But is that “the” paradigm of biology, or ecology, or whatever field Ceballos and Ehrlich are working in? Doesn’t it also depend on the idea of species, a different paradigm starting with Linnaeus and developed by zoologists over the ensuing centuries? It looks like it dips into a bunch of different paradigms, but is not wholly within any.

And the open source paper? Is “feminism” a paradigm? But surely this is no different than what would be done to investigate racist biases in open source. Or some right-winger looking for anti-Christian biases in open source. Is the paradigm just “looking for biases in things?”

What about my favorite trivial example, looking both ways when you cross the street so you don’t get hit by a bus? Is it based on a paradigm of motorized transportation? Does it use assumptions like “buses exist” and “roads are there to be crossed”? Was there a paradigm shift between the bad old days of looking one way before crossing, and the exciting new development of looking both ways before crossing? Is this really that much more of a stretch than calling looking for biases in things a paradigm?

Outside the five examples Kuhn gives from the physical sciences, identifying paradigms seems pretty hard – or maybe too easy. Is it all fractal? Are there overarching paradigms like atomic theory, and then lower-level paradigms like organic chemistry, and then tiny subsubparadigms like “how we deal with this one organic compound”? Does every scientific experiment use lots of different paradigms from different traditions and different levels? This is the kind of thing I wish Kuhn’s book answered instead of just talking about Coulomb and Copernicus over and over again.

III.

In conclusion, all of this is about predictive coding.

It’s the same thing. Perception getting guided equally by top-down expectations and bottom-up evidence. Oh, I know what you’re thinking. “There goes Scott again, seeing predictive coding in everything”. And yes. But also, Kuhn does everything short of come out and say “When you guys get around to inventing predictive coding, make sure to notice that’s what I was getting at this whole time.”
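For anyone missing the context: the core move in predictive coding is a precision-weighted compromise between a top-down prediction and bottom-up evidence. Here is a minimal toy sketch (mine, not Kuhn’s and not a model from the predictive-coding literature) of how a confident enough prior can swallow an anomalous observation whole – which is exactly what the playing-card subjects quoted below are doing:

```python
def perceive(prior_mean, prior_precision, obs, obs_precision):
    """Fuse a top-down prediction with bottom-up evidence, weighted by precision.
    The percept moves toward the observation only as far as the evidence's
    precision earns, relative to the prior's."""
    gain = obs_precision / (prior_precision + obs_precision)
    return prior_mean + gain * (obs - prior_mean)

# Encode "this card is an ordinary black spade" as 0 and "it's red" as 1.
confident_prior = dict(prior_mean=0.0, prior_precision=50.0)

# A brief, blurry exposure (imprecise evidence): the anomaly barely registers.
print(perceive(obs=1.0, obs_precision=1.0, **confident_prior))    # ~0.02

# A long, clear exposure (precise evidence): the percept finally flips.
print(perceive(obs=1.0, obs_precision=500.0, **confident_prior))  # ~0.91
```

Read through Kuhn’s lens: normal science is the high-precision prior, anomalies are prediction errors, and a paradigm shift is what happens when the errors get too precise and persistent to explain away.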

Don’t believe me? From the chapter Anomaly And The Emergence Of Scientific Discovery (for “anomaly”, read “surprisal”):

The characteristics common to the three examples above are characteristic of all discoveries from which new sorts of phenomena emerge. Those characteristics include: the previous awareness of anomaly, the gradual and simultaneous emergence of both observational and conceptual recognition, and the consequent change of paradigm categories and procedures often accompanied by resistance.

There is even evidence that these same characteristics are built into the nature of the perceptual process itself. In a psychological experiment that deserves to be far better known outside the trade, Bruner and Postman asked experimental subjects to identify on short and controlled exposure a series of playing cards. Many of the cards were normal, but some were made anomalous, e.g., a red six of spades and a black four of hearts. Each experimental run was constituted by the display of a single card to a single subject in a series of gradually increased exposures. After each exposure the subject was asked what he had seen, and the run was terminated by two successive correct identifications.

Even on the shortest exposures many subjects identified most of the cards, and after a small increase all the subjects identified them all. For the normal cards these identifications were usually correct, but the anomalous cards were almost always identified, without apparent hesitation or puzzlement, as normal. The black four of hearts might, for example, be identified as the four of either spades or hearts. Without any awareness of trouble, it was immediately fitted to one of the conceptual categories prepared by prior experience. One would not even like to say that the subjects had seen something different from what they identified. With a further increase of exposure to the anomalous cards, subjects did begin to hesitate and to display awareness of anomaly. Exposed, for example, to the red six of spades, some would say: That’s the six of spades, but there’s something wrong with it— the black has a red border. Further increase of exposure resulted in still more hesitation and confusion until finally, and sometimes quite suddenly, most subjects would produce the correct identification without hesitation. Moreover, after doing this with two or three of the anomalous cards, they would have little further difficulty with the others. A few subjects, however, were never able to make the requisite adjustment of their categories. Even at forty times the average exposure required to recognize normal cards for what they were, more than 10 per cent of the anomalous cards were not correctly identified. And the subjects who then failed often experienced acute personal distress. One of them exclaimed: “I can’t make the suit out, whatever it is. It didn’t even look like a card that time. I don’t know what color it is now or whether it’s a spade or a heart. I’m not even sure now what a spade looks like. My God!” In the next section we shall occasionally see scientists behaving this way too.

Either as a metaphor or because it reflects the nature of the mind, that psychological experiment provides a wonderfully simple and cogent schema for the process of scientific discovery.

And from Revolutions As Changes Of World-View:

Surveying the rich experimental literature from which these examples are drawn makes one suspect that something like a paradigm is prerequisite to perception itself. What a man sees depends both upon what he looks at and also upon what his previous visual-conceptual experience has taught him to see. In the absence of such training there can only be, in William James’s phrase, “a bloomin’ buzzin’ confusion.” In recent years several of those concerned with the history of science have found the sorts of experiments described above immensely suggestive.

If you can read those paragraphs and honestly still think I’m just irrationally reading predictive coding into a perfectly innocent book, I have nothing to say to you.

I think this is my best answer to the whole “is Kuhn denying an objective reality” issue. If Kuhn and the predictive coding people are grasping at the same thing from different angles, then both shed some light on each other. I think I understand the way that predictive coding balances the importance of pre-existing structures and categories with a preserved belief in objectivity. If Kuhn is trying to use something like the predictive coding model of the brain processing information to understand the way the scientific community as a whole processes it, then maybe we can import the same balance and not worry about it as much.


This is a nice summary of Kuhn's ideas from his SSR (with some really great examples). Your main question (where in all this is objectivity, and how do we get rid of relativism?) puzzled both Kuhn and the post-Kuhnian philosophers of science. In his later work (The Road Since Structure) Kuhn tried to answer these questions in more detail, leaning towards a Kantian interpretation of the world (roughly: even though we do not have access to the world as such, the world does give a "resistance" to our attempts at forming knowledge about it, which is why not anything goes; a good guide for this is Paul Hoyningen-Huene's excellent book on Kuhn "Reconstructing scientific revolutions: Thomas S. Kuhn's philosophy of science").

I don't know enough about predictive coding to comment on that comparison, but here are two comments on some of the above issues:

1) While the shift from one paradigm to another often appears to be a matter of "mob psychology" (as Lakatos put it), Kuhn actually discusses elsewhere the process of 'persuasion' and 'translation' that the proponents of rivaling paradigms can employ. Even though scientists may belong to mutually incommensurable paradigms, the 'communication breakdown' can be avoided via these processes (for more on this, see his later work).

2) Concerning the objectivity of the world, the reason why this issue is not so simple for Kuhn is that he rejects the idea of the "mind-independent world". This point is often misunderstood and either ignored or placed under Kuhn's 'obscure ideas' mainly because in his attempts to explicate it, Kuhn gets very close to the so-called continental philosophical style, which sometimes irks the shit out of analytically-minded philosophers ;) The following passage from a discussion on Kuhn may not make things much clearer without an additional context, but it points to the relevant parts of Kuhn's work on this and it hopefully shows why Kuhn doesn't accept a simple dichotomy between the mind-dependent and mind-independent world (bold emphasis added):

[According to Kuhn]
... truth cannot be anything like correspondence to reality. I am not suggesting, let me emphasize, that there is a reality which science fails to get at. My point is rather that no sense can be made of the notion of reality as it has ordinarily functioned in philosophy of science. (Kuhn, 2000, p. 115)

Kuhn thus argues not only that the match between the mind and the reality that is independent from it is not assessable, but that this match is nonsensical.
But the natural sciences, dealing objectively with the real world (as they do), are generally held to be immune. Their truths (and falsities) are thought to transcend the ravages of temporal, cultural, and linguistic change. I am suggesting, of course, that they cannot do so. Neither the descriptive nor the theoretical language of natural science provides the bedrock such transcendence would require. (Ibid., p. 75)
The reasons for these claims need to be explicated in view of Kuhn’s discussion of the notion of world. First of all, Kuhn emphasizes the world-constitutive role of intentionality and mental representations (p. 103), of a lexicon that is always already in place (p. 86):
different languages impose different structures on the world . . . where the structure is different, the world is different. (Ibid., p. 52)
The world itself must be somehow lexicon-dependent. (Ibid., p. 77)
What is thus at stake is the notion of a mind-independent, or, in Putnam’s terms, ‘ready-made’ world. And for the reasons given above, this term is, for Kuhn, nonsensical. Nevertheless, he warns his readers that this does not imply that the world is somehow mind-dependent: ‘the metaphor of a mind-dependent world—like its cousin, the constructed or invented world—proves to be deeply misleading’ (ibid., p. 103).
How should the notion of world be treated then? Instead of the strict dichotomy between the mind-independent world and our representations of it, Kuhn proposes ‘a sort of post-Darwinian Kantianism. Like the Kantian categories, the lexicon supplies pre-conditions of possible experience’ (ibid., p. 104). And as the lexical categories change, both in a diachronous and a synchronous manner, ‘the world . . . alters with time and from one community to the next’ (ibid., p. 102). Kuhn compares a permanent, fixed, and stable foundation ‘underlying all these processes of differentiation and change’ to ‘Kant’s Ding an sich’, which ‘is ineffable, undescribable, undiscussable’ (ibid., p. 104). And what replaces the dichotomy of mind–language–thinking and the one big mind-independent world (ibid., p. 120) is the concept of ‘niche’: ‘the world is our representation of our niche’ (ibid., p. 103).
Those niches, which both create and are created by the conceptual and instrumental tools with which their inhabitants practice upon them, are as solid, real, resistant to arbitrary change as the external world was once said to be. (Ibid., p. 120)
Now, what has become of the notion of truth in Kuhn’s post-Darwinian Kantianism? Truth can at best be seen as having ‘only intra-theoretic applications’ (Kuhn, 1970, p. 266): ‘Evaluation of a statement’s truth values is, in short, an activity that can be conducted only with a lexicon already in place’ (Kuhn, 2000, p. 77). By contrast, ‘The ways of being-in-the-world which a lexicon provides are not candidates for true/false’ (ibid., p. 104). None of these ‘form[s] of life’, ‘practice[s]-in-the-world’ gives ‘privileged access to a real, as against an invented, world’ (ibid., pp. 103–104). Therefore the speech of theories becoming truer ‘has a vaguely ungrammatical ring: it is hard to know quite what those who use it have in mind’ (ibid., p. 115). Furthermore, if with Kuhn the sciences form a ‘complex but unsystematic structure of distinct specialties or species’ and therefore have to be ‘viewed as plural’ (ibid., p. 119), and if the niches ‘do not sum to a single coherent whole of which we and the practitioners of all the individual scientific specialties are inhabitants’ (ibid., p. 120), then ‘there is no basis for talk of science’s gradual elimination of all worlds excepting the single real one’ (ibid., p. 86).

This sums up some parts of the late Kuhn's thoughts on the growth of knowledge and its non-additive character. Now, one can ask: but what does this practically mean? What kind of methodological guidelines does this give us? And this is where things are perhaps not so surprising (or disturbing). I think the most important points here are:

1) a complex defeasible character of scientific models and theories (complex in the sense that falsifying a theory may not be a matter of deciding in view of one or two experiments, as discussed in the article; instead Kuhn speaks of the importance of 'epistemic values', such as scope, adequacy, simplicity, consistency, fruitfulness -- which guide scientists to prefer one theory over another, and which at the end of the day lead the community to replace one paradigm with another; this is closely related to the next point);

2) instead of assessing the truthfulness of scientific knowledge, post-Kuhnian philosophers of science prefer to speak of the assessment of their performance in terms of epistemic (or as sometimes called 'cognitive') values, based on empirical evidence (in other words, scientists are considered as accepting a theory not because it is 'true' or 'truth-like' but because it scores highly with respect to its predictive accuracy, explanatory scope, etc.);

3) the presence of conceptual frameworks underlying scientific theories, which complicate their unification and integration (and which have inspired a whole range of accounts proposing 'scientific pluralism'), and which may also give rise to rational disagreements in science, make the learning and communication across paradigms cumbersome, etc.

Crossposted from SSC comments section

That problem— What must the world be like in order that man may know it?— was not, however, created by this essay. On the contrary, it is as old as science itself, and it remains unanswered. But it need not be answered in this place.

At this point I lose patience. Kuhn is no longer being thought-provoking, he’s being disingenuous. IT’S BECAUSE THERE’S AN OBJECTIVE REALITY, TOM.

I haven’t read Kuhn and I don’t know whether I’m interpreting em correctly, but to me it seems not that simple at all.

Saying there is an objective reality doesn’t explain why this reality is comprehensible. In statistical learning theory there are various analyses of what mathematical conditions must hold for it to be possible to learn a model from observations (i.e. so that you can avoid the no-free-lunch theorems) and how difficult it is to learn it, and when you add computational complexity considerations into the mix it becomes even more complicated. Our understanding of these questions is far from complete.
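For concreteness, one standard result of the kind the commenter is gesturing at (my illustration, the realizable PAC bound in terms of VC dimension): a hypothesis class is learnable from finitely many examples only if it is restricted enough to have finite VC dimension $d_{\mathrm{VC}}$, in which case roughly

$$ m(\epsilon, \delta) \;=\; O\!\left(\frac{d_{\mathrm{VC}}\,\log(1/\epsilon) + \log(1/\delta)}{\epsilon}\right) $$

examples suffice to get error below $\epsilon$ with probability $1-\delta$. An unrestricted class has infinite VC dimension and no finite sample will do, which is the no-free-lunch point: objective reality by itself doesn't guarantee that the world's regularities fall into a class simple enough for creatures like us to learn.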

In particular, our ability to understand physics seems to rely on the hierarchical nature of physical phenomena. You can discover classical mechanics without knowing anything about molecules or quantum physics, you can discover atomic and molecular physics while knowing little about nuclear physics, and you can discover nuclear and particle physics without understanding quantum gravity (i.e. what happens to spacetime on the Planck scale). If the universe was s.t. it is impossible to compute the trajectory of a tennis ball without string theory, we might have never discovered any physics.

From looking at Conway's Game of Life, my intuition is that if a universe can support non-ontologically-fundamental Turing machines (I'm invoking anthropic reasoning), then it's likely to have phenomena analyzable at multiple hierarchical levels (beyond the looser requirement of being simple/compressible).

Basically, if a universe allows any reductionistic understanding at all (that's what I mean by calling the Turing Machine "non-ontologically-fundamental"), then the reductionist structure is probably a multi-layered one. Either zero reduction layers or lots, but not exactly one layer.
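A toy version of that claim (my sketch, not the commenter's): in Conway's Game of Life, a glider obeys a clean higher-level law – it shifts one cell diagonally every four steps – that you can state and check without reasoning about the update rule cell by cell. The low-level rule is run below only to confirm the high-level description.

```python
from collections import Counter

def step(cells):
    """One Game of Life generation; `cells` is a set of live (x, y) coordinates."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for x, y in cells
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {c for c, n in neighbor_counts.items() if n == 3 or (n == 2 and c in cells)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}

# Low-level description: push the raw cells through the update rule four times.
state = glider
for _ in range(4):
    state = step(state)

# High-level law: "a glider translates by (+1, +1) every four generations."
assert state == {(x + 1, y + 1) for x, y in glider}
print("higher-level glider law confirmed")
```

The same rule set also supports guns, logic gates, and full Turing machines, each describable at its own level – which is the multi-layer reductionist structure the comment is pointing at.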

While re-reading things for the 2019 Review, I noticed this is followed up in johnswentworth's more recent self-review on the Evolution of Modularity:

The material here is one seed of a worldview which I've updated toward a lot more over the past year. Some other posts which involve the theme include Science in a High Dimensional World, What is Abstraction?, Alignment by Default, and the companion post to this one Book Review: Design Principles of Biological Circuits.

Two ideas unify all of these:

  1. Our universe has a simplifying structure: it abstracts well, implying a particular kind of modularity.
  2. Goal-oriented systems in our universe tend to evolve a modular structure which reflects the structure of the universe.

One major corollary of these two ideas is that goal-oriented systems will tend to evolve similar modular structures, reflecting the relevant parts of their environment. Systems to which this applies include organisms, machine learning algorithms, and the learning performed by the human brain. In particular, this suggests that biological systems and trained deep learning systems are likely to have modular, human-interpretable internal structure. (At least, interpretable by humans familiar with the environment in which the organism/ML system evolved.)

This post talks about some of the evidence behind this model: biological systems are indeed quite modular, and simulated evolution experiments find that circuits do indeed evolve modular structure reflecting the modular structure of environmental variations. The companion post reviews the rest of the book, which makes the case that the internals of biological systems are indeed quite interpretable.

On the deep learning side, researchers also find considerable modularity in trained neural nets, and direct examination of internal structures reveals plenty of human-recognizable features.

Going forward, this view is in need of a more formal and general model, ideally one which would let us empirically test key predictions - e.g. check the extent to which different systems learn similar features, or whether learned features in neural nets satisfy the expected abstraction conditions, as well as tell us how to look for environment-reflecting structures in evolved/trained systems.

If the universe was s.t. it is impossible to compute the trajectory of a tennis ball without string theory, we might have never discovered any physics.

Makes one wonder: what things have we not discovered because they are built that way?

I believe the term "chaotic" refers to those things. E.g. an airplane's lift can be explained with higher-level-than-particle-modeling physical principles, but for a 1-month weather forecast no higher-level shortcut works – tiny details of the initial conditions matter too much.
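A hedged illustration of what "chaotic" means here (mine, using the textbook logistic map rather than real weather physics): even a one-line deterministic rule can make long-range forecasting hopeless, because two initial states that agree to one part in a billion disagree completely after a few dozen steps.

```python
def logistic(x, r=4.0):
    """One step of the logistic map, a standard toy model of chaos."""
    return r * x * (1 - x)

a, b = 0.2, 0.2 + 1e-9   # initial conditions differing by one part in a billion
for n in range(51):
    if n % 10 == 0:
        print(f"step {n:2d}: a={a:.6f}  b={b:.6f}  |a-b|={abs(a - b):.1e}")
    a, b = logistic(a), logistic(b)
```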

The "What paradigm is each of these working from?" section seems like an interesting meta example of the thing Kuhn described, where you keep adding epicycles to Kuhn's "paradigm" paradigm.

This makes for an interesting companion read. I'm inclined to see the problem Kuhn identified as "functional fixedness, but for theories". Maybe because humans are cognitive misers, we prefer to re-use our existing mental representations, or make incremental improvements to them, rather than rework things from scratch. Rather than "there is no objective reality", I'd rather say "there is no lossless compression of reality (at least not one that's gonna fit in your brain)". Rather than "there is no truth", I'd rather say "our lossy mental representations sometimes generate wrong questions".

David Chapman recommends reading Kuhn's postscript to the second edition, responding to criticism in the first 7 years of the book's publication. Here's a PDF of the book, with the postscript starting at page 174. I'll add further commentary once I read it.

This post makes a much stronger case for the anti-Bayes side in Eliezer's "Science Doesn't Trust Your Rationality".

In my mind I definitely hold romantic notions of pure-and-perfect scientists, whose work makes crisp predictions we can all see and that changes everyone's minds. Yet it appears that Kuhn paints a far messier picture of science, to the extent that at no point does anyone, really, know what they're doing, and that their high-level models shouldn't be trusted to reach sane conclusions there (nor should I trust myself).

I update significantly toward the not-trusting-myself position reading this. I update downward on us getting AGI right the first time (i.e. building a paradigm that will produce aligned AGI that we trust in and that we're well-calibrated about that trust). I also increase my desire to study the history of science, math, philosophy, and knowledge.

Remember that Eliezer's version of Science vs Bayes is itself a paradigm. IMO it meshes imperfectly with Kuhn's ideas as Scott presents them in this post.

This post does a (probably) decent job of summarizing Kuhn's concept of paradigm shifts. I find paradigm shifts a useful way of thinking about the aggregation of evidence in complex domains.

The notion of paradigm shifts has felt pretty key to how I think about intellectual progress (which in turn means "how do I think about lesswrong?"). A lot of my thinking about this comes from listening to talks by Geoff Anders (huh, I just realized I was literally at a retreat organized by an org called Paradigm at the time, which was certainly not coincidence). 

In particular, I apply the paradigm-theory towards how to think about AI Alignment and Rationality progress, both of which are some manner of "pre-paradigmatic."

I think this post is a good writeup of the concepts. I found it pretty helpful both for thinking about how paradigms might form, with lots of examples (apparently no thanks to Kuhn?), and for thinking about the limits of the paradigm-paradigm.

I'm curious if there are other existing summaries that I could contrast this with.

Predictive Coding

One weirdness I observe while reading this is, 3/4 through the post, Scott suddenly brings up predictive coding. And I... think I'm supposed to have some context on that? (“There goes Scott again, seeing predictive coding in everything”.) I assume this was a hot topic on Slate Star Codex in 2019. He explains it well enough that I only feel lost for a minute, but it's pretty jarring. If this post ends up being a longterm reference for paradigm-thinking, it might make sense to edit a bit.

Book Reviews and the LW Review

I don't actually think "Is this a good fit for the Best Of Book?" is quite the question I want most people asking in these reviews. I'd rather reviewers just convey information about the post. But, I think it's worth thinking about it sometimes from a broader view of "okay, what is the Best Of Book supposed to do? What should the LW Review be doing if we're maximizing intellectual progress flow-through?"

I think book reviews are totally fine to include. I acknowledge it's a bit weird, but if a post is useful to read it's useful to read. This one in particular seems to do a lot of good interpretive/distillation effort.

But one thing that strikes me this year is thinking "hmm, there are actually a lot of book reviews up for nomination. And I could imagine that maybe if a number of them all got voted favorably, we might want to put together a special 'book review book' that clustered them together."

Followup work

This post largely made me want to see a ton more work exploring lots of scientific paradigm evolutions, and see how well this model actually matched them. I feel like surely someone must have done this already (outside of LW). If not, I think that'd be a great contribution to human knowledge/theory. If someone DID already do this, I think writing a summary post connecting it to other LW topics would still be valuable.

I just think it's really important to engage with thinkers like Kuhn who challenge our core assumptions and this post engaged with him really well.

I think Kuhn’s answer is that facts cannot be paradigm-independent.

I liked this take on it, from Steven Horst’s Cognitive Pluralism, where he’s quoting some of Kuhn’s later writing (the italics are Horst quoting Kuhn):

A historian reading an out-of-date scientific text characteristically encounters passages that make no sense. That is an experience I have had repeatedly whether my subject is an Aristotle, a Newton, a Volta, a Bohr, or a Planck. It has been standard to ignore such passages or to dismiss them as products of error, ignorance, or superstition, and that response is occasionally appropriate. More often, however, sympathetic contemplation of the troublesome passages suggests a different diagnosis. The apparent textual anomalies are artifacts, products of misreading.
For lack of an alternative, the historian has been understanding words and phrases in the text as he or she would if they had occurred in contemporary discourse. Through much of the text that way of reading proceeds without difficulty; most terms in the historian’s vocabulary are still used as they were by the author of the text. But some sets of interrelated terms are not, and it is [the] failure to isolate those terms and to discover how they were used that has permitted the passages in question to seem anomalous. Apparent anomaly is thus ordinarily evidence of the need for local adjustment of the lexicon, and it often provides clues to the nature of that adjustment as well. An important clue to problems in reading Aristotle’s physics is provided by the discovery that the term translated ‘motion’ in his text refers not simply to change of position but to all changes characterized by two end points. Similar difficulties in reading Planck’s early papers begin to dissolve with the discovery that, for Planck before 1907, ‘the energy element hv’ referred, not to a physically indivisible atom of energy (later to be called ‘the energy quantum’) but to a mental subdivision of the energy continuum, any point on which could be physically occupied.
These examples all turn out to involve more than mere changes in the use of terms, thus illustrating what I had in mind years ago when speaking of the “incommensurability” of successive scientific theories. In its original mathematical use ‘incommensurability’ meant “no common measure,” for example of the hypotenuse and side of an isosceles right triangle. Applied to a pair of theories in the same historical line, the term meant that there was no common language into which both could be fully translated. (Kuhn 1989/2000, 9–10)
While scientific theories employ terms used more generally in ordinary language, and the same term may appear in multiple theories, key theoretical terminology is proprietary to the theory and cannot be understood apart from it. To learn a new theory, one must master the terminology as a whole: “Many of the referring terms of at least scientific languages cannot be acquired or defined one at a time but must instead be learned in clusters” (Kuhn 1983/2000, 211). And as the meanings of the terms and the connections between them differ from theory to theory, a statement from one theory may literally be nonsensical in the framework of another. The Newtonian notions of absolute space and of mass that is independent of velocity, for example, are nonsensical within the context of relativistic mechanics. The different theoretical vocabularies are also tied to different theoretical taxonomies of objects. Ptolemy’s theory classified the sun as a planet, defined as something that orbits the Earth, whereas Copernicus’s theory classified the sun as a star and planets as things that orbit stars, hence making the Earth a planet. Moreover, not only does the classificatory vocabulary of a theory come as an ensemble—with different elements in nonoverlapping contrast classes—but it is also interdefined with the laws of the theory. The tight constitutive interconnections within scientific theories between terms and other terms, and between terms and laws, have the important consequence that any change in terms or laws ramifies to constitute changes in meanings of terms and the law or laws involved with the theory [...]
While Kuhn’s initial interest was in revolutionary changes in theories about what is in a broader sense a single phenomenon (e.g., changes in theories of gravitation, thermodynamics, or astronomy), he later came to realize that similar considerations could be applied to differences in uses of theoretical terms between contemporary subdisciplines in a science (1983/2000, 238). And while he continued to favor a linguistic analogy for talking about conceptual change and incommensurability, he moved from speaking about moving between theories as “translation” to a “bilingualism” that afforded multiple resources for understanding the world—a change that is particularly important when considering differences in terms as used in different subdisciplines.

Interesting analogy here.

When I hear scientists talk about Thomas Kuhn, he sounds very reasonable. [...] When I hear philosophers talk about Thomas Kuhn, he sounds like a madman.

Yes, this! I remember being extremely confused by the discourse around Kuhn. I'm not sure whether for me the impression was split into scientists vs. non-scientists, but I definitely felt there was something weird about it, that there were two sides to it: one that sounded potentially reasonable, and one that sounded clearly like relativism.

When taking a course on the book, I concluded that both perspectives were appropriate. One thing that went too far into relativism was Kuhn's insistence that there is no way to tell in advance which paradigm is going to be successful. His description of this is that you pick "teams" initially for all kinds of not-truth-tracking reasons, and you only figure out many years later whether your new paradigm will be winning or not.

But I'm not sure Kuhn (at least in The Structure of Scientific Revolutions) was even explicitly saying "No, you cannot do better than chance at picking sides." Rather, the weird thing is that I remember feeling like he was not explicitly asking that question, that he was just brushing it under the carpet. Likewise, the lecturer of the course, a Kuhn expert, seemed to only be asking the question "How does (human-)science proceed?", and never "How should science proceed?"

One thing that went too far into relativism was Kuhn's insistence that there is no way to tell in advance which paradigm is going to be successful. His description of this is that you pick "teams" initially for all kinds of not-truth-tracking reasons, and you only figure out many years later whether your new paradigm will be winning or not.

This is a good point, though it's important to distinguish between assessing whether a paradigm is going to be successful (which may be impossible to say at the beginning of research) and assessing whether it is worthy of pursuit. The latter only means that for now, the paradigm seems promising, but of course, the whole research program may flop at some point. While Kuhn didn't address these problems in great detail, I linked in my previous comment to some papers that discuss his work with regard to these questions.

the lecturer of the course, a Kuhn expert, seemed to only be asking the question "How does (human-)science proceed?", and never "How should science proceed?"

It's a pity this issue wasn't explicitly discussed in the course you mention, because it's actually really interesting. Some Kuhn scholars try to explain the relationship between the descriptive and the normative dimension you mention via an analogy with grammar: just as we formulate a grammar by looking at how a given language is actually used, and that descriptive work then helps us formulate the normative aspects of how it should be used. Now, not everyone will agree about what this means when it comes to scientific inquiry, but I would defend the following claim: the normative has to be formulated within the boundaries of how science tends to evolve, where we may find issues that are problematic (for example, we may notice that scientists are insufficiently open-minded at times, or that they sometimes employ inadequate methods, etc.) and in view of which we may formulate some normative suggestions. In other words, the normative can't be formulated out of the blue, ignoring some important constraints which are hard to get rid of (e.g. the fact that different paradigms may come with different conceptual frameworks).

Even though cybernetics inspired a lot of very useful technology that we take for granted these days, the field is extremely weak

In medicine, paradigms in which human perception is used for information gathering didn't lose out in academia because they weren't truth-aligned. According to Cochrane (2008), chiropractic achieves results for back pain comparable to mainstream treatments with its human-perception-based interventions, but it isn't in the academy. Not to speak of the ridiculousness of caring about whether patients are placebo-blinded but not caring about whether they can perceive that they are getting the placebo.

Even though cybernetics inspired a lot of very useful technology, in places like Xerox PARC, that we take for granted today, the field has relatively little academic backing these days. The scientists who focus on weight loss with their physics-based approach of energy-in/energy-out still don't seem to be good at creating successful interventions. At the same time, cybernetics-paradigm-based interventions like Seth Roberts's Shangri-La diet don't get any research.

The DSM-5 isn't popular because it's truth-aligned. Rather, it's popular because it tries to be agnostic about the ground truth of the phenomena it describes.

You need a lot of hindsight bias to say that it was clear from the get go which paradigms were going to win over the last century.

You need a lot of hindsight bias to say that it was clear from the get go which paradigms were going to win over the last century.

Sure. And I think Kuhn's main point, as summarized by Scott, really does deal a huge blow to the naive view that you can just compare successful predictions to missed predictions, etc.

But to think that you cannot do better than chance at generating successful new hypotheses is obviously wrong. There would be way too many hypotheses to consider, and not enough scientists to test them. From merely observing science's success, we can conclude that there has to be some kind of skill (Yudkowsky's take on this is here and here, among other places) that good scientists employ to do better than chance at picking what to work on. And IMO it's a strange failure of curiosity to not want to get to the bottom of this when studying Kuhn or the history of science.

Most science happens within scientific paradigms. A good scientist looks where progress could be made within his scientific paradigm and seeks to move science forward within it.

Paradigm changes are qualitatively different, and betting on a newly emerging paradigm requires different decision-making.

What Eliezer says about Phlogiston is wrong. Phlogiston did pay its rent and allowed chemistry to advance a lot from the alchemy that preceded it:

If this ash is reheated with charcoal the phlogiston is restored (according to Stahl) and with it the mercury. (In our view the charcoal removes the oxygen restoring the mercury). In a complex series of experiment Stahl turned sulphuric acid into sulphur and back again, explaining the changes once again through the removal and return of phlogiston. Through extension Stahl, an excellent experimental chemist, was able to explain, what we now know as the redox reactions and the acid-base reactions, with his phlogiston theory based on experiment and empirical observation. Stahl’s phlogiston theory was thus the first empirically based ‘scientific’ explanation of a large part of the foundations of chemistry.

But to think that you cannot do better than chance at generating successful new hypotheses is obviously wrong.

It would be an uncharitable reading of Kuhn to interpret him in that way. He does speak of the performance of scientific theories in terms of different epistemic values, and already in SSR he does speak of a scientist having an initial hunch suggesting a given idea is promising.

From merely observing science's success, we can conclude that there has to be some kind of skill (Yudkowsky's take on this is here and here, among other places) that good scientists employ to do better than chance at picking what to work on.

There is actually a whole part of philosophy of science that deals with this topic; it goes under the name of the preliminary evaluation of scientific theories: their pursuit-worthiness, endorsement, etc.

A good scientist looks where progress could be made within his scientific paradigm

his or her* :)

What Eliezer says about Phlogiston is wrong.

For an excellent recent historical and philosophical study of the Chemical Revolution, I recommend Hasok Chang's book "Is Water H2O?", in which he argues that phlogistic chemistry was indeed still worthy of pursuit at the time it was abandoned.

already in SSR he does speak of a scientist having an initial hunch suggesting a given idea is promising.

There are certainly many scientists who have hunches that their attempts at revolutionizing science are promising. Most of them, however, fail.

Right, which is why it's important to distinguish between a mere hunch and a "warranted hunch", the latter being based on certain indicators of promise (e.g. the idea has the potential to explain novel phenomena, or to explain them better than the currently dominant theory; the inquiry is based on a feasible methodology; etc.). These indicators of promise are in no way a guarantee that the idea will work out, but they allow us to distinguish between a sensible novel idea and junk science.

What's feasible and what isn't is hard to say beforehand. If you take molecular biology, the mainstream considered its goal unfeasible at the beginning, and it took a while until there was actually technology that made it feasible to know the shape of proteins and other biomolecules.

There's an interview with Sydney Brenner, one of the fathers of molecular biology, in which he says that the paradigm likely wouldn't have gotten support in the current academic climate.

Like I've mentioned, that's why there are indices of theory promise (see e.g. this paper), which don't guarantee anything, but still make some hypotheses look more plausibly worth pursuing than, say, research done within pseudo-medicine. These indices shouldn't be confused with how the scientific community actually reacts to novel theories, since it is no news that scientists sometimes fail to employ adequate criteria, reacting dogmatically (for some examples, see this case study from the history of earth sciences or this one from the history of medicine). So the fact that the scientific community fails to react in a warranted way to novel ideas doesn't imply that it couldn't do a better job at this. This is precisely why some grants are geared towards high-risk, high-reward schemes, so that projects which are clearly risky and may simply flop still get funding.

The research in molecular biology was indeed quite tricky, but again, this in no way means that assessing it as not worthy of pursuit would have been a justified response at the time. Hence, it's important to distinguish between the descriptive and the normative dimensions when we speak of the assessment of scientific research.

As for the interview with Sydney Brenner, thanks for linking to it. I disagree, though, with his assessment of the peer-review system, because he's not making an overall comparison between two systems, where we'd have to assess both the positive and the negative effects of peer review and then compare them with the positive and negative effects of possible alternative approaches. This means evaluating, e.g.: how many crap papers are kept at bay this way that would simply get published without the peer-review system; how much a lack of prestige or connections with the right people disadvantages one in publishing in a journal vs. a blind peer-review procedure, which mitigates this problem at least to some extent; how many women or minorities had problems with publication bias vs. the blind peer-review procedure; etc.

Chiropractic was long considered to be pseudo-medicine because it rests on the perceptive ability of its practitioners. Yet, according to Cochrane, we now know that its interventions have effects comparable to our mainstream treatments for back pain.

The useless paradigm of domestic science had a lot of esteem in the 20th century, while chiropractic had none. Given that it took this long to settle the simple question of whether chiropractic intervention works for back pain, I think it's very hard to say what effects most alternative-medicine approaches, which have seen a lot less research, actually have and what could be demonstrated if you funded them as a serious paradigm.

In medicine, most journals endorse the CONSORT guidelines, yet their peer-review processes don't ensure that the guidelines' clear quality standards are followed in the majority of published papers.

Blinding peer review doesn't help at all with encouraging paradigm-violating papers. Instead of succeeding at enforcing the quality standards they endorse on the papers they publish, mainstream journals do succeed at not publishing any papers that violate the mainstream paradigm.

Again: you are conflating the descriptive and the normative. You keep giving examples of how science went wrong, and that may well have been the case. What I am saying is that there are tools to mitigate these problems. In order to challenge my points, you'd have to show that chiropractic did not appear even worthy of pursuit *in view of the criteria I mentioned above* and yet it should have been pursued (I am not familiar with this branch of science, btw, so I don't have enough knowledge to say anything concerning its current status). But even if you could do this, it would be an extremely odd example, so you'd have to come up with a couple of them to make a normatively interesting point. Of course, I'd be happy to hear about that.

The confusion between the descriptive (how things are) and the normative (how they should be) also concerns your comments on peer review, where you bring up issues that are problematic in current medical practice, but I don't see why we should consider them inherent to the peer-review procedure as such. Your points concern the presence of biases in science which make paradigmatic changes difficult, and that may indeed be a problem, but I don't see how abandoning the peer-review procedure is going to solve it.

I agree that some notion of past fruitfulness and further promise is important. However, it's hard to judge fruitfulness from the outside, as a lot of the progress within a new paradigm might not be intelligible in the old paradigms.

If you had asked chiropractors in the 20th century whether they made theoretical progress, I would guess that you would get an answer about how their theory progressed. If, however, you had asked any mainstream medical academic, you would likely get the answer that they didn't produce anything useful.

Standard peer review is a heavily standardized process that makes specific assumptions about the shape of knowledge.

The ontology of special relations is something that matters for science, but I can have that discussion on GitHub. GitHub does provide a way of doing "peer review", but it's very different from the way traditional scientific papers work.

When I look at that discussion, it's also funny that the person I'm speaking with and I have both studied bioinformatics.

Bioinformatics as a field managed to share a lot of knowledge openly in ways other than scientific papers. It wouldn't surprise me if the DSM one day gets replaced by a well-developed ontology created within a more bioinformatic paradigm.

The database that comes out of the Zuckerberg money will also likely be more scientifically valuable than any classical academic papers written about it.

The problem of disagreements that arise due to different paradigms or 'schools of thought', which you mention, is an important one, as it concerns the possibility of so-called rational disagreements in science. This paper (published here) makes an attempt at providing a normative framework for such situations, suggesting that if scientists have at least some indications that the claims of their opponent are a result of rational deliberation, they should epistemically tolerate their ideas, which means: they should treat them as potentially rational, their theory as potentially promising, and as a potential challenge to their own stance.

Of course, the main challenge for epistemic toleration is putting ourselves in the other person's shoes :) Like in the example you mention: if others are working on an approach that is completely different from mine, it won't be easy for me to agree with everything they say, but that doesn't mean I should equate them with junk scientists.

As for discussions via GitHub, that's interesting, and we could probably discuss it in a separate thread on the topic of different forms of scientific interaction. I think that peer review can also be a useful form of dialogue, especially since a paper may end up going through several rounds of peer review (sometimes across different journals, if it gets rejected at first). However, the preprint archives we have nowadays are also valuable, since even if a paper keeps being rejected (let's say unfairly, e.g. due to a dogmatic environment in the given discipline), others may still have access to it, cite it, and it may still have an impact.

Ah, this post brings back so many memories of studying philosophy of science in grad school. Great job summarizing Structure.

One book that I found very helpful in understanding Kuhn's views in relation to philosophical questions like the objectivity vs. mind-dependence of reality is Dynamics of Reason by Michael Friedman. Here Friedman relates Kuhn's ideas both to Kant's notion of the categories of the understanding and to Rudolf Carnap's ontological pragmatism.

The upshot of Friedman's book is the idea of the constitutive a priori, which is roughly the notion of a conceptual background understanding that makes certain empirical beliefs intelligible. Unlike Kant's categories, Friedman's constitutive a priori (which is supposed to capture both Kuhn's notion of a paradigm and Carnap's notion of a "language") can change over time. This sounds like it might also have strong resonance with what Scott calls predictive coding. It would be interesting to explore those connections. However, there is still a bit of mystery surrounding what happens when we shift, individually or collectively, from one constellation of constitutive a priori to a different one (parallel to the question of how scientific communities can shift paradigms in a rational way). Friedman advances the idea of "discursive rationality" to account for how we can make this shift rationally. Basically, to shift between constitutive a prioris, we have to step out of empiricist modes of rationality and adopt a more hermeneutic/philosophical style. Again, this certainly has echoes in some things Kuhn says about paradigm shifts.

So in the end I don't think Friedman really solves the problem, but his book does make it much clearer what the nature of the problem really is. It helps by relating Kuhn's specific concepts, like the paradigm shift, to the broader history of philosophy from Kant through to the logical positivists. It is pretty striking that the same type of problem emerged for positivists like Carnap as for avowedly anti-positivists like Kuhn. To me this suggests that it isn't a superficial issue limited to a specific thinker or school, but rather points to something quite deep.