In my last post, I wrote that no resource out there exactly captured my model of epistemology, which is why I wanted to share a half-baked version of it.

But I do have one book which I always recommend to people who want to learn more about epistemology: Inventing Temperature by Hasok Chang.

To be very clear, my recommendation is not just to get the good ideas from this book (of which there are many) from a book review or summary — it’s to actually read the book, the old-school way, one word at a time.

Why? Because this book teaches you the right feel, the right vibe for thinking about epistemology. It punctures the bubble of sterile nonsense that so easily passes for “how science works” in most people’s education, such as the “scientific method”. And it does so by demonstrating how one actually makes progress in epistemology: by thinking, yes, but also by paying close attention to what actually happened.

It works, first, because the book is steeped in history — here, the history of thermometry (the measurement of temperature). By default, beware anything that is only philosophy of science, without any basis in history: that is almost definitionally ungrounded bullshit.

Not only does Chang leverage history, he also has an advantage over most of the literature in History and Philosophy of Science: early thermometry is truly not that complex, technically or mathematically. Except for the last historical chapter, where details of the Carnot cycle get in the way, most of the book describes straightforward questions that anyone can understand, and both the experiments and the mathematics are at a modern high-school level.

As such, I know that any educated person can read this book, and follow the history part.

Last but not least, thermometry provides a great opportunity to show what happens at the beginning, before all the frames and techniques and epistemic infrastructure are set up.

Another source of oversimplification in people’s models of epistemology (including my own before I started digging into the history) is that we moderns mostly learn well-framed and cleaned up science: when we learn Classical Mechanics, we don’t just learn it as Newton created it, but we benefit from progress in notations, mathematics, and even the whole structure of physics (with the emphasis on energy over forces).

This, I surmise, has the unfortunate consequence of making even practicing scientists feel like science and epistemology are cleaner than they truly are. Sure, we get that data is messy, and that there are many pitfalls, but for many, the foundations were established before them, and so they work in a well-defined setting.

But at the start of thermometry, as in the start of every epistemological enterprise, there was almost nothing to rely on.

For example, if you want to synchronize different temperature-measuring devices (not even thermometers yet, since there is no scale), a natural idea is to find fixed points: phenomena which always happen at the same temperature.

But then… if you don’t even have a thermometer, how can you know that fixed points are actually fixed?

And even if you can do that, what if your tentative fixed points (like the boiling point of water) are not one very specific phenomenon, but a much more complex one with multiple phases, over which the temperature does vary?

These are the kinds of questions you need to deal with when you start from nothing, and Chang explores the ingenuity of the early thermometricians in teasing imperfect answers out of nature, iterating on them, and then fixing the foundations under their feet. That is, they didn’t think really hard and get everything right before starting; they started anyway, and through various strategies, schemes, and tricks, extracted out of nature a decently reliable way to measure temperature[1], operationalizing the concept in the same stroke.

That’s one aspect of the feel I was talking about: the idea that when you don’t have the strong basis of an established discipline (or the wealth of epistemic regularities afforded by classical physics), you need to be much more inventive and adventurous than the modern “scientific method” would lead you to believe.[2]

Of course, I’m not saying that this book teaches you everything that matters.

First, it’s about physics, and classical physics at that, which means it relies on many epistemic regularities which are just not present for most human endeavors.[3] So it won’t give you the right feeling for hunting for epistemic regularities, and for noticing the ones that are missing — most of the big ones are there for the thermometry people.

And maybe more importantly, although the book demonstrates and explores key concepts of epistemology, such as epistemic iteration, it doesn’t try to provide a method for epistemology in general, adapted to various circumstances and contexts.

For that goal, I know of no alternative to reading widely and thinking deeply to come up with your own model (I’m not quite there yet myself, but moving in this direction).

But to bootstrap your epistemological journey, Inventing Temperature is definitely a great choice.

  1. ^

It’s reliable, but it’s not simple. If you want some fun today, go read about the 14 fixed points used for the modern ITS-90 scale, as well as the various polynomials of degree 9, 12, or 15 that are used to interpolate between them.
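    The basic move is easy to sketch in code: assign known temperatures to a few fixed points, then fit a polynomial that interpolates between them. The fixed-point temperatures below (water triple point, tin, zinc) are real ITS-90 values, but the sensor readings are invented for illustration; the real ITS-90 uses platinum resistance ratios, published reference functions, and far higher-degree polynomials.

    ```python
    import numpy as np

    # Known fixed-point temperatures in °C (real ITS-90 values):
    # triple point of water, freezing points of tin and zinc.
    fixed_point_temps = np.array([0.01, 231.928, 419.527])

    # Hypothetical readings of our device at those fixed points
    # (made-up numbers standing in for, e.g., resistance ratios).
    sensor_readings = np.array([1.000, 1.891, 2.569])

    # Fit a quadratic through the three calibration points.
    # ITS-90 sub-ranges use degrees like 9, 12, or 15 instead.
    coeffs = np.polyfit(sensor_readings, fixed_point_temps, deg=2)

    def temperature(reading: float) -> float:
        """Interpolate a temperature from a raw sensor reading."""
        return float(np.polyval(coeffs, reading))

    print(round(temperature(1.891), 3))  # recovers the tin point, 231.928
    ```

    By construction the fit reproduces the fixed points exactly; the whole epistemic question, as the book shows, is whether the interpolation in between can be trusted.
    
    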

  2. ^

    Another great reference on this is Rock, Bone, and Ruin by Adrian Currie, which focuses on historical sciences (evolutionary biology, archeology, geology…), and the methodological omnivore regime that these require.

  3. ^

    See my model of epistemology for more intuitions and details on why that is.


I liked Thermodynamic Weirdness for similar reasons. It does the best job of books I've found at describing case studies of conceptual progress—i.e., what the initial prevailing conceptualizations were, and how/why scientists realized they could be improved.

It's rare that books describe such processes well, I suspect partly because it's so wildly harder to generate scientific ideas than to understand them that they tend to strike people as almost blindingly obvious in retrospect. For example, I think it's often pretty difficult for people familiar with evolution to understand why it would have taken Darwin years to realize that organisms that reproduce more influence descendants more, or why it was so hard for thermodynamicists to realize they should demarcate entropy from heat, etc. Weirdness helped make this more intuitive for me, which I appreciate.

(I tentatively think Energy, Force and Matter will end up being my second-favorite conceptual history, but I haven't finished yet so not confident).

It's rare that books describe such processes well, I suspect partly because it's so wildly harder to generate scientific ideas than to understand them that they tend to strike people as almost blindingly obvious in retrospect.

Completely agreed!

I think this is also what makes great history of science so hard: you need to unlearn most of the modern insights and intuitions that didn't exist at the time, and see as close as possible to what the historical actors saw.

This makes me think of a great quote from World of Flows, a history of hydrodynamics:

There is, however, a puzzling contrast between the conciseness and ease of the modern treatment of [wave equations], and the long, difficult struggles of nineteenth-century physicists with them. For example, a modern reader of Poisson's old memoir on waves finds a bewildering accumulation of complex calculations where he would expect some rather elementary analysis. The reason for this difference is not any weakness of early nineteenth-century mathematicians, but our overestimation of the physico-mathematical tools that were available in their times. It would seem, for instance, that all that Poisson needed to solve his particular wave problem was Fourier analysis, which Joseph Fourier had introduced a few years earlier. In reality, Poisson only knew a raw, algebraic version of Fourier analysis, whereas modern physicists have unconsciously assimilated a physically 'dressed' Fourier analysis, replete with metaphors and intuitions borrowed from the concrete wave phenomena of optics, acoustics, and hydrodynamics. 

(Also, thanks for the recommendations, will look at them! The response to this post makes me want to write a post about my favorite books on epistemology and science beyond Inventing Temperature ^^)

I strongly endorse you writing that post!

Detailed histories of field development in math or science are case studies in deconfusion. I feel like we have very little of this in our conversation on the site relative to the individual researcher perspective (like Hamming’s You & Your Research) or an institutional focus (like Bell Labs).

If you enjoyed Inventing Temperature, Is Water H2O? is pretty much the same genre from the same author.

Another favorite of mine is The Emergence of Probability by Ian Hacking. It gives you a feel for how unimaginably difficult it was for the early pioneers of probability theory to make any advance whatsoever, as well as how powerful even small advances actually are, for example by enabling annuities.

I actually learned the same thing from studying the early history of logic (Boole, Peirce, Frege, etc.), but I am not aware of a good distillation in book form. It is my pet peeve that people don't (maybe can't) appreciate what a great intellectual achievement first-order logic really is, being the end result of so much frustrating effort. Because learning to use first-order logic is kind of trivial, compared to inventing it.

If you enjoyed Inventing Temperature, Is Water H2O? is pretty much the same genre from the same author.

Yeah, I am a big fan of Is Water H2O? (and the other Chang books). It's just that I find Is Water H2O? both less accessible (a bit more focused on theory) and more controversial (notably in its treatment of phlogiston, which I agree with, but most people, including here, have only heard of phlogiston from fake histories written by scientists embellishing the histories of their fields, and from Lavoisierian propaganda of course). So that's why I find Inventing Temperature easier to recommend as a first book.

Another favorite of mine is The Emergence of Probability by Ian Hacking. It gives you a feel for how unimaginably difficult it was for the early pioneers of probability theory to make any advance whatsoever, as well as how powerful even small advances actually are, for example by enabling annuities.

It's in my anti-library, but I haven't read it yet.

It is my pet peeve that people don't (maybe can't) appreciate what a great intellectual achievement first-order logic really is, being the end result of so much frustrating effort. Because learning to use first-order logic is kind of trivial, compared to inventing it.

I haven't read it in a while, but I remember The Great Formal Machinery Works being quite good on this topic.

Apparently people want some clarification on what I mean by anti-library. It's a Nassim Taleb term which refers to books you own but haven't read, whose main value is to remind you of what you don't know, and where to find it if you want to expand that knowledge.

It's not a full conceptual history, but fwiw Boole does give a decent account of his own process and frustrations in the preface and first chapter of his book.

Raemon:

Curated. I've heard this book suggested a few times over the years, and it feels like a sort of unofficial canon among people studying how preparadigmatic science happens. This review finally compelled me to get the book.

I have not yet read the book, so I can't comment on whether it lives up to the hype, but I find the topic of "what are the messy bits of founding a new area of science" one of the most important topics we have to figure out. It seems like an area where actually stewing on the details (as opposed to just reading a summary) would be important. But I'm glad to have a good reference pointer that explains why giving the book a read matters.

I do think this review would be a lot better if it actually distilled the messy-bits-that-you-need-to-experientially-stew-over into something that was (probably) much longer than this post, but much shorter than the book. But that does seem legitimately hard.

I think the preparadigmatic science frame has been overrated by this community compared to case studies of complex engineering like the Apollo program. But I do think it will be increasingly useful as we continue to develop capability evals, and even more so as we become able to usefully measure and iterate on agency, misalignment, control, and other qualities crucial to the value of the future.

That’s very interesting - could you talk a bit more about that? I have a guess about why, but would rather hear it straight than risk poisoning the context.

Why I think it's overrated? I basically have five reasons:

  1. Thomas Kuhn's ideas are not universally accepted and don't have clear empirical support apart from the case studies in the book. Someone could change my mind about this by showing me a study operationalizing "paradigm", "normal science", etc. and using data since the 1960s to either support or improve Kuhn's original ideas.
  2. Terms like "preparadigmatic" often cause misunderstanding or miscommunication here.
  3. AI safety has the goal of producing a particular artifact, a superintelligence that's good for humanity. Much of Kuhn's writing relates to scientific fields motivated by discovery, like physics, where people can be in complete disagreement about ends (what progress means, what it means to explain something, etc) without shared frames. But in AI safety we agree much more about ends and are confused about means.
  4. In physics you are very often able to discover some concept like 'temperature' such that the world follows very simple, elegant laws in terms of that concept, and Occam's razor carries you far, perhaps after you do some difficult math. ML is already very empirical and I would expect agents to be hard to predict and complex, so I'd guess that future theories of agents will not be as elegant as physics, more like biology. This means that more of the work will happen after we mostly understand what's going on at a high level (and so researchers know how to communicate) but don't know the exact mechanisms and so can't get the properties we want.
  5. Until now we haven't had artificial agents to study, so we don't have the tools to start developing theories of agency, alignment, etc. that make testable predictions. We do have somewhat capable AIs though, which has allowed AI interpretability to get off the ground, so I think the Kuhnian view is more applicable to interpretability than a different area of alignment or alignment as a whole.

Dunno if this is a complete answer but Thomas Kwa had a shortform awhile back arguing against at least some uses of "preparadigmatic"

https://www.lesswrong.com/posts/Zr37dY5YPRT6s56jY/thomas-kwa-s-shortform?commentId=mpEfpinZi2wH8H3Hb 

Curated. I've heard this book suggested a few times over the years, and it feels like a sort of unofficial canon among people studying how preparadigmatic science happens. This review finally compelled me to get the book.

There's something quite funny in that I discovered this book in January 2022, during the couple of days I spent at Lightcone offices. It was in someone's office, and I was curious about it. Now, we're back full circle. ^^

I do think this review would be a lot better if it actually distilled the messy-bits-that-you-need-to-experientially-stew-over into something that was (probably) much longer than this post, but much shorter than the book. But that does seem legitimately hard.

Agreed.

But as I said in the post, I think it's much more important to get the feel from this book than just the big ideas. I believe that there's a way to write a really good blog post that shares that feel and compresses it, but that was not what I had the intention or energy (or mastery) to write.

It sounds cool, though also intuitively temperature seems like one of the easiest attributes to measure, because literally everything is kind of a thermometer in the sense that everything equilibrates in temperature. My prior mental image of inventing temperature is iteratively finding things that more and more consistently/cleanly reflect this universal equilibration tendency.

Is this in accordance with how the book describes it, or would I be surprised when reading it? Like of course I'd expect some thermodynamic principles and distinctions to be developed along the way, but it seems conceptually very different from e.g. measuring neural networks where stuff is much more qualitatively distinct.

It sounds cool, though also intuitively temperature seems like one of the easiest attributes to measure, because literally everything is kind of a thermometer in the sense that everything equilibrates in temperature.

Can't guarantee that you would benefit from it, but this sentence makes me think you have a much cleaner and more simplified idea of how one develops even a simple measuring device than what the history shows (especially when you don't have any good theory of temperature or thermodynamics).

So I would say you might benefit from reading it. ;)

His other books are also great.


While it may be nearly impossible to experience (rather than read about) the process of scientific discovery in most cases, there are a few possibilities. The Pythagorean Theorem, for example, was probably the result of thousands of years of trying to find a general relationship between the diagonal and sides of rectangles. It is only now that we view it as the equivalent relationship between the legs and hypotenuses of right triangles. The people of ancient Iraq were most likely solving puzzles to determine the side of a square from its area by building the square on its diagonal and dissecting it in various ways. Eventually, this led to the triples they recorded in clay, but this was far from envisioning the modern version of the Pythagorean Theorem, which is a statement about lengths of sides of right triangles rather than areas of squares built from diagonals of rectangles. All of this to say that rediscovering the Pythagorean Theorem by trying to dissect and reassemble two square units, three, five, etc. (and duplicates of those) into a single square is not only a good way to improve spatial reasoning, but also maybe one way to experience scientific discovery as recreation, without daunting investments of time and material.