Review

N.B. This is a chapter in a planned book about epistemology. Chapters are not necessarily released in order. If you read this, the most helpful comments would be on things you found confusing, things you felt were missing, threads that were hard to follow or seemed irrelevant, and otherwise mid- to high-level feedback about the content. When I publish I'll have an editor help me clean up the text further.

It's 1 a.m. You're arguing with a stranger on the internet about the meaning of the word "is". How did you get here?

The night started out innocently enough. You were settling down for the evening by checking your favorite internet forum. You read an interesting post that you agreed with, but it contained a minor error. You posted a comment offering a correction. You were about to sign off when you got a notification that the author of the post had replied.

They said, no, you were wrong and an idiot. Normally you'd let it slide, but tonight you're not having it. You fired off a confrontational reply; they sent one back in kind; you escalated in your response; they did the same. After a couple hours you'd exchanged thousands of increasingly heated words that took you away from the original point and into a battle over worldviews. And at this late hour, bleary-eyed from staring at the screen for so long, you hit upon the fundamental question separating you:

"How do you know that's true?"

Turns out it's not so easy a question to answer. You say your claims are obvious; they disagree. You cite articles that justify your points; they say those aren't trustworthy sources. You ask them to offer alternative evidence; they provide "evidence" that is little more than opinion. Pretty soon you're debating what truth really means. You're both convinced you're right, but you can't come to any agreement.

Congratulations! You've run headlong into epistemology—the study of knowledge. Epistemology is the branch of philosophy concerned with how we figure out what's true. We use it to determine what arguments are correct, what evidence to trust, and where we meet the limits of our knowledge. It's also something we use all the time whether we realize it or not. Even if we don't have to think about it often, our lives are filled with acts of applied epistemology.

For example, how do you know that eating a sandwich will satisfy your midday hunger? Sure, you ate a sandwich for lunch yesterday and afterwards you were no longer hungry, but how do you know that today's sandwich will have the same effect? It's not literally the same sandwich made from the same molecules, so maybe it will leave you starving. Yet somehow you infer that today's sandwich has the same hunger-satisfying properties as yesterday's. Epistemology is the means by which you perform that inference.

If you're like most people, you've probably never explicitly thought about how you know that one sandwich is much like another. That's normal, because sandwich epistemology is obvious to us. We're natural experts at finding the truth when it's relevant to our survival, like knowing what foods we'll find filling. Our epistemological skills get shakier when we try to assess more abstract claims, like those we might encounter in late-night internet arguments, and it's in such contexts that we benefit from a more thorough understanding of epistemology.

But developing that understanding is hard. Epistemology doesn't make things easy for us: it demands we have a good grasp of logic, familiarity with symbolic reasoning, and strong critical thinking skills. Epistemology is also poorly taught, assuming it's taught at all. I attended the rare high school that offered a class in epistemology, and taking it did more to confuse me about the truth than nearly anything else. More commonly we offer no formal education in epistemology and expect people to figure out how to find truth on their own.

What we do teach are classes on a lot of specific topics that take epistemology for granted. We teach science, but spend little time considering how science is designed to find truth. We teach math, but rarely consider how math is grounded in reality. We teach debate, but often only concern ourselves with "truth" so long as it's helping us win arguments. Given such an educational environment, it's not surprising that the average person knows relatively little about epistemology.

Consequently, I think it's fair to say that many people in the world are epistemologically naive in the sense that they rely on a simple, intuitive theory of truth: it's obvious what's true and what's false. Surprisingly, this works pretty well, because something can pass for obvious if no one disagrees, and everyone agrees that statements like "sandwiches satisfy hunger" are true and statements like "cats are made of cheese" are false. Treating truth as obvious works well for such concrete, everyday claims.

But the naive theory starts to break down when there's disagreement. Is gossip about a friend true? One friend says yes, another no. Who's in the right when spouses argue? They both claim they are right and the other is wrong. And how are contentious dinner table conversations about politics and religion to be resolved? Every diner has a different opinion, and there's often little common ground upon which to build consensus.

Can the naive theory be saved? Only if you're willing to say that you're smart and right while everyone who disagrees with you is stupid and wrong. As soon as you take another person's arguments seriously, you have to consider the possibility that they're right and you're wrong. Maybe one of you is stupid, but more likely one or both of you made a mistake in your reasoning. Accounting for reasoning suggests a more complex theory of truth where logical argument can be used to determine who, if anyone, is correct.

An epistemology that relies on logic holds up quite a bit better than the naive theory. That's because it treats truth-finding like a branch of mathematics. Specifically, it starts by assuming that the rules of logic and direct observations are true. Then, any claim can be evaluated true or false by seeing if the claim can be derived from a combination of observed facts and valid logic. If mathematical epistemology seems familiar, it should. It's the way we approach truth-finding in science, engineering, law, and medicine, and it's the standard we—at least nominally—hold public intellectuals and politicians to.
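To make that concrete, here's a minimal sketch in Python of how the mathematical theory evaluates a claim. The facts and rules are invented for illustration; the point is only the shape of the procedure: assume some observations are true, apply valid inference rules, and call a claim true if it can be derived.

```python
# A minimal sketch of the mathematical theory of truth: assume some
# observations are true, apply logical rules (modus ponens), and call
# a claim "true" if it can be derived. Facts and rules are invented.

observations = {"it rained last night", "the ground is wet"}

# Each rule derives a conclusion from a set of premises.
rules = [
    ({"it rained last night"}, "the streets are wet"),
    ({"the streets are wet"}, "driving is slippery"),
]

def is_derivable(claim, facts, rules):
    """Forward-chain: repeatedly apply rules until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return claim in facts

print(is_derivable("driving is slippery", observations, rules))   # True
print(is_derivable("cats are made of cheese", observations, rules))  # False
```

Everything the procedure outputs is only as trustworthy as the observations and rules fed into it, which is exactly where the flaws below creep in.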

The mathematical theory works really well. So well that, broadly speaking, it's the standard theory of truth. Unfortunately, it has flaws. Not flaws as severe as the naive theory's, to be sure, but flaws that nonetheless undermine its reliability. Let's consider three of these flaws in increasing order of severity.

First and least threatening, simple versions of the mathematical theory fail to adequately account for observation error. If observations are true by assumption, this would seem to imply that we have to accept as fact reports of Bigfoot sightings, alien abductions, and demonic possessions. But observers can make mistakes in interpreting raw sensory information, like when a kid in a dark room imagines a pile of clothes is a monster. Luckily such errors are easily accounted for. A popular solution is to assign probabilities to observations to measure how likely they are to be true. This makes the mathematical theory of epistemology more complex, but in doing so better captures the nuances of finding truth in the real world.
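As a sketch of what that probabilistic solution might look like (the numbers here are invented for illustration, not drawn from any real study), Bayes' rule lets us weigh a dubious observation against prior knowledge instead of accepting it outright:

```python
# A sketch of probabilistic observation-handling using Bayes' rule.
# All numbers are invented for illustration.

prior_monster = 0.001            # monsters in bedrooms are very rare
p_shape_given_monster = 0.9      # a monster would look like a looming shape
p_shape_given_no_monster = 0.2   # piles of clothes sometimes do too

# P(monster | shape) = P(shape | monster) * P(monster) / P(shape)
p_shape = (p_shape_given_monster * prior_monster
           + p_shape_given_no_monster * (1 - prior_monster))
posterior = p_shape_given_monster * prior_monster / p_shape

print(f"{posterior:.4f}")  # ~0.0045: the observation barely budges the odds
```

Rather than the observation forcing us to accept a monster as fact, it merely nudges the probability, which better matches how we actually treat unreliable reports.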

The next flaw is more serious. The mathematical theory, with its assumption of logic, relies on being self-consistent, since if the theory were not consistent with itself it would allow you to prove that true statements are false and vice versa. Normally consistency is desirable, but, as Kurt Gödel showed with his incompleteness theorems, a consistent mathematical system powerful enough to express arithmetic cannot also be complete, which means that a mathematical approach to epistemology can only tell us whether some statements are true or false, not all of them. This is a surprisingly tolerable limitation, though, since most of the statements the theory can't decide are only relevant when doing advanced mathematics or solving esoteric logic puzzles. So even though we can't solve this flaw like we could the first one, we can mostly ignore it because it rarely matters in practice.
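For readers who want the claim stated precisely, here is a standard informal rendering of Gödel's first incompleteness theorem (my paraphrase of the usual textbook statement, not Gödel's own words):

$$\text{If a theory } T \text{ is consistent, effectively axiomatized, and can express arithmetic,}$$
$$\text{then there is a sentence } G_T \text{ such that } T \nvdash G_T \text{ and } T \nvdash \lnot G_T.$$

That is, the system can neither prove nor refute $G_T$, so at least some statements remain forever beyond its reach.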

Alas, the third flaw is inescapable. Recall that the mathematical theory makes some assumptions. Specifically, it assumes, without justification, that the means by which truth can be assessed—logic and direct observation—are true. But why assume they're true rather than demonstrate that they are? Consider how you would prove it: you'd have to show they're true using the self-same means of assessing truth. This sets up an infinite regress of justifications, like two mirrors endlessly reflecting back on each other. The only way to escape this endless loop of self-referential logic is to make one or more assumptions without justification to ground the rest of our reasoning.

The consequence of assuming that the means of assessing truth are themselves true without justification is that we have no way to be certain that any chain of reasoning that depends upon them is correct. That is, anything we think is true may be wrong if it turns out our assumptions are mistaken. Worse yet, we may have no way of knowing we've made such a mistake, because we rely on those very means of assessing truth to notice when we're wrong. This creates a blind spot at the heart of mathematical epistemology, making our knowledge of the truth fundamentally uncertain.

A couple questions immediately come to mind. First, does this fundamental uncertainty matter? Science, for example, is better at explaining and predicting the world than anything else we've tried, and it depends on mathematical epistemology. So somehow, despite the theoretical problem of fundamental uncertainty, science manages to find truth anyway. If fundamental uncertainty isn't enough to stop science from finding truth, maybe we can ignore it?

Yet there are times when fundamental uncertainty blocks us from finding truth. When we get into debates over the meanings of words, it's not because we can't find their true meanings, but because fundamental uncertainty prevents total resolution of all disagreements. When we fight over what's right and wrong, it's not because some of us are evil, but because fundamental uncertainty limits how deeply we can justify morality. And when we struggle to know what's best to do in a given situation, it might be because we don't have enough information, but fundamental uncertainty also keeps us from being sure how things will work out. No matter how smart or wise we are, fundamental uncertainty ultimately stands in our way. 

Second, is there any way to work around fundamental uncertainty? No, it exists as a natural consequence of the pursuit of truth. But, as I'll show through the course of the book, that's okay. Even if truth is uncertain, it can still be useful for most practical purposes, and this is good enough, because it has to be, for we have no choice but to find a way to live with fundamental uncertainty.

Over the coming chapters we'll explore the fundamental uncertainty of truth from many angles. We'll start by looking in depth at some of the places where fundamental uncertainty is most obviously present in our lives. Then we'll explore the mechanics of fundamental uncertainty and give an account of how truth works despite its limitations. Finally, we'll consider some further consequences of fundamental uncertainty and how we can learn to live in a world where the truth can never be definitively known.

But before we embark on this voyage into uncertainty, a few words of advice, both to help us weather the intellectual storms we will face and to make sure we're prepared for what lies ahead.

It All Adds Up To Normality

Have you ever had an experience like this? You think you understand something, like why you love your favorite soup. You think you love it because it contains oregano, your favorite herb. Then one day you find out there's no oregano in the soup, and actually it's made with your least favorite herb, fennel. Yesterday you were an oregano-loving, fennel-hating soup connoisseur, and now you're a crypto-fennel fanatic. And it's not just that. Given you were wrong about disliking fennel, you could be wrong about any number of other things. Are you sure you don't like mushrooms and cauliflower? Do you really love your mom's cooking, or is it just that she sneaks in some fennel without telling you? For that matter, how do you know you really love your mom? Ahhhhh!

Perhaps questioning love for your mom is over the top, but we regularly get ourselves into situations like this over political and intellectual opinions. Many people have had their entire worldview turned upside down because they read a book like Karl Marx's Das Kapital, Ayn Rand's Atlas Shrugged, Friedrich Nietzsche's Beyond Good and Evil, or Richard Dawkins' The Selfish Gene. Discovering a big idea for the first time can be disorienting and leave you questioning everything. When that happens to me, I try to remember the words of science fiction author Greg Egan: it all adds up to normality.

What does Egan mean? He's saying, in part, that the way the world works is independent of our understanding of it. Whether you believe the Sun is a giant ball of burning gas, a god traveling in a flaming chariot across the sky, or a glowing egg laid each morning by a cosmic chicken, your belief has no effect on what the Sun does or what sensory input it will create for you. If you thought the Sun was a god in a chariot and then learn it's a ball of gas, you don't need to worry that it's going to fail to rise and set because it's no longer being pulled across the sky. The Sun will keep moving the same as it always has regardless of what you think you know about it.

This is an obvious enough point, but it's easy to lose sight of. We're going to cover some fairly deep topics, and there's a good chance at least once while reading this book you'll realize that the world is not how it seemed just a few pages before. When this happens, keep in mind that nothing has changed other than your understanding. The world is just as it always was, except that the tiny corner of it containing your brain now holds some new information. The world may look completely new from your perspective, but it's still the same world you've always been in.

That things add up to normality also has implications for how we go about understanding the world, not just how we relate to our changing understanding. Importantly, we must take into account whatever we find as we look at the world, even if it's inconvenient for us. For example, life would be a lot easier if truth were not fundamentally uncertain. Then all we'd have to do to know the truth is observe carefully and reason logically. In fact, this is exactly what philosophers hoped would be possible in the late 19th and early 20th centuries. Yet once fundamental uncertainty was seen, it could not be unseen, and philosophers had to adapt their theories of truth to account for its existence. We must do the same.

With that in mind, we're almost ready to begin, but first, if you'll indulge me, a few notes on who I think this book is for and how I think you can get the most out of reading it.

Setting Expectations

Every book has a target audience. Mine is people in STEM—science, technology, engineering, and mathematics. Why? For one, this is my own background, and thus I have an intuitive sense of what my fellow STEMites want to know about truth and epistemology. I wrote this book because I used to be confused about how truth works, but over the past decade of my life I've painstakingly sorted out my confusions. In many ways I've written the book I wish I could have read 10 or 15 years ago, and my hope is that it will help STEM folks become less confused about the nature of truth.

In addition to writing for a STEM audience, I'm also going to assume that you, my reader, are curious and will read more on your own when you encounter ideas that you don't understand. This book would be five times longer and might never have been finished if I fully explained every idea I touch upon. Luckily, I'm writing in the age of the internet, when detailed information on nearly every topic is readily available with a quick search. So I've chosen to write relatively densely, with the idea that if I'm too dense in places you'll fill in the gaps by doing your own reading.

On a similar note, because I've kept this book short and dense, I've not given you a lot of space to think as you read. One of the virtues of long books is they give you a lot of time to mull over their ideas as you slog through their pages. I've given you little room to mull, so I think you'd be well served to read this book slowly with breaks. If I were reading this book myself, I'd likely set a pace of one chapter a week and spend some time between readings following up on the ideas in the chapter I had just read. Feel free to find what works for you, but be advised this is not a typical non-fiction book with lots of filler.

Finally, although there's no strict list of required reading to understand this book, I think you'll have an easier time reading it if you've read a few other books first and absorbed their ideas. There are three books in particular that I think are well worth reading if you have not done so already.

The first of these books is Douglas Hofstadter's Gödel, Escher, Bach: An Eternal Golden Braid. Of every book I have ever read in my life, this one has had the most profound impact on my thinking. GEB, as it is often abbreviated, explores how math, philosophy, and art intersect and shows you how to apply precise, logical thinking to gain deeper insight into seemingly mysterious topics like consciousness. It's from GEB that I caught my first glimpse of fundamental uncertainty, and if I had never read GEB there's an excellent chance I would never have investigated the ideas that ultimately led to writing this book.

The second book on my list is Julia Galef's The Scout Mindset. It's a book about heuristics and biases, how they enable us to think quickly and accurately in a handful of critical situations, and how they cloud our judgement the rest of the time. Galef spends the book's pages emphasizing the importance of curiosity, championing the value of probabilistic reasoning, and explaining how to quickly change your mind when you encounter new evidence and arguments. The thinking skills she teaches are a boon to anyone who wants to learn quickly and understand deeply rather than get hung up on debates that go nowhere.

My third and final recommendation is Eliezer Yudkowsky's Rationality: From AI to Zombies. It covers similar topics to Galef's book, but in much greater breadth and depth. Many ideas that I mention with a few sentences or paragraphs are given multi-chapter treatments in Rationality, so it's a great resource for learning more about ideas I build arguments upon but don't take time to explore completely. Be warned, this book is long, and it may take months or years to fully absorb it, but I consider it an investment that yields tremendous returns.

I'll also recommend additional books in each chapter to explore key ideas in further detail. Consider them my personal recommendations to learn more if you're interested in a topic, but also feel free to disregard them if you think another book would be better. And if you don't have the time or inclination to read another book, you can find useful information on Wikipedia (good for quick overviews), the Stanford Encyclopedia of Philosophy (for detailed academic summaries of many philosophy topics), and Less Wrong, a group blog where I and others write about ideas like those in this book.

I hope that all these expectations and suggested readings aren't too daunting, but if you're going to spend your valuable time reading this book rather than doing any of millions of other things, you deserve to know what you're getting into. I've set high expectations for my readers because the world demands a lot of us. We're facing existential threats from economic instability, climate change, genetically engineered diseases, and advanced artificial intelligence. From where we stand, it's unclear if humanity, or even all life on Earth, will survive the 21st century. To navigate ourselves into the future safely we must masterfully steer ourselves through the whirlpools and eddies of uncertainty, and to do that we must understand whence this uncertainty fundamentally arises.

Comments

I'm not sure the motivating story - internet debate - is the best example for epistemology. Maybe you should rule out the reading where it's a fight with words and the truth is secondary or only superficially the point. Or maybe you want to include that reading, but then you may want to spend a few words differentiating these cases.

If you thought the Sun was a god in a chariot and then learn it's a ball of gas, you don't need to worry that it's going to fail to rise and set because it's no longer being pulled across the sky. The Sun will keep moving the same as it always has regardless of what you think you know about it.

It is certainly true that the sun is going to do what it was going to do. But your prediction of what it will do depends on your model. If your model is a god's chariot, then your model will predict a higher chance of it changing its course, and thus you may act differently.

I think you should spend some more words on normalcy. You use it in the tautological sense that things are what they are in the territory. But for people the perception of normalcy may change. After all, normal is what is expected, and when models change, predictions change. It takes a while to get around to the litany of Gendlin.

Sorry if this is not the feedback you want.

Epistemology is the branch of philosophy concerned with how we figure out what's true.

That's... not quite what epistemology is? It is a study of how humans construct, test and improve their models? Focused on accuracy and domain of applicability, not specifically on "truth".

I don't exactly disagree, but your way of talking about epistemology presupposes a certain understanding or framing of what truth and knowledge are and how they work, whereas the field generally is just about whatever it is that humans call by the names truth and knowledge, and it is the project of the field to figure out what they really mean.