N.B. This is a chapter in a book about truth and knowledge. It is the first draft. I have since revised it. You can find the most up-to-date info/version on the book's website.
As we've explored in the preceding chapters, knowledge of the truth is fundamentally uncertain because it's ultimately grounded, not in itself or logic or observation, but in what we care about. And yet, truth appears to have a solid foundation because we all care—to greater and lesser extents—about accurately predicting reality so that we can make it the way we want it to be. Thus, the normality of truth is preserved.
Normality notwithstanding, the far-reaching implications of the fundamentally uncertain nature of truth are subtle. They can defy intuitions and complicate that which seemed simple. Therefore, in this final chapter, I'll attempt to clarify three topics that many people find confusing when developing a deep understanding of fundamental uncertainty.
The Intersubjective Truth
There are, broadly speaking, two schools of thought on the nature of truth. One says that truth is objective and exists independent of our knowledge of it. The other says that truth is subjective and always contingent on a mind doing the knowing. Which is right?
Objectivity has much to recommend it. The success of mathematics and science is largely due to them giving us tools for finding truth regardless of what any individual thinks. To wit, whether two plus two equals four, or whether the speed of light in a vacuum is 299,792,458 meters per second, seems unaffected by anything going on in our minds. If we one day meet aliens from another world, we expect their math and science to proclaim the same truths ours does. The universe, as best we can tell, exists independent of us, and what we think about it has no bearing on its truths.
But we need only know a little about epistemology to see the cracks in objectivity. As we discussed in Chapter 6, the truth that we know is the relative truth, not the absolute. Thus even if the absolute truth is objective, all we know is the relative truth, subjectively mediated by our senses and ability to reason. To say that truth is objective is to make a metaphysical claim we can't prove because our subjective observations don't allow us to know the nature of truth directly. We can at best infer that objectivity is consistent with our subjective experiences of truth, but nothing more.
Yet to say that truth is subjective is to leave something out. As we explored in Chapter 7, we care about accurately predicting our observations of the world, so our knowledge of the truth needs to be constrained by the world of our experience. It's not enough to say that truth is subjective, because that implies the truth is arbitrary, which it is not.
Instead we should say that truth is intersubjective, arising at the intersection of ourselves and the world as we find it. Or put more concretely, we create truth to help us get what we want, what we want is found in the world, and so the truth we create must reflect our interactions with the world in order to be useful. To clarify what intersubjectivity means, let's consider an example.
Most of us have the subjective experience of seeing the color red. We talk to each other and get confirmation that, apart from those of us who are blind or colorblind, we all agree on which things are red and which things are not. Further, when we see a new object for the first time, if it looks like it's red, we can have high confidence that others will also believe it to be red. And further still, we can build machines that measure the light reflected off objects to detect which ones are actually red, even if we ourselves can't see the object or what color it is. Thus, it seems as though redness should be an objective, observer-independent phenomenon.
But red exists only because we have the experience of seeing red. That is, red exists because we have found it useful to tell red apart from other colors. We can "objectively" define red to be light with a wavelength between 620 and 750 nanometers, but we define it thus because those wavelengths correspond to what many people subjectively identify as red. Thus, whether or not an apple is red is neither a properly objective nor subjective fact, but intersubjective knowledge that depends on both the world and how we experience it. So it goes for all truth that can be known.
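To make the wavelength definition concrete, here's a minimal sketch in Python (the looks_red function is my own illustrative naming, not anything from the text): the "objective" definition of red amounts to a threshold test, and the cutoffs themselves were chosen to match what people subjectively report seeing.

```python
# Illustrative sketch only: the "objective" test for red reduces to checking
# a wavelength range, but the 620-750 nm cutoffs were picked because they
# match what most people subjectively identify as red.
def looks_red(wavelength_nm: float) -> bool:
    return 620.0 <= wavelength_nm <= 750.0

print(looks_red(700.0))  # True: squarely in the range people call red
print(looks_red(550.0))  # False: this is light most people call green
```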
Furthermore, intersubjectivity is not restricted to truth. Anything that can be known is known through the interaction of our minds and the worlds they find themselves in. Here are a few more examples to help you get a better sense for what this means.
As I hope these examples make clear, intersubjectivity is a deeply normal description of how we know the world. There's no trick I'm trying to pull when I say that truth isn't objective or subjective. Intersubjectivity neither invalidates anyone's experience nor allows claiming that arbitrary statements are true. I could just as easily say that truth is both objective and subjective because we still have all the evidence in favor of both interpretations. The only trick, if we even want to consider it one, is in reconciling the seeming contradiction between objectivity and subjectivity, and it is found by acknowledging that truth often looks objective because our subjective experiences agree even as that agreement remains fundamentally uncertain.
Arguing from Uncertainty
One way we sort out what's true is by debating claims. In such debates—whether they be formal, like the ones students and politicians engage in, or informal, such as those that happen at dinner parties and online—participants state beliefs and give arguments in their favor. Then, others analyze those arguments and attempt to refute them if they disagree. In the ideal case, the participants go back and forth, iteratively putting their claims to the test, until everyone agrees on what is true and what is not.
Unfortunately, debates are rarely ideal. They frequently get off track when someone uses motivated reasoning, personal attacks, or logical fallacies because they prioritize winning over finding truth. Malicious debate techniques are well understood and are the subject of most guidance for how to have successful debates. But there's another, less understood, and often ignored way in which debates can miss their mark: debates can be derailed by epistemological disputes.
We get ourselves into epistemological disputes because debates sometimes hinge on how claims are known. We'd like to be certain that the justifications given for claims are true, but we can't be because knowledge is fundamentally uncertain. Realizing this, we may be drawn to explore fundamental uncertainty in a debate to answer epistemological questions, but doing so is only helpful if done skillfully. If we are insufficiently judicious in deciding when fundamental uncertainty will improve rather than detract from a discussion, we'll find ourselves hopelessly sidetracked, lost in fruitless tangents like the ones that befell the online argument that opened Chapter 1.
One of the easiest mistakes to make when a debate turns towards epistemology is trying to wield fundamental uncertainty as a weapon. Suppose you are in a debate and think your opponent is making unjustified assumptions. You might be tempted to use the Problem of the Criterion to prove them wrong. Don't! If you make the argument that they can't know their assumptions are true because no one can know the criterion of truth, they will rightly point out that you are just as guilty of holding unjustified assumptions as they are because you don't know the criterion of truth either. Bringing up the Problem of the Criterion only helps when arguing against a specific assumption, like logical positivism, that it directly disproves. Otherwise, it's best to leave it alone.
What works better is to engage with a debater's assumptions directly. If a person's arguments depend on an assumption you disagree with, get curious about why they think it's true. Try to understand why they believe what they believe. In the ideal case, you'll be able to grasp their justification for an assumption so well that you could state it back to them and they'd agree with you. That way, if you spot an error, you'll have a thorough understanding of their thinking and know how to convince them of their mistake. And if you can't find any errors, you may find that you were the one whose assumptions were mistaken.
Another easy mistake to make is letting yourself get caught up in disagreements over definitions. These happen because, as we discussed in Chapter 2, the meanings of most words are ostensive, based on examples of how they are used. And since no two people have exactly the same set of examples in their heads when they think about what a word means, exact word meanings often differ slightly from one person to the next.
One way to agree on definitions is by creating jargon using intensional definitions. This is how professionals like mathematicians, scientists, engineers, doctors, and lawyers manage to communicate clearly with each other: by using specialized technical language that explicitly ignores implied meanings. Unfortunately, as you may recall from Chapter 8, arguments about definitions are sometimes proxy fights over values, like whether transgender people are really "men" or "women", and in such cases, jargon is unlikely to help. That's because jargon makes communication more precise, but that precision is only helpful to the extent people agree about the world they are trying to describe. When disagreements are large, as is often the case with values, jargon does more to obscure than clarify.
What works better is to taboo (ban) contentious words and explain what is meant in more detail. The idea comes by analogy to the party game Taboo. If you've never played, it's like charades, but instead of mime, you use words. For example, if on my turn I drew the word "ball", I might say "round thing that bounces" to get my partner to guess it, and I wouldn't be allowed to say "ball" itself or "orb" or "sphere" or any other synonyms. Applied to debates, tabooing words forces people to explain their claims instead of relying on words with ambiguous meanings like "consciousness", "intelligence", "healthy", and "fair". Tabooing words won't always make it clear what someone means, but it will at least remove confusion stemming from the use of vague terms.
Tabooing words can also make clear when a disagreement is actually about values, not definitions. Unfortunately, knowing a disagreement is about values is not sufficient to resolve it. That's because the way we know the world is contingent on the things we care about, as we explored in Chapter 7, and the things we care about are deeply rooted and difficult to change, like the moral foundations we discussed in Chapter 3. Steadfast differences in values prevent debaters from finding the common ground needed to make arguments that can convince each other. Thus debates that turn out to be debates about values often end by agreeing to disagree.
Agreeing to disagree may seem like a disappointing debate outcome, especially if you were hoping to resolve a matter of fact, but it's a better end than most debates get. More often, debates end in disagreement simply because the debaters lacked the rhetorical skill to give convincing arguments for their actually true claims, or because they couldn't locate the established evidence that would have proved their valid points. But there's one more reason debates go unresolved, and it's perhaps the most common and the least noble one. When faced with the choice between winning a debate with a fallacious argument or changing one's mind to believe what's true, most people will choose winning over truth.
Why would anyone do this, especially if they profess a commitment to truth? Because no matter how much someone loves the truth, it hurts them more to lose. Admitting to being wrong can bring up feelings of shame, embarrassment, and even anger for having believed something false. Rather than suffer these feelings, we may employ coping strategies to avoid changing our minds, like doubling down on bad arguments, ignoring inconvenient evidence, or convincing ourselves that we're smarter than everyone else. Unfortunately, these strategies make our models of the world worse, not better, and despite what we tell ourselves, they also make us look more foolish than if we simply owned up to our past errors in reckoning.
To the extent I have become a more humble debater who is more willing to change his mind, it is in large part thanks to Eugene Gendlin's book Focusing. In it, he describes a simple process for connecting with the felt sense of our emotions. The process involves paying close attention to the bodily sensations that arise in a situation like losing an argument, putting a name on those literal feelings, and then engaging with those feelings, even if they are painful or uncomfortable, rather than ignoring them. To use myself as an example, in the past I would suppress my "irrational" and "unhelpful" negative feelings when I was proven wrong, but now, with the help of Focusing, if they come up I can let them move through me, not hiding from them, but also not clinging to them or letting them control me.
You may be surprised that emotions matter when debating the truth, but emotions are epistemologically important in two ways. First, our feelings are part of the world, and thus if we wish to have an accurate model of the world, that includes knowing the truth about how we feel. Second, knowing is done by us, and we're fallible, feeling beings whether we like it or not. To leave feelings out of our attempts to know the truth is to fail to consider the whole of the way in which we know. As Gendlin explains it:
But enduring the truth is rarely easy. So if you find yourself in a debate where you or anyone else is turning away from truth because it's too hard to face directly, don't berate yourself or them for having emotions. Instead, simply pause the debate. Step away, go do something else, and come back later, perhaps after a few days, to pick up the discussion again. The truth will still be there, waiting to be known.
Waking Up from the Dream
With this book, I've done my best to convince you that truth is fundamentally uncertain. If, when you started reading, you believed that truth was fixed, immutable, and objective, I hope you've come to realize that it's contingent, evolving, and intersubjective. And, assuming you've just had this realization, I'm willing to bet you're also feeling a little weird, like the ground you've been standing on all your life has fallen out from under you, or like you're suffering from a case of existential vertigo. And if you are feeling weird, there's a question that's likely on your mind. A question that you might be trying to hide from, but that you desperately need an answer to…
How do we live in a world where truth is uncertain and nothing is sure?
It may surprise you to know that you've been searching for the answer your whole life. You began that search the moment you were born. You didn't know the what or why of anything. Every experience was new, and you faced constant uncertainty about what would happen next. But with each passing day, you learned a little more, and gradually you got to know the world you lived in.
Yet as quickly as your knowledge expanded, the gaps in your knowledge expanded faster still. You'd often find yourself saying "I don't know" and feel like you were staring out into a dark void, unable to see what lay inside it. But over time, you began to see the faint outlines of what could be known, and eventually those outlines came into focus. Sure, you still didn't know everything, but at least you had a sense of where you would meet the unknown.
Your growing familiarity with the unknown also gave you an implicit choice in how to relate to it. One option was to humbly accept your limitations and learn to live with perpetual uncertainty. The other was to defy your seeming limitations and endeavor to know it all. You may not remember making an explicit choice—most people decide wordlessly before their first memories form—but a choice you made. If you're not sure how you decided, allow me to state the obvious: by virtue of being the sort of person who would choose to read a book about epistemology, you almost certainly chose defiance.
And there's great value in choosing defiance! It's defiance of the unknown that has led people to expand our understanding of the world with science, to make our lives more comfortable with machines, and to coordinate with each other to make the world a better place to live. But no matter how hard we try or how pure our intentions are, no amount of defiance can overcome the hard limits of fundamental uncertainty. And when we finally come face-to-face with these limits, no matter how much we resist, we have no choice but to be humbled by truth's ultimate uncertainty.
But humility doesn't come easily when you've lived a life of defiance. The desire to fight reality to extract every last ounce of truth from its dark corners doesn't subside overnight, nor should it. The relative truth is extremely valuable, and we need every bit of it we can get if we want to understand the world well enough to make it better. Still, if we value truth, we must also value the truth of knowledge's limitations. If we are to remain honest with ourselves, we must accept when we have come to the end of our ability to know.
For a long time, I didn't want to accept any epistemic limits because I was deeply convinced of my ability to defy the unknown. Reason told me that certainty about the truth was impossible, but I couldn't shake my intuition that everything should be knowable. I spent long hours studying mathematics, the history of science, psychology, and, of course, philosophy in the hopes of either proving fundamental uncertainty wrong or finding a way to live with it. After all that, I was more convinced than ever of truth's uncertain nature. Thankfully I also learned how to live with it, though not in a way I expected.
Back when I was first grappling with fundamental uncertainty—before I even knew to call it that—I was part of an informal philosophical circle. My friends there introduced me to many books that have been important to my thinking, like Keith Johnstone's Impro, George Lakoff and Mark Johnson's Metaphors We Live By, and many more that I referenced in previous chapters. But the most important of these books were those of Robert Kegan, an adult development psychologist who posits that adult minds continue to change and grow in substantial ways after our bodies reach physical maturity. I eagerly read his first two books, The Evolving Self and In Over Our Heads, and in them I finally found a way towards reconciling my intuitions about truth with the inescapable logic of fundamental uncertainty.
Kegan argues that adulthood is not the final stage of mental development, but the start of an ongoing process of waking up from self-created dreams. That is, our minds create stories to help us understand the world, and then almost as soon as we create those stories, we confuse them for reality. We become trapped in the confines of tightly scripted thoughts, and the great challenge of adulthood is to remember that these scripts aren't the whole world. Instead, they are a tiny part of it we put there to help us live our lives, and we can replace them with better scripts or even throw them out when they are no longer useful.
Most importantly, Kegan helped me realize that I was trapped in a dream of my own creation, and that if I wanted to learn to live with fundamental uncertainty, I was going to have to wake up. Unfortunately, Kegan only provided the theory. To actually wake up, I would need the help of others.
During my long hours reading philosophy, I found a few people who seemed to me to understand that knowledge is fundamentally uncertain and that reconciling defiance of uncertainty with humility towards it is hard. Starting from the present and working my way backwards through time, I traced a lineage of thought that led me from Sartre to Heidegger to Husserl to Schopenhauer to Nagarjuna to Pyrrho to, of all people, the Buddha, Siddhartha Gautama. I didn't know much about Buddhism, but it was a lead, and I hoped that from the Buddhists I could glean some practical suggestions for how to live with fundamental uncertainty.
As I studied Buddhism, I became increasingly convinced that the Buddha had understood fundamental uncertainty, and that it was at the core of his teachings, even if it was often presented in ways unfamiliar to me. But I also found a lot of confusing advice that was hard to make sense of, like "sit and meditate on emptiness", so I kept looking, trying to find anything else that might help me.
All I found was Buddhism. I flirted with a few other systems of practice that seemed promising, like Stoicism and Daoism, but in each case I found that they either lacked vibrant communities I could learn from or that the things that made them appealing were the ideas they had borrowed from Buddhism. So after putting aside some major reservations about getting involved with religion, I finally gave in and started practicing Zen.
As it turned out, Zen was the right thing for me. It gave me simple practices that helped me learn to live with the world as it is; it didn't demand that I adopt any supernatural beliefs; and it gave me the support of a community of people trying to do the same thing I was. I think it's very likely that, if I'd never set foot in a zendo, I never would have gotten my life together enough to write this book.
So should you run out and join the nearest Zen sangha or other Buddhist congregation? Maybe, but I don't know your life. Zen was what I needed, but you might need something else, or nothing at all. What I can tell you is that, if you're confused about how to live with fundamental uncertainty, it's possible to find a way through that confusion. You may have to look in unlikely places for help, but help is out there. And you don't need to abandon your love of reason! You need only be open to finding that which most helps you live with fundamental uncertainty.
Afterword
Thank you for reading this book. I hope you found it illuminating.
Writing this book took considerable effort. I'm not naturally gifted at explaining complicated ideas, and despite many iterations and much effort, I regret that I'll have left some readers more confused than when they started. I didn't explain many concepts in detail, and have relied on you to do your own reading to fill in the gaps. Thank you for putting in that effort, because without it I would have never been able to finish the book.
I also hope you continue to find yourself confused. Noticing confusion is essential. Confusion tells you where to start. You may not be able to eliminate confusion, but if you keep coming back to it, you may come to know it, as I have, as a constant companion on your quest to understand the world we live in.
So keep looking into the unknown. In it you'll find understanding and trust that the world is just as it is.