There was a recent discussion on Facebook that led to an ask for a description of postrationality that isn't framed in terms of how it's different from rationality (or perhaps, more accurately, a challenge that such a thing could not be provided). I'm extra busy right now until at least the end of the year, so I don't have a lot of time for philosophy and AI safety work, but I'd like to respond with at least an outline of a constructive description of post/meta-rationality. I'm not sure everyone who identifies as part of the metarationality movement would agree with my construction, but this is what I see as the core of our stance.
Fundamentally, I think the core belief of metarationality is that epistemic circularity (a.k.a. the problem of the criterion, the problem of perception, the problem of finding the universal prior) necessitates metaphysical speculation, viz. we can't reliably say anything about the world and must instead make one or more guesses, at minimum to establish a criterion for assessing truth. Further, since the criterion for knowing what is true is itself unreliably known, we must be choosing that criterion on some basis other than truth, and so we instead view that prior criterion as coming from usefulness to some purpose we have.
None of this is radical; it's in fact all fairly standard philosophy. What makes metarationality what it is comes from the deep integration of this insight into our worldview. Rather than truth or some other criterion, telos (usefulness, purpose) is the highest value we can serve, not by choice, but by the trap of living inside the world and trying to understand it from experience that is necessarily tainted by it. The rest of our worldview falls out of updating our maps to reflect this core belief.
To say a little more on this: when you realize the primacy of telos in how you make judgments about the world, you see that you have no reason to privilege any particular assessment criterion except insofar as it is useful to serving a purpose. Thus, for example, rationality is often important to the purpose of predicting and understanding the world because we, through experience, come to know it to be correlated with making predictions that later come to pass, but other criteria, like compellingness-of-story and willingness-to-life, may be better drivers of creating the world we would like to later find ourselves in. For what it's worth, I think this is the fundamental disagreement with rationality: we say you can't privilege truth, and since you can't, it sometimes works out better to focus on other criteria when making sense of the world.
So that's the constructive part; why do we tend to talk so much about postrationality by contrasting it with rationality? I think there are two reasons. First, postrationality is etiologically tied to rationality: the ideas come from people who first went deep on rationality and eventually saw what they felt were limitations of that worldview, so we naturally tend to think in terms of how we came to the postrationalist worldview and want to show others how we got here from there. Second, and relatedly, metarationality is a worldview that comes from a change in a person that many of us choose to identify with Kegan's model of psychological development, specifically the 4-to-5 transition, so we think it's mainly worthwhile to explain our ideas to folks we'd say are in the 4/rationalist stage of development, because they are the ones who can transition directly to 5/metarationality without needing to go through any other stages first.
Feel free to ask questions for clarification in the comments; I have limited energy available for addressing them but I will try my best to meet your inquiries. Also, sorry for no links; I wouldn't have written this if I had to add all the links, so you'll have to do your own googling or ask for clarification if you want to know more about something, but know that basically every weird turn of phrase above is an invitation to learn more.
I find this view unconvincing, and here’s why.
We can, it seems to me, divide what you say in the linked comment into two parts or aspects.
On the one hand, we have “the predictive processing thing”, as you put it. Well, it's a lot of interesting speculation, and potentially a useful perspective on some things. So far, at least, that's all it is. Using it as any kind of basis for constructing a general epistemology is just about the dictionary definition of “premature”.
On the other hand, we have familiar scenarios like “I will go to the beach this evening”. These are quite commonplace and not at all speculative, so we certainly have to grapple with them.
At first blush, such a scenario seems like a challenge to the “truth as a basis for beliefs” view. Will I go to the beach this evening? Well, as you say—if I believe that I will, then I will, and if I don’t, then I won’t… how can I form an accurate belief, if its truth value is determined by whether I hold it?!
… is what someone might think, on a casual reading of your comment. But that’s not quite what you said, is it? Here’s the relevant bit:
[emphasis mine]
This seems significant, and yet:
“What is the truth value of the belief ‘snow is white’? Well, if snow is white, then it is true; if snow is not white, it’s false.”
What is the difference between this, and what you said? Is it merely the fact that “I will go to the beach this evening” is about the future, whereas “snow is white” is about the present? Are we saying that the problem is simply that the truth value of “I will go to the beach this evening” is as yet undetermined? Well, perhaps true enough, but then consider this:
“What is the truth value of the belief ‘it will rain this evening’? Well, if it rains this evening, then it is true; if it doesn’t rain this evening, it’s false.”
So this is about the future, and—like the belief about going to the beach—is, in some sense, “underdetermined by external reality” (at least, to the extent that the universe is subjectively non-deterministic). Of course, whether it rains this evening isn’t determined by your actions, but what difference does that make? Is the problem one of underdetermination, or agent-dependency? These are not the same problem!
Let’s return to my first example—“snow is white”—for a moment. Suppose that I hail from a tropical country, and have never seen snow (and have had no access to television, the internet, etc.). Is snow white? I have no idea. Now imagine that I am on a plane, which is taking me from my tropical homeland to, say, Murmansk, Russia. Once again, suppose I say:
“What is the truth value of the belief ‘snow is white’? Well, if snow is white, then it is true; if snow is not white, it’s false.”
For me (in this hypothetical scenario), there is no difference between this statement, and the one about it raining this evening. In both cases, there is some claim about reality. In both cases, I lack sufficient information to either accept the claim as true or reject it as false. In both cases, I expect that in just a few hours, I will acquire the relevant information (in the former case, my plane will touch down, and I will see snow for the first time, and observe it to be white, or not white; in the latter case, evening will come, and I will observe it raining, or not raining). And—in both cases—the truth of each respective belief will then come to be determined by external reality.
So the mere fact of some beliefs being “about the future” hardly justifies abandoning truth as a singular criterion for belief. As I’ve shown, there is little material difference between a belief that’s “about the future” and one that’s “about a part of the present concerning which we have insufficient information”. (And, by the way, we have perfectly familiar conceptual tools for dealing with such cases: subjective probability. What is the truth value of the belief “it will rain this evening”? But why have such beliefs? On Less Wrong, of all places, surely we know that it’s more proper to have beliefs that are more like “P(it will rain) = 0.25, P(it won’t rain) = 0.75”?)
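To make that probabilistic framing concrete, here is a minimal worked example of how such a belief gets revised as information arrives. The numbers (the prior, the noon observation of dark clouds, and the likelihoods) are illustrative assumptions of mine, not anything from the comment I'm replying to. Start with P(rain) = 0.25; suppose dark clouds appear at noon, with P(clouds | rain) = 0.8 and P(clouds | no rain) = 0.2. Bayes' rule then gives:

$$P(\text{rain} \mid \text{clouds}) = \frac{P(\text{clouds} \mid \text{rain})\, P(\text{rain})}{P(\text{clouds} \mid \text{rain})\, P(\text{rain}) + P(\text{clouds} \mid \lnot\text{rain})\, P(\lnot\text{rain})} = \frac{0.8 \times 0.25}{0.8 \times 0.25 + 0.2 \times 0.75} \approx 0.57$$

By evening, observing rain or its absence drives this probability to 1 or 0. Nothing in this requires abandoning truth as the criterion; the uncertainty lives in the probability assignment, not in the notion of truth itself.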
So let’s set the underdetermination point aside. Might the question of agent-dependency trouble us more, and give us reason to question the solidity of truth as a basis for belief? Is there something significant to the fact that the truth value of the belief “I will go to the beach this evening” depends on my actions?
There is at least one (perhaps trivial) sense in which the answer is a firm “no”. So what if my actions determine whether this particular belief is true? My actions are part of reality, just like snow, just like rain. What makes them special?
Well—you might say—what makes my actions special is that they depend on my decisions, which depend (somehow) on my beliefs. If I come to believe that I will go to the beach, then this either is identical to, or unavoidably causes, my deciding to go to the beach; and deciding to go to the beach causes me to take the action of going to the beach. Thus my belief determines its own truth! Obviously it can’t be determined by its truth, in that case—that would be hopelessly circular!
Of course any philosopher worth his salt will find much to quarrel with, in that highly questionable account of decision-making. For example, “beliefs are prior to decisions” is necessary in order for there to be any circularity, and yet it is, at best, a supremely dubious axiom. Note that reversing that priority makes the circularity go away, leaving us with a naturalistic account of agent-dependent beliefs; free-will concerns remain, but those are not epistemological in nature.
And even free-will concerns evaporate if we adopt the perspective that decisions are not about changing the world, they are about learning what world you live in. If we take this view, then we are simply done: we have brought “I will go to the beach this evening” in line with “it will rain this evening”, which we have already seen to be no different from “snow is white”. All are simply beliefs about reality. As you gain more information about reality, each of these beliefs might be revealed to be true, or not true.
Very well, but suppose an account (like shminux’s) that leaves no room at all for decision-making is too radical for us to stomach. Suppose we reject it. Is there, then, something special about agent-dependent beliefs?
Let us consider again the belief that “I will go to the beach this evening”. Suppose I come to hold this belief (which, depending on which parts of the above logic we find convincing, either brings about, or is the result of, my decision to go to the beach this evening). But suppose that this afternoon, a tsunami washes away all the sand, and the beach is closed. Now my earlier belief has turned out to be false—through no actions or decisions on my part!
“Nitpicking!”, you say. Of course unforeseen situations might change my plans. Anyway, what you really meant was something like “I will attempt to go to the beach this evening”. Surely, an agent’s attempt to take some action can fail; there is nothing significant about that!
But suppose that this afternoon, I come down with a cold. I no longer have any interest in beachgoing. Once again, my earlier belief has turned out to be false.
More nitpicking! What you really meant was “I will intend to go to the beach this evening, unless, of course, something happens that causes me to alter my plans.”
But suppose that evening comes, and I find that I just don’t feel like going to the beach, and I don’t. Nothing has happened to cause me to alter my plans, I just… don’t feel like it.
Bah! What you really meant was “I intend to go to the beach, and I will still intend it this evening, unless of course I don’t, for some reason, because surely I’m allowed to change my mind?”
But suppose that evening comes, and I find that not only do I not feel like going to the beach, I never really wanted to go to the beach in the first place. I thought I did, but now I realize I didn’t.
In summary:
There is nothing special about agent-dependent beliefs. They can turn out to be true. They can turn out to be false. That is all.
Conflating beliefs with intentions, decisions, or actions is a mistake as unfortunate as it is elementary.
And forgetting about probability is, probably, most unfortunate of all.
I agree that whether or not the belief is about something that happens in the future is irrelevant (at least if we're talking about physics-time; ScottG's original post specifically said it was about logical time). I think that I also agree that shminux's view is a consistent way of looking at this. But as you say, if you did adopt that view, then we can't really talk about how to make decisions in the first place, and it would be nice if we could. (Hmm, are we rejecting a true view because it's not useful, in favor of tryin...