There was a recent discussion on Facebook that led to a request for a description of postrationality that isn't framed in terms of how it differs from rationality (or rather, perhaps, a challenge that such a description could not be provided). I'm extra busy until at least the end of the year, so I don't have much time for philosophy and AI safety work, but I'd like to respond with at least an outline of a constructive description of post/meta-rationality. I'm not sure everyone who identifies as part of the metarationality movement would agree with my construction, but this is what I see as the core of our stance.
Fundamentally, I think the core belief of metarationality is that epistemic circularity (a.k.a. the problem of the criterion, the problem of perception, the problem of finding the universal prior) necessitates metaphysical speculation, viz. we can't reliably say anything about the world and must instead make one or more guesses, at minimum to establish the criterion for assessing truth. Further, since the criterion for knowing what is true is itself unreliably known, we must be choosing that criterion on some basis other than truth, and so we instead view that prior criterion as coming from usefulness to some purpose we have.
None of this is radical; in fact it's all fairly standard philosophy. What makes metarationality what it is comes from the deep integration of this insight into our worldview. Rather than truth or some other criterion, telos (usefulness, purpose) is the highest value we can serve, not by choice, but by the trap of living inside the world and trying to understand it from experience that is necessarily tainted by it. The rest of our worldview falls out of updating our maps to reflect this core belief.
To say a little more on this: when you realize the primacy of telos in how you make judgments about the world, you see that you have no reason to privilege any particular assessment criterion except insofar as it is useful for serving a purpose. Thus, for example, rationality is often important to the purpose of predicting and understanding the world because we, through experience, come to know it to be correlated with making predictions that later come true, but other criteria, like compellingness-of-story and willingness-to-life, may be better drivers in terms of creating the world we would like to later find ourselves in. For what it's worth, I think this is the fundamental disagreement with rationality: we say you can't privilege truth, and since you can't, it sometimes works out better to focus on other criteria when making sense of the world.
So that's the constructive part; why do we tend to talk so much about postrationality by contrasting it with rationality? I think there are two reasons. First, postrationality is etiologically tied to rationality: the ideas come from people who first went deep on rationality and eventually saw what they felt were limitations of that worldview, so we naturally tend to think in terms of how we came to the postrationalist worldview and want to show others how we got here from there. Second, and relatedly, metarationality is a worldview that comes from a change in a person that many of us choose to identify with Kegan's model of psychological development, specifically the 4-to-5 transition. We therefore think it's mainly worthwhile to explain our ideas to folks we'd say are at the 4/rationalist stage of development, because they are the ones who can transition directly to 5/metarationality without needing to go through any other stages first.
Feel free to ask questions for clarification in the comments; I have limited energy available for addressing them, but I'll do my best. Also, sorry for the lack of links; I wouldn't have written this if I'd had to add them all, so you'll have to do your own googling or ask for clarification if you want to know more about something. But know that basically every weird turn of phrase above is an invitation to learn more.
Suppose that I do a rain-making dance in my backyard, and predict that as a consequence of this, it will rain tomorrow. Turns out that it really does rain the next day. Now I argue that I have magical rain-making powers.
Somebody else objects, "of course you don't, it just happened to rain by coincidence! You need to repeat that experiment!"
So I repeat the rain-making dance on ten separate occasions, and seven times out of ten, it does happen to rain anyway.
The skeptic says, "ha, your rain-making dance didn't work after all!" I respond, "ah, but it did work seven times out of ten; medicine can't be shown to reliably work every time either, but my magic dance does work statistically significantly often."
The skeptic answers, "you can't establish statistical significance without something to compare to! This happens to be rainy season, so it would rain on seven out of ten days anyway!"
I respond, "ah, but notice how it is the custom for people in my tribe do the rain-making dance every day during rainy season, and to not do it during dry season; it is our dance that causes the rainy season."
The skeptic facepalms. "Your people have developed a tradition to dance during rainy season, but it's the rain that has caused your dance, not the other way around!"
... and then we go on debating forever.
My point here is that just looking at raw observations is insufficient to judge any nontrivial model. We are always evaluating our observations in light of an existing model; it is the observation + model that says whether something is true, not the observation itself. I dance and it rains, and my model says that dancing causes rain: my predicted observation came true, so I consider my model validated. The skeptic's model says that dancing does not cause rain but that it rains all the time during the rainy season anyway, so he considers his own model just as confirmed by the observation.
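To make this concrete, here's a minimal sketch in Python of how the same observation can fit both models at once. The per-day rain probabilities (0.75 for the dance model, 0.70 for the rainy-season model) are made-up numbers for illustration, not anything from the story:

```python
from math import comb

def binom_pmf(k: int, n: int, p: float) -> float:
    """Probability of exactly k successes in n independent trials."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

# Observation: the dance was performed on 10 days, and it rained on 7 of them.
rainy_days, dances = 7, 10

# Two models, each assigning a (made-up) probability of rain per dance-day:
p_dance = 0.75   # "the dance causes rain": rain is likely after a dance
p_season = 0.70  # "it's just the rainy season": rain is likely regardless

print(f"P(7/10 | dance model)  = {binom_pmf(rainy_days, dances, p_dance):.3f}")   # ~0.250
print(f"P(7/10 | season model) = {binom_pmf(rainy_days, dances, p_season):.3f}")  # ~0.267
```

Both models assign nearly the same probability to the observed data (a likelihood ratio of about 0.94), so seven rainy days barely discriminate between them. To tell the models apart you would need an observation they disagree about, such as what happens when I dance during the dry season.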
You can, of course, use observations to evaluate models. But to do that, you need to use a meta-model. When I say that we don't have direct access to the truth, this is what I mean: you, me, and the schizophrenic all tend to think that we are correctly drawing the right conclusions from our observations, but at least one of us is actually running seriously flawed models and meta-models, and may never know it, being trapped in evaluating all of their models through seriously flawed meta-models.
As clone of saturn notes, the deepest meta-model of them all is the one that runs below the level of conscious decisions: the set of low-level processes which decides what actions we take and what thoughts we think. This is a reinforcement learning system which responds to rewards: if particular thoughts or assumptions (such as the assumption that a rain-making dance actually produces rain, or the suggestion that statistical significance is an important factor to consider when evaluating predictions) have led to actions which brought the organism rewards (internally or externally generated), then those kinds of thoughts and assumptions will be reinforced.
In other words, we end up having the kinds of beliefs that seem useful, as evaluated by whether they succeed in giving us rewards. Epistemic and instrumental rationality were the same all along. (I previously discussed this in more detail in my posts What are concepts for and World-models as tools.)
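As a toy illustration of that dynamic (not a model of actual neural learning; the reward values are invented, standing in for things like social approval), here is a sketch in which whichever assumption happens to get rewarded is the one that gets thought more often, with truth never consulted:

```python
import random

random.seed(0)

# Reinforcement weights over two candidate assumptions; note that we track
# how rewarded each assumption has been, not whether it is true.
weights = {"dance causes rain": 1.0, "rain is seasonal": 1.0}

# Hypothetical reward signal: suppose the tribe rewards the first framing
# (status, approval) far more than the second, regardless of the facts.
reward = {"dance causes rain": 1.0, "rain is seasonal": 0.1}

LEARNING_RATE = 0.1

for _ in range(1000):
    # Sample a thought in proportion to current reinforcement weights.
    thought = random.choices(list(weights), weights=list(weights.values()))[0]
    # Thoughts that brought reward get reinforced, i.e. thought more often.
    weights[thought] += LEARNING_RATE * reward[thought]

print(weights)  # the rewarded framing ends up dominating the organism's thinking
```

The point of the sketch is just the update rule: nothing in the loop checks whether an assumption's predictions were accurate, only whether entertaining it paid off.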
Well:
You talk about "clever reasoning" that "makes your beliefs less accurate", but as these examples should hopefully demonstrate, at any given time there are an infinite number of more-or-less true ways of looking at a situation. And when we need to choose between several ways of framing the situation which are equally true, we always end up choosing one or another based on its usefulness. If we didn't, it would be impossible to function, since there'd be no criterion for choosing between them. (And sometimes we go with the approximation that's less strictly true, if it's good enough for the situation; that is, if it's more useful to go with it.) That's the 20%.