There was a recent discussion on Facebook that led to a request for a description of postrationality that isn't framed in terms of how it differs from rationality (or, perhaps more accurately, a challenge that no such description could be provided). I'm extra busy until at least the end of the year, so I don't have a lot of time for philosophy and AI safety work, but I'd like to respond with at least an outline of a constructive description of post/meta-rationality. I'm not sure everyone who identifies as part of the metarationality movement would agree with my construction, but this is what I see as the core of our stance.
Fundamentally, I think the core belief of metarationality is that epistemic circularity (a.k.a. the problem of the criterion, the problem of perception, the problem of finding the universal prior) necessitates metaphysical speculation, viz. we can't reliably say anything about the world and must instead make one or more guesses, at minimum to establish the criterion for assessing truth. Further, since the criterion for knowing what is true cannot itself be reliably known, we must be choosing that criterion on some basis other than truth, and so we instead view that prior criterion as coming from usefulness to some purpose we have.
None of this is radical; it's in fact all fairly standard philosophy. What makes metarationality what it is comes from the deep integration of this insight into our worldview. Rather than truth or some other criterion, telos (usefulness, purpose) is the highest value we can serve, not by choice, but by the trap of living inside the world and trying to understand it from experience that is necessarily tainted by it. The rest of our worldview falls out of updating our maps to reflect this core belief.
To say a little more on this: when you realize the primacy of telos in how you make judgments about the world, you see that you have no reason to privilege any particular assessment criterion except insofar as it is useful to serve a purpose. Thus, for example, rationality is often valuable for the purpose of predicting and understanding the world because, through experience, we come to know it to be correlated with making predictions that later come true; but other criteria, like compellingness-of-story and willingness-to-life, may be better drivers for creating the world we would like to later find ourselves in. For what it's worth, I think this is the fundamental disagreement with rationality: we say you can't privilege truth, and since you can't, it sometimes works out better to focus on other criteria when making sense of the world.
So that's the constructive part; why, then, do we tend to talk so much about postrationality by contrasting it with rationality? I think there are two reasons. First, postrationality is etiologically tied to rationality: the ideas come from people who first went deep on rationality and eventually saw what they felt were limitations of that worldview, so we naturally tend to think in terms of how we came to the postrationalist worldview and want to show others how we got here from there. Second, and relatedly, metarationality is a worldview that comes from a change in a person, a change many of us choose to identify with Kegan's model of psychological development, specifically the 4-to-5 transition; thus we think it's mainly worthwhile to explain our ideas to folks we'd say are at the 4/rationalist stage of development, because they are the ones who can transition directly to 5/metarationality without needing to go through any other stages first.
Feel free to ask questions for clarification in the comments; I have limited energy available for addressing them, but I will try my best to meet your inquiries. Also, sorry for the lack of links; I wouldn't have written this if I'd had to add them all, so you'll have to do your own googling or ask for clarification if you want to know more about something, but know that basically every weird turn of phrase above is an invitation to learn more.
I don't know what "directly" means, but there certainly is a causal pathway, and we can certainly evaluate whether our brains are tracking reality. Just make a prediction, then go outside and look with your eyes to see if it comes true.
So much the worse for schizophrenics. And so?
I have a hard time believing that this sort of clever reasoning will lead to anything other than making your beliefs less accurate and merely increasing the number of non-truth-based beliefs above 20%.
The only sensible response to the problem of induction is to do our best to track the truth anyway. Everybody who comes up with some clever reason to avoid doing this thinks they've found some magical shortcut, some powerful yet-undiscovered tool (dangerous in the wrong hands, of course, but a rational person can surely use it safely...). Then they cut themselves on it.
Suppose that I do a rain-making dance in my backyard, and predict that as a consequence of this, it will rain tomorrow. Turns out that it really does rain the next day. Now I argue that I have magical rain-making powers.
Somebody else objects, "of course you don't, it just happened to rain by coinci...
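To make the coincidence objection concrete, here is a minimal Bayesian back-of-the-envelope sketch (my own illustration, not from the thread), with every number assumed purely for the sake of the example: even a confirmed prediction of rain barely moves the posterior on rain-making powers when rain is already fairly common.

```python
# Hypothetical Bayes update for the rain-dance scenario.
# All numbers are illustrative assumptions, not claims from the discussion.

def posterior_powers(prior_powers, p_rain_given_powers, p_rain_base_rate):
    """P(powers | it rained), via Bayes' rule over the two hypotheses."""
    p_rain = (prior_powers * p_rain_given_powers
              + (1 - prior_powers) * p_rain_base_rate)
    return prior_powers * p_rain_given_powers / p_rain

# Assumptions: a very small prior on magical powers, rain guaranteed if the
# powers are real, and a 30% chance of rain on any given day regardless.
print(posterior_powers(prior_powers=1e-6,
                       p_rain_given_powers=1.0,
                       p_rain_base_rate=0.3))
# ~3.3e-6: the belief rises a little, but one rainy day is weak evidence,
# because the mundane "coincidence" hypothesis also predicts rain fairly often.
```

The only point of the sketch is that a single confirmed prediction does little to distinguish the exotic hypothesis from a mundane alternative that also predicts the same observation reasonably often.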