It seems to me that the whole circularity issue was answered by Eliezer in Where Recursive Justification Hits Bottom. What's your disagreement with that post?
Further, since the criterion for knowing what is true is unreliably known, we must be choosing that criterion on some other basis than truth, and so instead view that prior criterion as coming from usefulness to some purpose we have.
Doesn't this have the standard issue with philosophical pragmatism, i.e. that knowing what is useful requires knowing about reality? (In other words, reducing questions of truth to questions of usefulness reduces one question to a different one that is no easier)
Certainly, ontologies must be selected partially based on criteria other than correspondence with reality (such as analytic tractability), but for these ontologies to be useful in modeling reality, they must be selected based on a pre-ontological epistemology, not only a pre-ontological telos.
I guess what I am getting at is: Kierkegaard's pre-ontology doesn't selectively choose an ontology that has high correspondence with reality, so he has a weak pre-ontological epistemology. It is possible to have a better pre-ontological epistemology than Kierkegaard's. Meditation probably helps, as do the principles discussed in this post on problem formulation. (To the extent that I take pre-ontology/meta-ontology seriously, I guess I might be a postrationalist according to some definitions.)
A specific example of a pre-ontological epistemology is a "guess-and-check-and-refine" procedure, where you get acquainted with the phenomenon of interest, come up with some different ontologies for it, check these ontologies based on factors like correspondence with (your experience of) the phenomenon and internal coherence, and refine them when they have problems and it's possible to improve them. This has some similarities to Solomonoff induction, though obviously there are important differences. Even in the absence of perfect knowledge of anything and without resolving philosophical skepticism, this procedure selectively chooses ontologies that have higher correspondence with reality.
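To make the procedure concrete, here is a minimal toy sketch (my own illustration, not anything from the comment itself): the "ontologies" are polynomial models of some noisy observations, the check combines correspondence with the data and a simplicity penalty (the loose, Solomonoff-flavored ingredient), and refinement is local perturbation kept only when it helps.

```python
import random

random.seed(0)

# The "phenomenon of interest": noisy observations of y = 1 + 2x.
xs = [i / 10 for i in range(20)]
ys = [1.0 + 2.0 * x + random.gauss(0, 0.1) for x in xs]

def predict(coeffs, x):
    # A candidate "ontology" here is just a polynomial: sum of coeffs[k] * x**k.
    return sum(c * x**k for k, c in enumerate(coeffs))

def score(coeffs):
    # Check: correspondence with the observed phenomenon, minus a
    # simplicity penalty on the number of parameters.
    fit = -sum((predict(coeffs, x) - y) ** 2 for x, y in zip(xs, ys))
    return fit - 0.5 * len(coeffs)

def refine(coeffs):
    # Refine: perturb one coefficient; the caller keeps it only if it helps.
    out = list(coeffs)
    out[random.randrange(len(out))] += random.gauss(0, 0.1)
    return out

# Guess: candidate ontologies of different shapes (constant, linear, quadratic).
candidates = [[random.gauss(0, 1) for _ in range(d)] for d in (1, 2, 3)]
for _ in range(5000):
    for i, c in enumerate(candidates):
        variant = refine(c)
        if score(variant) > score(c):
            candidates[i] = variant

print(max(candidates, key=score))  # should land near [1.0, 2.0], i.e. y ≈ 1 + 2x
```

No candidate here is selected for being "useful"; the selection pressure is fit-plus-coherence, which is the point: even this crude loop preferentially keeps the ontologies that better correspond to the phenomenon.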
Ok. What do you think of cartography? Is mapping out a new territory using tools like measurement and spatial representation a process that does not establish truth separately from predictive accuracy?
It seems wrong (and perhaps a form of scientism) to frame cartography in terms of predictive accuracy. While the maps do end up having high predictive accuracy, the mapmaking process does not involve predictive accuracy directly, only observation and recording; predictive accuracy is a side effect of the fact that cartography gives accurate maps.
This actually seems like a pretty general phenomenon: predictive accuracy can't be an input into your epistemic process, since predictions are about the future. Retrodictions (i.e. "predictions" of past events) can go into your epistemic process, but usefulness is more about predictive ability than retrodictive ability.
Fundamentally I think the core belief of metarationality is that epistemic circularity (a.k.a. the problem of the criterion, the problem of perception, the problem of finding the universal prior) necessitates metaphysical speculation, viz. we can’t reliably say anything about the world and must instead make one or more guesses, if only to establish the criterion for assessing truth.
I don't think that's a problem. Any reasoning process needs an unquestioned "core" - that's just as true for a person as for an automatic theorem prover. And different people's "cores" seem to agree on observations a lot, making science possible.
You can decide to question any such principles, which is how they get formulated in the first place, as designs for improved cognition devised by an evolved mind that doesn't originally follow any particular crisp design, but can impose order on itself. The only situation where they remain stable is if the decision always comes out in their favor, which will happen if they are useful for agents pursuing your preference. When these agents become sufficiently different, they probably shouldn't use any object-level details of the design of cognition that holds for you. The design improves, so it's not the same.
Examples of such principles are pursuit of well-calibrated empirical beliefs, of valid mathematical knowledge, of useful plans, and search for rational principles of cognition.
I don't know how to describe the thing that remains through correct changes, which is probably what preference should be, so it's never formal. There shouldn't be a motivation to "be at peace" with it, since it's exactly the thing you turn out to be at peace with, for reasons other than being at peace with it.
The part of you that's generating your thoughts is the unquestioned core. It's too late to pick the unquestioned core, you already are the unquestioned core.
The idea clone of saturn stated is discussed in the Sequences, in Created Already In Motion:
The Tortoise's mind needs the dynamic of adding Y to the belief pool when X and (X→Y) are previously in the belief pool. If this dynamic is not present—a rock, for example, lacks it—then you can go on adding in X and (X→Y) and (X⋀(X→Y))→Y until the end of eternity, without ever getting to Y.
The phrase that once came into my mind to describe this requirement, is that a mind must be created already in motion. There is no argument so compelling that it will give dynamics to a static thing. There is no computer program so persuasive that you can run it on a rock.
And in No Universally Compelling Arguments:
...And this (I then replied) relies on the notion that by unwinding all arguments and their justifications, you can obtain an ideal philosophy student of perfect emptiness, to be convinced by a line of reasoning that begins from absolutely no assumptions.
But who is this ideal philosopher of perfect emptiness? Why, it is just the irreducible core of the ghost!
And that is why (I went on to say) the result of trying to remove all assumptions from a mind, and unwind to the perfect absence of any prior, is not an ideal philosopher of perfect emptiness, but a rock.
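The "dynamic" in the first quote is concrete enough to write down. A minimal sketch (my illustration, not Eliezer's code): a belief pool plus a forward-chaining rule that adds Y whenever X and X→Y are already present. Delete the loop and the pool stays inert no matter how many premises you pour in; that inert version is the rock.

```python
def forward_chain(beliefs):
    """Apply the modus ponens dynamic until nothing new can be derived.
    An implication X -> Y is encoded as the tuple ('X', 'Y')."""
    changed = True
    while changed:
        changed = False
        for belief in list(beliefs):
            if isinstance(belief, tuple):
                premise, conclusion = belief
                if premise in beliefs and conclusion not in beliefs:
                    beliefs.add(conclusion)  # this line is the "dynamic"
                    changed = True
    return beliefs

# With the dynamic present, Y gets derived; a "rock" (no loop) could hold
# X, (X -> Y), ((X and (X -> Y)) -> Y), ... forever without producing Y.
print(forward_chain({"X", ("X", "Y")}))  # the pool now includes 'Y'
```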
I find it difficult to know what to make of the concrete statements here, because they seem so obviously false.
You've no doubt experienced this first hand if you've ever seen an optical illusion
It is by our senses that we know that these things are optical illusions.
you can't directly look into the center of your own pupil
You can do just that in a mirror. BTW, the blind spot is not in the middle of the visual field, but off to the side. It is easy to see it, though, by closing one eye and attending to where the blind spot in the other eye is. When the optician shines an ophthalmoscope into my eye, I can see the blood vessels in my own retina.
we literally don't and can't know about the things our senses may obscure from us.
Our senses fail to show us X-rays, atoms, the curvature of space-time, and anything happening on the other side of the world, but we can very easily know about these things, by clever use of our senses and tools, tools that were created with the aid of the senses.
So, all of the above quotes appear to be obviously, trivially false. Is there some other interpretation, or are they mere deepities?
the postrationalist project sees [the unquestion...
Thank you for providing this information.
However, if this is really what 'postrationality' is about, then I think it remains safe to say that it is a poisonous and harmful philosophy that has no place on LW or in the rationality project.
Further, since the criterion for knowing what is true is unreliably known, we must be choosing that criterion on some other basis than truth, and so instead view that prior criterion as coming from usefulness to some purpose we have.
You appear to be saying that since it's impossible to be absolutely certain that any particular thing is the truth, that makes it ok to instead substitute any other, easier-to-solve criteria. This is an incredibly weak justification for anything.
To me this is a strawmanning of postrationality into a thing I wouldn't support; it's more akin to the way postmodernism ran into difficulty because it failed to appreciate all the work rationality does. That the ultimate criterion is telos doesn't excuse one from the need to interact with reality if one wants to successfully serve a purpose. You don't get to just "pick whatever you want", because that would be to ignore all the evidence you have about the world, although this is definitely a way people fail to understand and misapply postrationalist ideas.
This talk of alternative criteria having equal value sounds very good and cosmopolitan, but actually we know exactly what happens when you stop using truth as your criterion for "truth". Nothing good.
This sounds confused to me. I'm not saying all criteria have equal value; I'm saying they are evaluated according to their ability to help you fulfill some purpose. At the risk of putting words in your mouth, it sounds instead as if you think we can assess the criterion of truth, which we cannot, and have known we cannot for over 2000 years.
But of course we can, as evidenced by the fact that people make predictions that turn out to be correct, and carry out plans and achieve goals based on those predictions all the time.
We can't assess whether things are true with 100% reliability, of course. The dark lords of the matrix could always manipulate your mind directly and make you see something false. They could be doing this right now. But so what? Are you going to tell me that we can assess 'telos' with 100% reliability? That we can somehow assess whether it is true that believing something will help fulfill some purpose, with 100% reliability, without knowing what is true?
The problem with assessing beliefs or judgements with anything other than their truth is exactly that the further your beliefs are from the truth, the less accurate any such assessments will be. Worse, this is a vicious positive feedback loop if you use these erroneous 'telos' assessments to adopt further beliefs, which will most likely also be false, and make your subsequent...
...But of course we can, as evidenced by the fact that people make predictions that turn out to be correct, and carry out plans and achieve goals based on those predictions all the time.
That people make predictions which turn out to be correct does not show that the predictions were chosen according to the criterion of truth; it shows that the predictions happened to correlate with the truth. E.g. people just doing whatever tradition tells them to often arrive at good outcomes when they are in an environment that the tradition is well-adapted to. If asked, they might appeal to the good outcomes as evidence for the tradition being correct; but while it might be correct in some circumstances, that does not establish that "what does tradition tell me" would be the correct criterion to use in every circumstance.
The general point here is that the human brain does not have magic access to the criterion of truth; it only has access to its own models. And what it uses to check whether its own models are correlated with the truth are... its own models. It's possible, and in fact very common, to be critically mistaken about whether or not your reasoning is actually tracking...
Well, this is a long comment, but this seems to be the most important bit:
The general point here is that the human brain does not have magic access to the criterion of truth; it only has access to its own models.
Why would you think "magic access" is required? It seems to me the ordinary non-magic causal access granted by our senses works just fine.
All that you say about beliefs often being critically mistaken due to e.g. emotional attachment is of course true, and that is why we must be ruthless in rejecting any reasons for believing things other than truth -- and if we find that a belief is without reasons after that, we should discard it. The problem is this seems to be exactly the opposite of what "postrationality" advocates: using the lack of "magic access" to the truth as an excuse to embrace non-truth-based reasons for believing things.
Why would you think "magic access" is required?
Because there's no causal pathway through which we could directly evaluate whether or not our brains are actually tracking reality. Schizophrenics also think that they have causal access to the truth as granted by their senses, and might maintain that belief until their death.
Since there's no direct causal pathway, it would have to work through some non-causal means, i.e. magic.
The problem is this seems to be exactly the opposite of what "postrationality" advocates: using the lack of "magic access" to the truth as an excuse to embrace non-truth-based reasons for believing things.
My comment was trying to explain how explicitly adopting beliefs for other reasons than truth might actually help you reject non-truthful beliefs. You can be mistaken about what's actually true, and by testing out ontologies that have been arrived at for other reasons than truth, you may find out that they actually track truth better than your original ontology did. Or even if they don't, by intentionally adopting a different ontology and noticing how it forces your perceptions to fit its mold, you may become ...
Because there’s no causal pathway through which we could directly evaluate whether or not our brains are actually tracking reality.
I don't know what "directly" means, but there certainly is a causal pathway, and we can certainly evaluate whether our brains are tracking reality. Just make a prediction, then go outside and look with your eyes to see if it comes true.
Schizophrenics also think that they have causal access to the truth as granted by their senses, and might maintain that belief until their death.
So much the worse for schizophrenics. And so?
“Well we can’t go below 20%, but we can influence what that 20% consists of, so let’s swap that desire to believe ourselves to be better than anyone else into some desire that makes us happier and is less likely to cause needless conflict. Also, by learning to manipulate the contents of that 20%, we become better capable at noticing when a belief comes from the 20% rather than the 80%, and adjusting accordingly”.
I have a hard time believing that this sort of clever reasoning will lead to anything other than making your beliefs less accurate and merely increasing the number of non-truth-based beliefs above 20%.
The only sensible...
...I don't know what "directly" means, but there certainly is a causal pathway, and we can certainly evaluate whether our brains are tracking reality. Just make a prediction, then go outside and look with your eyes to see if it comes true.
Suppose that I do a rain-making dance in my backyard, and predict that as a consequence of this, it will rain tomorrow. Turns out that it really does rain the next day. Now I argue that I have magical rain-making powers.
Somebody else objects, "of course you don't, it just happened to rain by coincidence! You need to repeat that experiment!"
So I repeat the rain-making dance on ten separate occasions, and on seven out of ten times, it does happen to rain anyway.
The skeptic says, "ha, your rain-making dance didn't work after all!" I respond, "ah, but it did work on seven out of ten times; medicine can't be shown to reliably work every time either, but my magic dance does work statistically significantly often."
The skeptic answers, "you can't establish statistical significance without something to compare to! This happens to be rainy season, so it would rain on seven out of ten days ...
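(The skeptic's base-rate point is one line of arithmetic. A quick sketch using the story's own numbers, where rain on 7 of 10 days is also the rainy-season base rate:)

```python
from math import comb

k, n, base_rate = 7, 10, 0.7  # rain on 7 of 10 dance days; base rate 7 in 10
p_value = sum(comb(n, j) * base_rate**j * (1 - base_rate)**(n - j)
              for j in range(k, n + 1))
print(f"P(at least {k}/{n} rainy days with no magic) = {p_value:.2f}")  # ~0.65
```

Seven out of ten is almost exactly what the no-magic hypothesis predicts, so the dance data are no evidence at all; but noticing that requires the base-rate model, which is the skeptic's whole point.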
This stuff about rain dancing seems like just the most banal epistemological trivialities, which have already been dealt with thoroughly in the Sequences. The reasons why such "tests" of rain dancing don't work are well known and don't need to be recapitulated here.
But to do that, you need to use a meta-model. When I say that we don’t have direct access to the truth, this is what I mean;
This has nothing to do with causal pathways, magic or otherwise, direct or otherwise. Magic would not turn a rock into a philosopher even if it should exist.
Yes, carrying out experiments to determine reality relies on Occam's razor. It relies on Occam's razor being true. It does not in any way rely on me possessing some magical universally compelling argument for Occam's razor. Because Occam's razor is in fact true in our universe, experiment does in fact work, and thus the causal pathway for evaluating our models does in fact exist: experiment and observation (and bayesian statistics).
I'm going to stress this point because I noticed others in this thread make this seemingly elementary map-territory confusion before (though I didn't comment on it there). In fact it seems to me now that conflating...
...reasons why such "tests" of rain dancing don't work are well known and don't need to be recapitulated here.
Obviously. Which is why I said that the point was not any of the specific arguments in that debate - they were totally arbitrary and could just as well have been two statisticians debating the validity of different statistical approaches - but the fact that any two people can disagree about anything in the first place, as they have different models of how to interpret their observations.
"Occam's razor is true" is an entirely different thing from "I have access to universally compelling arguments for Occam's razor", as different as a raven and the abstract concept of corporate debt.
This is very close to the distinction that I have been trying to point at; thank you for stating it more clearly than I managed to. The way that I'd phrase it is that there's a difference between considering a claim to be true, and considering its justification universally compelling.
It sounds like you have been interpreting me to say something like "Occam's Razor is false because its justification is not universally compelling". That is not what I have been trying to say. Rather, my claim has...
...Advocates of postrationality seem to be hoping that the fact that P(Occam's razor) < 1 makes these arguments go away. It doesn't work like that.
This (among other paragraphs) is an enormous strawman of everything that I have been saying. Combined with the fact that the general tone of this whole discussion so far has felt adversarial rather than collaborative, I don't think that I am motivated to continue any further.
It doesn't seem to be a strawman of what eg. gworley and TAG have been saying, judging by the repeated demands for me to supply some universally compelling "criterion of truth" before any of the standard criticisms can be applied. Maybe you actually disagree with them on this point?
It doesn't seem like applying full force in criticism is a priority for the 'postrationality' envisioned by the OP, either, or else they would not have given examples (compellingness-of-story, willingness-to-life) so trivial to show as bad ideas using standard arguments.
Suppose we know the criterion of truth, C; that is, there exists (not counterfactually but actually as in anyone can observe this thing) a procedure/algorithm to assess if any given statement is true. Let P be a statement. Then there exists some argument, A, contingent on C such that A implies P or ~P. Thus for all P we can know if P or ~P. This would make A universally compelling.
But of course it wouldn’t. What? This seems completely unrelated to compellingness (universal or otherwise). I have but to build a mind that does not implement the procedure in question, or doesn’t implement it for some specific argument(s), or does implement it but then someone reverses it (cf. Eliezer’s “little grey man”), etc.
a mind-independent argument for
There is no such thing as a “mind-independent argument for” anything. That, too, was Eliezer’s point.
For example, suppose C exists. However, it is then an open question whether I believe that C exists. How might I come to believe this? Perhaps I might be presented with an argument for C’s existence. I might find this argument compelling, or not. This is dependent on my mind—i.e., both on my mind existing, and on various specific properties of my mind...
...C is basically a statement that, if included in a valid argument about the truth of P, causes the argument to tell us either P or ~P. That’s definitionally what it means to be able to know the criterion of truth.
That's not how algorithms work and seems... incoherent.
That you want to deny C is great,
I did not say that either.
because I think (as I’m finding with Said) that we already agree, and any disagreement is the consequence of misunderstanding, probably because it comes too close to sounding to you like a position that I would also reject, and the rest of the fundamental disagreement is one of sentiment, perspective, having worked out the details, and emphasis.
No, I don't think we do agree. It seems to me you're deeply confused about all of this stuff.
Here's an exercise: Say that we replace "C" by a specific concrete algorithm. For instance the elementary long multiplication algorithm used by primary school children to multiply numbers.
Does anything whatsoever about your argument change with this substitution? Have we proved that we can explain multiplication to a rock? Or perhaps we've proved that this algorithm doesn't exist, and neither do schools?
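(To make the substitution fully concrete, here is the stand-in algorithm itself; a routine sketch, since any textbook implementation would do:)

```python
def long_multiply(a: str, b: str) -> str:
    """Grade-school long multiplication on digit strings, as a concrete
    stand-in for 'C': it plainly exists and runs, and that fact alone
    makes no argument about arithmetic compelling to a rock, or to a
    mind that simply doesn't execute it."""
    result = [0] * (len(a) + len(b))
    for i, da in enumerate(reversed(a)):
        carry = 0
        for j, db in enumerate(reversed(b)):
            total = result[i + j] + int(da) * int(db) + carry
            result[i + j] = total % 10
            carry = total // 10
        result[i + len(b)] += carry
    return "".join(map(str, reversed(result))).lstrip("0") or "0"

assert long_multiply("128", "32") == "4096"
```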
Another exercise...
...But you’ve done a sleight of hand!
First, you defined C, a.k.a. the “criterion of truth”, like this:
Suppose we know the criterion of truth, C; that is, there exists (not counterfactually but actually as in anyone can observe this thing) a procedure/algorithm to assess if any given statement is true.
Ok, that’s only mildly impossible, let’s see where this leads us…
But then, you say:
The counterfactual I’m proposing with C is exactly one that would allow not just any mind, but literally anything at all to comprehend A. The existence of C would create a universe wholly unlike our own, which is why I think we’re all in agreement that the existence of such a thing is extremely unlikely even though we can’t formally prove that it doesn’t exist.
Why should the thing you defined in the first quote, lead to anything even remotely resembling the second quote? There is no reason, as far as I can tell; the latter quote just adds extremely impossible magic, out of nowhere and for no reason.
- Do you think about distances in Metric or Imperial units? Both are equally true, so probably in whichever units you happen to be more fluent in.
- Do you use Newtonian mechanics or full relativity for calculating the motion of some object? Relativity is more true, but sometimes the simpler model is good enough and easier to calculate, so it may be better for the situation.
These seem like silly examples to me.
I think about distances in Imperial units, but it seems very weird, inaccurate, and borderline absurd to describe me as believing the Imperial system to be “true”, or “more true”, or believing the metric system to be “not true” or “false” or “less true”. None of those make any sense as descriptions of what I believe. Frankly, I don’t understand how you can suggest otherwise.
Similarly, it is a true fact that Newtonian mechanics allows me to calculate the motion of objects, in certain circumstances (i.e., intermediate-scale situations / phenomena), to a great degree of accuracy, but that relativity will give a more accurate result, at the cost of much greater difficulty in calculation. This is a fact which I believe to be true. Describing Relativity as being “more true” is odd
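(For what it's worth, "good enough" here is quantifiable. A quick illustrative calculation, mine rather than either commenter's, using kinetic energy as the example: the Newtonian formula's relative error is roughly (3/4)(v/c)², negligible at intermediate scales and large near light speed.)

```python
import math

C = 299_792_458.0  # speed of light, m/s

def newtonian_ke(m, v):
    return 0.5 * m * v**2

def relativistic_ke(m, v):
    b2 = (v / C) ** 2
    s = math.sqrt(1.0 - b2)
    gamma_minus_1 = b2 / (s * (1.0 + s))  # cancellation-safe form of gamma - 1
    return gamma_minus_1 * m * C**2

for v in (3e4, 3e6, 3e7, 1.5e8):  # Earth's orbital speed up to half of c
    n, r = newtonian_ke(1.0, v), relativistic_ke(1.0, v)
    print(f"v = {v:9.1e} m/s   Newtonian relative error = {abs(n - r) / r:.1e}")
```

At 30 km/s the error is parts in a billion; at half the speed of light it is about twenty percent. "Which model to use" is a fact about error tolerances, not about which theory is "true".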
...For myself I find this point is poorly understood by most self-identified rationalists, and I think most people reading the sequences come out of them as positivists because Eliezer didn't hammer the point home hard enough and positivism is the default within the wider community of rationality-aligned folks (e.g. STEM folks).
Maybe so, but I can't help noticing that whenever I try to think of concrete examples about what postrationality implies in practice, I always end up with examples that you could just as well justify using the standard rationalist epistemology. E.g. all my examples in this comment section. So while I certainly agree that the postrationalist epistemology is different from the standard rationalist one, I'm having difficulties thinking of any specific actions or predictions that you would really need the postrationalist epistemology to justify. Something like the criterion of truth is a subtle point which a lot of people don't seem to get, yes, but it also feels like one which doesn't make any practical difference whether you get it or not. And theoretical points which people can disagree a lot about despite not making any practical difference are almost the pr...
...Because there's no causal pathway through which we could directly evaluate whether or not our brains are actually tracking reality.
However, there are causal pathways through which we can evaluate whether or not our brains are tracking reality. They have been extensively written about on LessWrong over the years, and a large amount of the core material is collected in a book.
Why would we want to “go deeper on questions of epistemology than a pragmatic approach may at the moment demand”? By definition, it would seem, there is no practical (i.e. instrumental, i.e. pragmatic) reason to do so. Why, then?
I'm careful above to say "than a pragmatic approach may at the moment demand". Pragmatism has no universal ground to stand on: it's always pragmatic relative to the task at hand. I have a need/interest to go deeper, but others may not, so they do not, and that's fine; it only means that they bottom out where I want/need to go deeper, the same as I take a pragmatic approach to understanding biochemistry or fluid dynamics and bottom out my inquiry much sooner than would a pharmacologist or an aeronautical engineer, respectively.
Conversely, if you start playing the "why go deeper, what's the practical reason" game, you'll quickly find there's little reason for this site or any of the activity on it to exist, since, after all, you can live just fine without it (and much else besides); but since people are interested, or find themselves needing to know more to serve some end they have, we're here anyway.
P.S.: This isn’t j...
Indeed, which is why metarationality must not forget to also include all of rationality within it!
In practice this converges on "Embrace, extend, and extinguish".
Of course not, and that’s the point.
The point... is that judging beliefs according to whether they achieve some goal (or anything else) is no more reliable than judging beliefs according to whether they are true, is in no way a solution to the problem of induction or even a sensible response to it, and most likely only makes your epistemology worse?
Indeed, which is why metarationality must not forget to also include all of rationality within it!
Can you explain this in a way that doesn't make it sound like an empty applause light? How can I take compellingness-of-story into account in my probability estimates without violating the Kolmogorov axioms?
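(For reference, the constraint being invoked is just the Kolmogorov axioms; any extra criterion folded into a probability assignment still has to respect them:)

$$P(A) \ge 0 \ \text{for every event } A, \qquad P(\Omega) = 1, \qquad P\Big(\bigcup_{i} A_i\Big) = \sum_{i} P(A_i) \ \text{for pairwise disjoint } A_i.$$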
To say a little more on danger, I mean dangerous to the purpose of fulfilling your own desires.
Yes, that's exactly the danger.
Unlike politics, which is an object-level danger you are pointing to, postrationality is a metalevel danger, but specifically because it’s a more powerful set of tools rather than a shiny thing people like to fight over. This is like the difference between being wary of generally unsafe conditions that cannot be used and dangerous tools that are only dangerous if used by the unskilled.
Thinking you're skilled...
...If I may summarize what I think the key disagreement is, you think we can know truth well enough to avoid the problem of the criterion and gain nothing from addressing it.
That's not my only disagreement. I also think that your specific proposed solution does nothing to "address" the problem (in particular because it just seems like a bad idea, in general because "addressing" it to your satisfaction is impossible), and only serves as an excuse to rationalize holding comforting but wrong beliefs under the guise of doing "advanced philosophy". This is why the “powerful but dangerous tool” rhetoric is wrongheaded. It's not a powerful tool. It doesn't grant any ability to step outside your own head that you didn't have before. It's just a trap.
It seems to be more than he can fit into the whole of his family of blogs. I've read quite a lot of what he's written, but at last I gave up on his perpetual jam-tomorrow deferral of any plain setting out of his positive ideas.
Contrast The Sequences, where Eliezer simply wrote down what he knew, with exemplary clarity. No circling around and around, but steadily marching through the partial order of dependencies of topics, making progress with every posting, that built into a whole. That is what exposition should look like. No repeated restarting with ever more fundamental things that would have to be explained first while the goal gets further away. No gesturing towards enormous reading lists as a stopgap for being unable to articulate his ideas about that material. Perhaps that is because he had something real to say?
In my experience, the subjective feeling that one understands an idea, even if it seems to have pin-sharp clarity, often does not survive trying to communicate it to another person, or even to myself by writing it down. The problem may not be that the thing is difficult to convey, but that I am confused about the thing. The thing may not exist, not correspond to anything in reality. (Explaining something to a machine, i.e. programming, is even more exposing of confusion.)
When a thing is apparently so difficult to communicate that however much ink one spills on the task, the end of it does ...
I have just recalled an anecdote about the symptoms of trying to explain something incoherent. If (so I read) you hypnotize someone and suggest to them that they can see a square circle drawn on the wall, fully circular and fully a square, they have the experience of seeing a square circle. Now, I'm somewhat sceptical about the reality of hypnosis, but not at all sceptical about the physical ability of a brain to have that experience, despite the fact that there is no such thing as a square circle.
If you ask that person (the story goes on) to draw what they see, they start drawing, but keep on erasing and trying again, frustrated by the fact that what they draw always fails to capture the thing they are trying to draw.
Edit: the story is from Edward de Bono's book "Lateral Thinking: An Introduction" (previously published as "The Use of Lateral Thinking").
Indeed, the scientific history of how observation and experiment led to a correct understanding of the phenomenon of rainbows is long and fascinating.
The moderation system we settled on gives people above a certain karma threshold the ability to moderate on their own posts, which I think is very important to allow people to build their own gardens and cultivate ideas. Discussion about that general policy should happen in meta. I will delete any further discussion of moderation policies on this post.
Two points:
1. Advancing the conversation is not the only reason I would write such a thing; it actually serves a different purpose: protecting other readers of this site from forming a false belief that there's some kind of consensus here that this philosophy is not poisonous and harmful. Now the reader is aware that there is at least debate on the topic.
2. It doesn't prove the OP's point at all. The OP was about beliefs (and "making sense of the world"). But I can have the belief "postrationality is poisonous and harmful" without having to post a comment saying so, therefore whether such a comment would advance the conversation need not enter into forming that belief, and is in fact entirely irrelevant.
Question: how do postrationality and instrumental rationality relate to each other? To me it appears that you are simply arguing for instrumental rationality over epistemic rationality, or am I missing something?
There was a recent discussion on Facebook that led to an ask for a description of postrationality that isn't framed in terms of how it's different from rationality (or perhaps more a challenge that such a thing could not be provided). I'm extra busy right now until at least the end of the year, so I don't have a lot of time for philosophy and AI safety work, but I'd like to respond with at least an outline of a constructive description of post/meta-rationality. I'm not sure everyone who identifies as part of the metarationality movement would agree with my construction, but this is what I see as the core of our stance.
Fundamentally I think the core belief of metarationality is that epistemic circularity (a.k.a. the problem of the criterion, the problem of perception, the problem of finding the universal prior) necessitates metaphysical speculation, viz. we can't reliably say anything about the world and must instead make one or more guesses, if only to establish the criterion for assessing truth. Further, since the criterion for knowing what is true is unreliably known, we must be choosing that criterion on some other basis than truth, and so instead view that prior criterion as coming from usefulness to some purpose we have.
None of this is radical; it's in fact all fairly standard philosophy. What makes metarationality what it is comes from the deep integration of this insight into our worldview. Rather than truth or some other criteria, telos (usefulness, purpose) is the highest value we can serve, not by choice, but by the trap of living inside the world and trying to understand it from experience that is necessarily tainted by it. The rest of our worldview falls out of updating our maps to reflect this core belief.
To say a little on this: when you realize the primacy of telos in how you make judgments about the world, you see that you have no reason to privilege any particular assessment criterion except insofar as it is useful to serve a purpose. Thus, for example, rationality is often important to the purpose of predicting and understanding the world because we, through experience, come to know it to be correlated with making predictions that later come true, but other criteria, like compellingness-of-story and willingness-to-life, may be better drivers in terms of creating the world we would like to later find ourselves in. For what it's worth, I think this is the fundamental disagreement with rationality: we say you can't privilege truth, and since you can't, it sometimes works out better to focus on other criteria when making sense of the world.
So that's the constructive part; why do we tend to talk so much about postrationality by contrasting it with rationality? I think for two reasons. First, postrationality is etiologically tied to rationality: the ideas come from people who first went deep on rationality and eventually saw what they felt were limitations of that worldview, thus we naturally tend to think in terms of how we came to the postrationalist worldview and want to show others how we got here from there. Second and relatedly, metarationality is a worldview that comes from a change in a person that many of us choose to identify with Kegan's model of psychological development, specifically the 4-to-5 transition; thus we think it's mainly worthwhile to explain our ideas to folks we'd say are in the 4/rationalist stage of development, because they are the ones who can directly transition to 5/metarationality without needing to go through any other stages first.
Feel free to ask questions for clarification in the comments; I have limited energy available for addressing them but I will try my best to meet your inquiries. Also, sorry for no links; I wouldn't have written this if I had to add all the links, so you'll have to do your own googling or ask for clarification if you want to know more about something, but know that basically every weird turn of phrase above is an invitation to learn more.