We in the rationalist community have believed a dual allegiance to instrumental and epistemic rationality to be feasible because true beliefs help with winning, but semi-autonomous near and far modes raise questions about the compatibility of the two sovereigns' jurisdictions: false far beliefs may serve to advance near interests.

First, the basics of construal-level theory. See Trope and Liberman, "Construal-Level Theory of Psychological Distance" (2010) 117 Psychological Review 440. When you look at an object in the distance and then look at the same object nearby, you focus on different features. Distal information is high-level, global, central, and unchanging, whereas local information is low-level, detailed, incidental, and changing. Construal-level theorists term distal information "far" and local information "near," and they extend these categories broadly to embrace psychological distance generally. Dimensions other than physical distance can be conceived of as psychological distance by analogy, and these other dimensions invoke mindsets similar to those physical distance invokes.

I discuss construal-level theory in the blogs Disputed Issues and Juridical Coherence, but Robin Hanson at Overcoming Bias has been one of the theory's most prolific advocates. He gives the theory an unusual twist when he maintains that the "far" mode is largely consumed with the management of social appearances.

With this twist, Hanson effectively drives a wedge between instrumental and epistemic rationality because far beliefs may help with winning despite or even because of their falsity. Hanson doesn't shrink from the implications of instrumental rationality coupled with his version of construal-level theory. Based on research reports that the religious lead happier and more moral lives, Robin Hanson now advocates becoming religious:

Perhaps, like me, you find religious beliefs about Gods, spirits, etc. to be insufficiently supported by evidence, coherence, or simplicity to be a likely approximation to the truth. Even so, ask yourself: why care so much about truth? Yes, you probably think you care about believing truth – but isn’t it more plausible that you mainly care about thinking you like truth? Doesn’t that have a more plausible evolutionary origin than actually caring about far truth? ("What Use Far Truth?")

Instrumental rationalists could practice strict epistemic rationality if they defined winning as gaining true belief, but even Hanson, though no doubt a dedicated intellectual, doesn't value truth that much, at least not "far" truth. Yet how many rationalists have cut their teeth on the irrationality of religion? How many have replied to religious propaganda about the benefits of religion with disdain for invoking mere prudential benefit where truth is at stake? As an ideal, epistemic rationality, it seems to me, fares better than instrumental rationality.


Yes, you probably think you care about believing truth – but isn’t it more plausible that you mainly care about thinking you like truth? Doesn’t that have a more plausible evolutionary origin than actually caring about far truth?

Imagine I told Robin Hanson I liked the way chocolate tastes. Do you think he'd reply: "Yes, you probably think you like the taste of chocolate – but isn’t it more plausible that you mainly care about eating calorically dense foods so you can store up fat for the winter? Doesn’t that have a more plausible evolutionary origin than actually caring about the taste of chocolate?" Of course not, because that would sound silly. It's only for abstract intellectual desires that someone can get away with a statement like that.

If evolution "wants" you to eat calorically dense foods, it doesn't make you actually want calories; it just makes you like the way the foods taste. And if evolution "wants" you to appear to care about truth to impress people, the most efficient way for it to accomplish that is to make you actually care about the truth. That way you don't have to keep your lies straight. People don't just think they care about the truth; they actually do.

I know that that's Hanson's quote, not yours, but the fact that you quote it indicates you agree with it to some extent.

wilkox:

This is like saying "if evolution wants a frog to appear poisonous, the most efficient way to accomplish that is to actually make it poisonous". Evolution has a long history of faking signals when it can get away with it. If evolution "wants" you to signal that you care about the truth, it will do so by causing you to actually care about the truth if and only if causing you to actually care about the truth has a lower fitness cost than the array of other potential dishonest signals on offer.

Poisonousness doesn't change appearance, though. Being poisonous and looking poisonous are separate evolutionary developments. Truth-seeking values, on the other hand, affect behavior as much as an impulse to fake truth-seeking values does, and fake truth-seeking values are probably at least as difficult to implement, most likely more so, since they require the agent to model real truth-seeking agents.

For one thing, if some people with actual truth-seeking values compete with people who have fake truth-seeking values, the ones looking for actual truth have a good chance of finding out about and punishing the ones who are falsely seeking truth. This means fake truth-seekery needs to be significantly more efficient and less risky than actual truth-seeking to be the expected result of a process that selects for appearances of truth-seeking.

This is like saying "if evolution wants a frog to appear poisonous, the most efficient way to accomplish that is to actually make it poisonous".

The only reason making some frogs look poisonous works is that there are already a lot of poisonous frogs around whose signal most definitely isn't fake. Faking signals only works if there are a lot of reliable signals in the environment to be confused with. So there must, at the very least, be a large number of truth-seeking humans out there. And I think that a site like Overcoming Bias would self-select for the truth-seeking kind among its readership.

I don't know if any studies have been done on truth-seeking, but this is definitely the case with morality. The majority of humans have consciences; they care about morality as an end in itself, at least to some extent. But some humans (called sociopaths) don't care about morality at all; they're just faking having a conscience. However, sociopaths make up at most about 1/25 of the population, and their adaptation is only fit because there are a lot of moral humans around to trick with it.

[anonymous]:

I know that that's Hanson's quote, not yours, but the fact that you quote it indicates you agree with it to some extent.

A very fair assessment. (But I'll tell you what I disagree with in Hanson's view in a second.) I do think Hanson is correct that, insofar as evolutionary psychology has established anything, it has shown that evolution has provided for signaling and deception. If it enhances fitness for others to think you care about truth, evolution will tend to favor creating that impression to a greater extent than is warranted by the facts of your actual caring. See, for example, Robert Trivers's recent book. Hanson maintains that the "far" mode (per construal-level theory--you can't avoid taking that into consideration in evaluating Hanson's position) evolved as a semi-separate mental compartment largely to accommodate signaling for purposes of status-seeking or ostentation. (The first part of my "Construal-level theory: Matching linguistic register to the case's granularity" provides a succinct summary of construal-level theory without Hanson's embellishments.)

I disagree with Hanson's tendency to overlook the virtues of far thinking and overstate those of "near" thinking--his over-emphasis of signaling in the role of "far" thought. I disagree with his neglect of the banefulness of moralism in general and religion in particular. Many of the virtues he paints religion as having reflect in-group solidarity and are bought at the expense of xenophobia and cultural extra-punitiveness. And, as Eliezer Yudkowsky points out, falsehood has broader consequences that escape Hanson's micro-vision; forgoing truth produces a general loss of intellectual vitality.

But more than anything, I reject on ethical grounds Hanson's tacit position that what evolution cares about is what we should care about. If the ideal of truth were created for show, that shouldn't stop us from using it as a lever to get ourselves to care more about actual truth. (To me, a rationalist is above all one who values, even hyper-values, actual truth.) To put it bluntly, I don't see how Hanson can retain intellectual credibility after signaling that he doesn't seek truth in his far beliefs: those being the beliefs he posts about.

I am very happy to see your clarification; you write pretty much nothing I disagree with. I think the only place where we might disagree is the mechanism by which evolution accomplishes its goal of providing for signaling and deception. I believe that evolution usually gives us desires for things like truth and altruism which are, on some mental level, completely genuine. It then gives us problems like akrasia, laziness, and self-deception, which are not under full control of our conscious minds, and which thwart us from achieving our lofty goals when doing so might harm our inclusive genetic fitness.

Therefore, I think that people are entirely truthful in stating that they have high and lofty ideals like truth-seeking; they are just sabotaged by human weakness. I think someone who says "I am a truthseeker" is usually telling the truth, even if they spend more time playing Halo than they do reading non-fiction. To me, saying someone doesn't care very much about truthseeking because their behavior does not always seek the truth is like saying someone doesn't care very much about happiness because they have clinical depression.

I cannot quite tell from your comments whether you hold the same views on this as I do or not, as you do not specify how natural selection causes people to signal and deceive.

[anonymous]:

Therefore, I think that people are entirely truthful in stating that they have high and lofty ideals like truth-seeking; they are just sabotaged by human weakness. I think someone who says "I am a truthseeker" is usually telling the truth, even if they spend more time playing Halo than they do reading non-fiction.

From a construal-level-theory standpoint, we should be talking about people who value truth at an abstract-construal level (from "far") but whose concrete-construal-level inclinations ("near") don't much involve truth seeking. Some people might be inclined to pursue truth both near and far but might be unable to effectively because of akrasia (which I think another line of research, ego-depletion theory, largely reduces to "decision fatigue").

So, the first question is whether you think there's a valid distinction to be made, such as I've drawn above. The second is, if you agree on the distinction, what could cause people to value truth from far but have little inclination to pursue it near. Consider the religious fundamentalist, who thinks he wants truth but tries to find it by studying the Bible. If this is an educated person, I think one can say this fundamentalist has only an abstract interest in truth. How he putatively pursues truth shows he's really interested in something else.

The way evolution could produce signaling is by creating a far system that serves signaling purposes. This isn't an either-or question, in that even Hanson agrees that the far system serves purposes besides signaling. But he apparently thinks the other purposes are so meager that the far system can be sacrificed to signaling with relative impunity. The exact extent to which the far system evolved for signaling purposes is a question I don't know the answer to. But where Hanson becomes dangerous is in his contempt for the integrity of far thinking and his lack of interest in integrating it with near thinking, at least for the masses and even for himself.

A rationalist struggles to turn far thinking to rational purpose, regardless of its origins. Hanson is the paradox of an intellectual who thinks contemptuous far thoughts about far thinking.

It seems to follow from this line of reasoning that after evolving in a complex environment, I should expect to be constructed in such a way as to care about different things at different times in different contexts, and to consider what I care about at any given moment to be the thing I "really" care about, even if I can remember behaving in ways that are inconsistent with caring about it.

Which certainly seems consistent with my observations of myself.

It also seems to imply that statements like "I actually care about truth" are at best approximate averages, similar to "Americans like hamburgers."

It seems to follow from this line of reasoning that after evolving in a complex environment, I should expect to be constructed in such a way as to care about different things at different times in different contexts, and to consider what I care about at any given moment to be the thing I "really" care about, even if I can remember behaving in ways that are inconsistent with caring about it.

Other possibilities:

  • Evolution could also make you simply care about lots and lots of different things and simply have them change in salience as per your situation. This seems to fit well with the concept of complexity of value.
  • Evolution could give you stable preferences and then give you akrasia so you screw them up if you end up in an environment where they are maladaptive.
  • Some combination of these.

Can you clarify how one might tell the difference between caring about different things at different times in different contexts, and caring about lots of different things that change in salience as per my situation? I agree with you that the latter is just as likely, but I also can't imagine a way of telling the two apart, and I'm not entirely convinced that they aren't just two labels for the same thing.

Similar things are true about having akrasia based on context vs. having how much I care about things change based on context.

I think that the fact that people exhibit prudence is evidence for caring about many things that change in salience. For instance, if I'm driving home from work and I think "I need groceries, but I'm really tired and don't want to go to the grocery store," there's a good chance I'll make myself go anyway. That's because even if my tiredness is far more salient now, I know that having food in my pantry will be salient in the future.

I suppose you could model prudence as caring about different things in different contexts, but you'd need to add that you nearly always care about ensuring a high future preference satisfaction state on top of whatever you're caring about at the moment.

I'm not exactly sure I follow you here, but I certainly agree that we can care about more than one thing at a time (e.g., expectation of future food and expectation of future sleep) and weigh those competing preferences against one another.

There should always come a point at which epistemic rationality gives way to instrumental rationality.

Consider: Omega appears and tells you that unless you cease believing in the existence of red pandas, it will destroy everything you value in the world.

Now suppose Omega has a good track record of doing this, and it turns out for whatever reason that it wouldn't be too hard to stop believing in red pandas. Then given how inconsequential your belief in red pandas probably is, it seems that you ought to stop believing in them.

This is a trivial example, but it should illustrate the point: if the stakes get high enough, it may be worthwhile sacrificing epistemic rationality to a greater or lesser degree.

I agree that the conclusion follows from the premises, but nonetheless it's hypothetical scenarios like this which cause people to distrust hypothetical scenarios. There is no Omega, and you can't magically stop believing in red pandas; when people rationalize the utility of known falsehoods, what happens in their mind is complicated, divorces endorsement from modeling, and bears no resemblance to what they believe they're doing to themselves. Anti-epistemology is a huge actual danger of actual life.

Absolutely! I'm definitely dead set against anti-epistemology - I just want to make the point that that's a contingent fact about the world we find ourselves in. Reality could be such that anti-epistemology was the only way to have a hope of survival. It isn't, but it could be.

Once you've established that epistemic rationality could give way to instrumental rationality, even in a contrived example, you then need to think about where that line goes. I don't think it's likely to be relevant to us, but from a theoretical view we shouldn't pretend the line doesn't exist.

Indeed, advocating not telling people about it because the consequences would be worse is precisely suppressing the truth because of the consequences ;) (well, it would be more on-topic if you were denying the potential utility of anti-epistemology even to yourself...)

you can't magically stop believing in red pandas

Technically you can; it's just that the easiest methods have collateral effects on your ability to do most other things.

If you're not talking about shooting yourself in the head, I don't know of any method I, myself, could use to stop believing in pandas.

Interesting given that you believe there is evidence that could convince you 2+2=3.

Given that you don't know of such a method, I would guess that you haven't tried very hard to find one.

I don't think this is a fair analogy. We're talking about ceasing to believe in red pandas without the universe helping; the 2+2=3 case had the evidence appearing all by itself.

I think I might be able to stop believing in red pandas in particular if I had to (5% chance?) but probably couldn't generalize it to most other species with which I have comparable familiarity. This is most likely because I have some experience with self-hacking. ("They're too cute to be real. That video looks kind of animatronic, doesn't it, the way they're gamboling around in the snow? I don't think I've ever seen one in real life. I bet some people who believe in jackalopes have just never been exposed to the possibility that there's no such thing. Man, everybody probably thinks it's just super cute that I believe in red pandas, now I'm embarrassed. Also, it just doesn't happen that a lot rides on me believing things unless those things are true. Somebody's going to an awful lot of effort to correct me about red pandas. Isn't that a dumb name? Wouldn't a real animal that's not even much like a panda be called something else?")

Alicorn is correct; and similarly, there is of course a way I could stop believing in pandas, in worlds where pandas never had existed and I discovered the fact. I don't know of anything I can actually do, in real life, over the next few weeks, to stop believing in pandas in this world where pandas actually do exist. I would know that was what I was trying to do, for one thing.

Shmi:

...anything I can actually do, in real life, over the next few weeks, to stop believing in pandas in this world where pandas actually do exist.

Not that hard. Jimmy will gladly help you.

Okay, so there's no such thing as jackalopes. Now I know.

Hee hee.

I don't think this is a fair analogy.

I wasn't making an analogy exactly. Rather, that example was used to point out that there appears to be some route to believing any proposition that isn't blatant gibberish. And I think Eliezer is the sort of person who could find a way to self-hack in that way if he wanted to; that more or less used to be his 'thing'.

Wouldn't a real animal that's not even much like a panda be called something else?

Exactly - "red pandas" were clearly made up for Avatar: the Last Airbender.

No, in AtLA they're called "fire ferrets".

asr:

There isn't an Omega, but there historically have been inquisitions, secret police, and all manner of institutional repressive forces. And even in free countries, there is quite powerful social pressure to conform.

It may often be more useful to adopt a common high-status false belief than to pay the price of maintaining a true low-status belief. This applies even if you keep that belief secret -- there's a mental burden in trying to systematically keep your true beliefs separate from your purported beliefs, and there's a significant chance of letting something slip.

To pick a hopefully non-mind-killing example: whether or not professional sports are a wasteful and even barbaric practice, knowing about the local team, and expressing enthusiasm for them, is often socially helpful.

Shmi:

Anti-epistemology is a huge actual danger of actual life.

So it is, but I'm wondering if anyone can suggest a (possibly very exotic) real-life example where "epistemic rationality gives way to instrumental rationality"? Just to address the "hypothetical scenario" objection.

EDIT: Does the famous Keynes quote "Markets can remain irrational a lot longer than you and I can remain solvent." qualify?

Situations of plausible deniability for politicians or people in charge of large departments at corporations. Of course you could argue that these situations are bad for society in general, but I'd say it's in the instrumental interest of those leaders to seek the truth to a lesser degree.

Any time you have a bias you cannot fully compensate for, there is a potential benefit to putting instrumental rationality above epistemic.

One fear I was unable to overcome for many years was that of approaching groups of people. I tried all sorts of things, but the best piece of advice turned out to be: "Think they'll like you." Simply believing that eliminates the fear and aids my social goals, even though it sometimes proves to have been a false belief, especially with regard to my initial reception. Believing that only 3 out of 4 groups will like or welcome me initially and 1 will rebuff me, even though this may be the case, has not been as useful as believing that they'll all like me.

It doesn't sound like you were very successful at rewriting this belief, because you admit in the very same paragraph that your supposedly rewritten belief is false. What I think you probably did instead is train yourself to change the subject of your thoughts in that situation from "what will I do if they don't like me" to "what will I do if they like me", and maybe also rewrite your values so that you see being rebuffed as inconsequential and not worth thinking about. Changing the subject of your thoughts doesn't imply a change in belief unless you believe that things vanish when you stop thinking about them.

Let's suppose that when you believe you have a chance X of succeeding, you actually have a chance 0.75X of succeeding (because you can't stop your beliefs from influencing your behavior). The winning strategy seems to be to believe in 100% success, and thus succeed in 75% of cases. On the other hand, trying too hard to find a value of X which yields exact predictions would lead you to believing in 0% success... and being right about it. So in this (not so artificial!) situation, a rationalist should prefer success to being right.

But in real life, unexpected things happen. Imagine that you somehow reprogram yourself to genuinely believe that you have a 100% chance of success... and then someone comes and offers you a bet: you win $100 if you succeed, and lose $10,000 if you fail. If you genuinely believe in 100% success, this seems like an offer of free money, so you take the bet. Which you probably shouldn't.
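To spell out the arithmetic under the 75% figure from above: the bet you think is free money actually has expected value 0.75 × $100 + 0.25 × (−$10,000) = $75 − $2,500 = −$2,425.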

For an AI, a possible solution could be this: Run your own simulation. Make this simulation believe that the chance of success is 100%, while you know that it really is 75%. Give the simulation access to all inputs and outputs, and just let it work. Take control back when the task is completed, or when something very unexpected happens. -- The only problem is to balance the right level of "unexpected"; to know the difference between random events that belong to the task, and the random events outside of the initially expected scenario.
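Here is a minimal toy sketch of that control structure in Python (all event names and the interface are made up purely for illustration, not any actual implementation): an outer controller that knows the true 75% delegates routine work to an inner policy that acts as if success were certain, and takes control back when something outside the initially expected scenario, like the bet above, shows up.

```python
TRUE_P_SUCCESS = 0.75   # the probability the outer controller knows to be accurate

def inner_policy(event):
    """The 'simulation' that believes success is 100% certain."""
    return "accept" if event == "bet_offered" else "keep_working"

def outer_controller(events):
    """Delegates routine work to the optimist; resumes control on anything unexpected."""
    for event in events:
        if event == "task_done":
            return "done"
        if event == "routine_step":
            inner_policy(event)   # let the confident sub-agent act
            continue
        # Unexpected event (e.g. the bet): take control back and evaluate it
        # with the true probability instead of the optimistic one.
        expected_value = TRUE_P_SUCCESS * 100 + (1 - TRUE_P_SUCCESS) * (-10000)
        return "decline" if expected_value < 0 else "accept"
    return "done"

print(outer_controller(["routine_step", "bet_offered"]))   # -> decline
```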

I suppose evolution gave us similar skills, though not so precisely defined as in the case of an AI. An AI simulating itself would need twice as much memory and time; instead of this, humans use compartmentalization as an efficient heuristic. Instead of having one personality that believes in 100% success and another that believes in 75%, humans just convince themselves that the chance of success is 100%, but prevent this belief from propagating too far, so they can take the benefits of the imaginary belief while avoiding some of its costs. This heuristic is a net advantage, though sometimes it fails, and other people may be able to exploit it: to use your own illusions to bring you to a logical decision that you should take the bet, while avoiding any suspicion of something unusual. -- In this situation there is no original AI which could take back control, so this strategy of false beliefs is accompanied by a rule: "if there is something very unusual, avoid it, even if it logically seems like the right thing to do." It means not trusting your own logic, which in such a situation is very reasonable.

I do this every day, correctly predicting I'll never succeed at stuff and not getting the placebo benefits. I don't dare try compartmentalization or self-delusion, for the reasons Eliezer has outlined. Some other complicating factors too. It's a big problem for me.

Be careful of this sort of argument, any time you find yourself defining the "winner" as someone other than the agent who is currently smiling from on top of a giant heap of utility.

(from "Newcomb's Problem and Regret of Rationality")

Yeah, I know that, but I'm not convinced fooling myself won't result in something even worse. Better ineffectively doing good than effectively doing evil.

As part of a fitness regime, you might try to convince yourself that "I have to do 50 press-ups every day." Strictly speaking, you don't: if you do fewer every now and again it won't matter too much. Nonetheless, if you keep that fact in mind, your will will crumble and you'll slack off too regularly. So you try to forget about it.

Kind of like an epistemic Schelling point.

An idea I got just now and haven't thought about for 5 minutes or looked for flaws in yet, but am stating before I forget it:

Unless Omega refers to human-specific brain structures, shouldn't UDT automatically "un-update" on the existence of red pandas in this case?

Also, through some unknown intuitive pathways, the unsolvedness of logical uncertainty comes up as an association.

Let me just repeat that Robin's "What Use Far Truth?" is thought-provoking and worth reading. There is also a follow-up article, posted the day after.

Seems to me the lesson is: you should use the truth in your life, because if you do not use it, then you really have no reason to care about it and you can replace it with something more useful.

Moved to Discussion.

I support your increased moderator-ness and think that I'm probably representative of LessWrong in that regard. It really makes a difference: the average quality of Main posts drastically affects whether or not I'm willing to recommend LessWrong to bright Cal students I meet or what have you.

Instrumental rationality is the one that actually matters. It's just that, regardless of your goals, figuring out what's going on is useful, hence the discussion of epistemic rationality.

The most obvious point of conflict is when further learning reaches diminishing returns. At some point, if you want to be instrumentally rational, you must actually do something.

Is IR still the one that matters if you terminally value truth?

If your terminal value is your knowledge, then the two are the same.

It's sort of like how your instrumental rationality and your ability to maximize paperclips are the same if you happen to be a paperclip maximizer.

Cool. So now there are two ways to make a safe superintelligence. You can give it terminal values corresponding to human morality, or you can make it value knowledge and truth, as suggested by Wei Dai, Richard Loosemore... an Artificial Philosopher, a Genie that Does Care.

Huh?

If it terminally values knowledge, then its instrumental rationality will be its epistemic rationality, but neither of those is your terminal rationality. From your point of view, creating an AI that consumes the universe as it tries to learn as much as it can would be a very bad idea.

I don't know what you mean by MY rationality. People who teach rationality teach the same aims and rules to everyone.

You have tacitly assumed that a knowledge-valuing SAI would never realise that turning people into computronium is wrong... that it would never understand morality, or that moral truths cannot be discovered no matter how great the cognitive resources available.

I don't know what you mean by MY rationality. People who teach rationality teach the same aims and rules to everyone.

You are suggesting we teach this AI that knowledge is all that matters. These are certainly not the aims I'd teach everyone, and I'd hope they're not the aims you'd teach everyone.

You have tacitly assumed that a knowledge-valuing SAI would never realise that turning people into computronium is wrong

It may realise that, but that doesn't mean it would care.

It might care, and it would still be a pretty impressive knowledge-maximizer if it did, but not nearly as good as one that didn't.

Of course, that's just arguing definitions. The point you seem to be making is that the terminal values of a sufficiently advanced intelligence converge. That it would be much more difficult to make an AI that could learn beyond a certain point, and continue to pursue its old values of maximizing knowledge, or whatever they were.

I don't think values and intelligence are completely orthogonal. If you built a self-improving AI without worrying about giving it a fixed goal, there probably are values that it would converge on. It might decide to start wireheading, or it might try to learn as much as it can, or it might generally try to increase its power. I don't see any reason to believe it would necessarily converge on a specific one.

But let's suppose that it does always converge. I still think there are protections it could put in place to prevent its future self from doing that. It might have a subroutine that takes the outside view, notices that it's not maximizing knowledge as much as it should, and tweaks its reward function against its bias toward being moral. Or it might predict the results of an epiphany, notice that it's not acting according to its inbuilt utility function, declare the epiphany a basilisk, and ignore it.

Or it might do something I haven't thought of. It has to have some way to keep itself from wireheading and whatever other biases might naturally be a problem of intelligence.

How many have replied to religious propaganda about the benefits of religion with disdain for invoking mere prudential benefit where truth is at stake?

That's a particular blindness of most rationalist communities - the disdain for winning through anything but epistemic rationality. I'm encouraged by the many people here who can rationally analyze winning and see that epistemic rationality is only one means to winning.

Beliefs can help you win in a great many ways. Predictive accuracy is only one of those ways, and given the interdependence of humans, predictive accuracy is probably less important for most people than having the beliefs that make for social winning.

What I prefer is to be in a place where the socially winning beliefs are the epistemically accurate beliefs, but that's more the exception than the rule in this world, at least for "far" beliefs.

Besides religious belief, probably the biggest beneficial delusion revolves around confidence and self-image. Modern social situations are much less accountable than hunter-gatherer bands. It's massively beneficial to play higher status than you are, and the easiest way to do this is to be deluded into thinking you're more competent, entitled, and well-liked than you are.

Even then, there might be other instrumental shortcomings to certain instrumental strategies, such as being religious, besides forfeiting truth, and some may be more conspicuous than others. For instance, believing in gods and an afterlife would make one all the less likely to develop life-extension techniques. Advocating happiness for its own sake based on a misconception that dulls your grip on reality is somewhere close to wireheading, I think.

I have no idea how you would avoid Belief in Belief and the self-deception that you have deceived yourself. Given that self-deception can get really meta, really fast, I have to echo EY's skepticism that such a move is even possible, though I do see what Hanson is doing.

It seems this line of inquiry should be approached with an eye to clarifying how much one truly values truth. I find myself asking "Is this my true rejection?" as I read those recent blog posts, and I find myself reluctant to announce: "If I could reject unfashionable far truth and only value near truth I would, but since I cannot, I will do my best to be a good Far Truth-Seeker."

Shmi:

Even so, ask yourself: why care so much about truth? Yes, you probably think you care about believing truth – but isn’t it more plausible that you mainly care about thinking you like truth? Doesn’t that have a more plausible evolutionary origin than actually caring about far truth?

Doesn't "mainly care about thinking you like truth" feel the same from the inside as "care about believing truth"? (Thus dissolving the question of distinction.)