Rationality is pretty great. Just not quite as great as everyone here seems to think it is.

-Yvain, "Extreme Rationality: It's Not That Great"

The folks most vocal about loving "truth" are usually selling something. For preachers, demagogues, and salesmen of all sorts, the wilder their story, the more they go on about how they love truth...

The people who just want to know things because they need to make important decisions, in contrast, usually say little about their love of truth; they are too busy trying to figure stuff out.

-Robin Hanson, "Who Loves Truth Most?"

A couple weeks ago, Brienne made a post on Facebook that included this remark: "I've also gained a lot of reverence for the truth, in virtue of the centrality of truth-seeking to the fate of the galaxy." But then she edited it to add a footnote to this sentence: "That was the justification my brain originally threw at me, but it doesn't actually quite feel true. There's something more directly responsible for the motivation that I haven't yet identified."

I saw this, and commented:

<puts rubber Robin Hanson mask on>

What we have here is a case of subcultural in-group signaling masquerading as something else. In this case, proclaiming how vitally important truth-seeking is is a mark of your subculture. In reality, the truth is sometimes really important, but sometimes it isn't.

</rubber Robin Hanson mask>

In spite of the distancing pseudo-HTML tags, I actually believe this. When I read some of the more extreme proclamations of the value of truth that float around the rationalist community, I suspect people are doing in-group signaling—or perhaps conflating their own idiosyncratic preferences with rationality. As a mild antidote to this, when you hear someone talking about the value of the truth, try seeing if the statement still makes sense if you replace "truth" with "information."

Many statements about the value of truth get this standard's stamp of approval. After all, information is pretty damn valuable. But statements like "truth seeking is central to the fate of the galaxy" look a bit suspicious. Is information-gathering central to the fate of the galaxy? You could argue that statement is kinda true if you squint at it right, but really it's too general. Surely it's not just any information that's central to shaping the fate of the galaxy, but information about specific subjects, and even then there are tradeoffs to make.

This is an example of why I suspect "effective altruism" may be better branding for a movement than "rationalism." The "rationalism" branding encourages the meme that truth-seeking is great, so we should do lots and lots of it, because truth is so great. The effective altruism movement, on the other hand, recognizes that while gathering information about the effectiveness of various interventions is important, there are tradeoffs to be made between spending time and money on gathering information vs. just doing whatever currently seems likely to have the greatest direct impact. Recognize information is valuable, but avoid analysis paralysis.
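To make that tradeoff concrete, here's a toy value-of-information calculation in Python. The numbers are made up purely for illustration; the only point is the shape of the decision: gathering information is worth it exactly when what you'd expect to learn would change your decision by more than the gathering costs.

    # Toy value-of-information sketch (hypothetical numbers, purely illustrative).
    # Decision: donate to whichever of two interventions currently looks better,
    # or first pay for a study that would reveal which one really is better?

    p_a_better = 0.6        # current credence that intervention A beats B
    impact_best = 100.0     # impact (arbitrary units) if we fund the better one
    impact_worse = 40.0     # impact if we fund the worse one
    study_cost = 15.0       # impact forgone by spending money/time on the study

    # Act now: fund A, which is the better choice with probability 0.6.
    ev_act_now = p_a_better * impact_best + (1 - p_a_better) * impact_worse  # = 76

    # Act after a (perfectly informative) study: always fund the better one,
    # but pay the study's cost.
    ev_after_study = impact_best - study_cost  # = 85

    # The study is worth running only if its cost is below the value of the
    # information it would provide (here, 100 - 76 = 24 units).
    value_of_information = impact_best - ev_act_now

    print(f"Expected impact, acting now:      {ev_act_now:.0f}")
    print(f"Expected impact, after the study: {ev_after_study:.0f}")
    print(f"Value of (perfect) information:   {value_of_information:.0f}")

With these particular numbers the study wins; make the study pricier, or the current front-runner more clearly ahead, and acting immediately wins instead.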

Or, consider statements like:

  • Some truths don't matter much.
  • People often have legitimate reasons for not wanting others to have certain truths.
  • The value of truth often has to be weighed against other goals.

Do these statements sound heretical to you? But what about:

  • Information can be perfectly accurate and also worthless. 
  • People often have legitimate reasons for not wanting other people to gain access to their private information. 
  • A desire for more information often has to be weighed against other goals. 

I struggled to write the first set of statements, though I think they're right on reflection. Why do they sound so much worse than the second set? Because the word "truth" carries powerful emotional connotations that go beyond its literal meaning. This isn't just true for rationalists—there's a reason religions have sayings like, "God is Truth" or "I am the way, the truth, and the life." "God is Facts" or "God is Information" don't work so well.

There's something about "truth"—how it readily acts as an applause light, a sacred value which must not be traded off against anything else. As I type that, a little voice in me protests "but truth really is sacred"... but once we refuse to admit any limit on how great truth is, hello affective death spiral.

Consider another quote, from Steven Kaas, that I see frequently referenced on LessWrong: "Promoting less than maximally accurate beliefs is an act of sabotage. Don’t do it to anyone unless you’d also slash their tires, because they’re Nazis or whatever." Interestingly, the original blog post included a caveat—"we may have to count everyday social interactions as a partial exception"—which I never see quoted. That aside, the quote has always bugged me. I've never had my tires slashed, but I imagine it ruins your whole day. On the other hand, having less than maximally accurate beliefs about something could ruin your whole day, but it could very easily not, depending on the topic.

Furthermore, sometimes sharing certain information doesn't just have little benefit, it can have substantial costs, or at least substantial risks. It would seriously trivialize Nazi Germany's crimes to compare that regime to the current US government, but I don't think that means we have to promote maximally accurate beliefs about ourselves to the folks at the NSA. Or, when negotiating over the price of something, are you required to promote maximally accurate beliefs about the highest price you'd be willing to pay, even if the other party isn't willing to reciprocate and may respond by demanding that price?

Private information is usually considered private precisely because it has limited benefit to most people, but sharing it could significantly harm the person whose private information it is. A sensible ethic around information needs to be able to deal with issues like that. It needs to be able to deal with questions like: is this information that is in the public interest to know? And is there a power imbalance involved? My rule of thumb is: secrets kept by the powerful deserve extra scrutiny, but so conversely do their attempts to gather other people's private information. 

"Corrupted hardware"-type arguments can suggest you should doubt your own justifications for deceiving others. But parallel arguments suggest you should doubt your own justifications for feeling entitled to information others might have legitimate reasons for keeping private. Arguments like, "well truth is supremely valuable," "it's extremely important for me to have accurate beliefs," or "I'm highly rational so people should trust me" just don't cut it.

Finally, being rational in the sense of being well-calibrated doesn't necessarily require making truth-seeking a major priority. Using the evidence you have well doesn't necessarily mean gathering lots of new evidence. Often, the alternative to knowing the truth is not believing falsehood, but admitting you don't know and living with the uncertainty.

58 comments
Louie:

2009: "Extreme Rationality: It's Not That Great"

2010: "Self-Improvement or Shiny Distraction: Why Less Wrong is anti-Instrumental Rationality"

2013: "How about testing our ideas?"

2014: "Truth: It's Not That Great"

2015: "Meta-Countersignaling Equilibria Drift: Can We Accelerate It?"

2016: "In Defense Of Putting Babies In Wood Chippers"

2016: "In Defense Of Putting Babies In Wood Chippers"

Heck, I could write that post right now. But what's it got to do with truth and such?

I think it's got something to do with countersignaling and being contrarian.

I read the "heretical" statements as talking about truth replacing falsehood. I read the non-heretical statements as talking about truth replacing ignorance. If you reword the "truth" statements to make it clear that the alternative is not falsehood, they would sound much less heretical to me.

This is an example of why I suspect "effective altruism" may be better branding for a movement than "rationalism".

Huh? What? Wait a moment....

These two are entirely different things. Under the local definitions, rationalism is making sure the map looks like the territory and doing stuff which will actually advance your goals. Notably, rationalism is silent about values -- it's perfectly possible to be a rational Nazi. You can crudely define rationalism as "being grounded in reality".

Altruism, on the other hand, is all about values. A very specific set of values.

You can't "rebrand" a movement that way -- what you imply is a wholesale substitution of one movement with another.

Altruism, on the other hand, is all about values.

We are speaking about effective altruism not altruism in general.

In practice there seems to be quite an overlap between the EA and the LW crowd and there are people deciding whether to hold EA or LW meetups.

Just because there's an overlap doesn't mean that LW should be rebranded as EA.

We are speaking about effective altruism not altruism in general.

Effective altruism is a subtype of altruism.

there seems to be quite an overlap between the EA and the LW crowd

There is also an overlap between neoreactionaries and the LW crowd. So?

There is also an overlap between neoreactionaries and the LW crowd. So?

There's only a few percent neoreactionaries, and I have yet to hear of people seriously considering whether to run a neoreactionary or an LW meetup.

Specifically, according to the 2013 survey, 2.4% of LW identifies as neoreactionary, while 28.6% identifies as effective altruist. The "reactionary" option is buried in a second-tier politics question, so I suspect it's underrepresenting LWers with neoreactionary sympathies, but I'd still be surprised if we were looking at more than single digits.

[anonymous]:

Specifically, according to the 2013 survey, 2.4% of LW identifies as neoreactionary,

Admittedly, I'd bet this is higher than the rate among the general population, if only because LW-ers are more likely to have heard of obscure ideologies at all.

Probably. LW wasn't where I met my first neoreactionary, but it was where I met my second through my fifth.

It also draws on a similar demographic: disaffected mostly-young mostly-nerds with a distrust of conventional academia and a willingness to try unusual things to solve problems.

[anonymous]:

In defense of distrusting conventional academia, I currently work in conventional academia, and it has plenty of genuine problems above and beyond the mere fact that someone on the internet might have some hurt feelings about not fitting in at graduate school (or some secret long-held resentment about taking a lucrative industry job instead of martyring themselves to the idol of Intellect by... going to grad-school).

I still trust a replicated scientific study more than most other things, but I don't necessarily trust academia anymore to have done the right studies in the first place, and I have to remind myself that studies can only allocate belief-mass between currently salient hypotheses.

Oh, I'm not saying it's a bad thing. I am after all such a mostly-young mostly-nerd.

You seemed to be saying that conventional academia doesn't do well by absolute standards, but that doesn't mean anyone else is doing better relatively.

[anonymous]:

Well yes, and that makes sense: conventional academia is one of the only organized efforts to do well at all.

while 28.6% identifies as effective altruist.

So, getting back to the original issue, does it look reasonable to "rebrand a movement" if somewhat less than a third of it identifies itself with a new brand?

Wasn't trying to stake out a claim there. Since you ask, though, I'd expect under half of LW contributors to identify as rationalists in the sense of belonging to a movement, and I wouldn't be surprised if those people were also more likely to identify as effective altruists.

The survey unfortunately doesn't give us the tools to prove this directly, but we could probably correlate meetup attendance with EA identification.

I'd expect under half of LW contributors to identify as rationalists in the sense of belonging to a movement

A good point. And speaking of, why did this whole idea of LW being a "movement" pop up?

LW is a movement like Something Awful is a movement. At least the Goonies used to be able to whistle up large fleets in Eve... X-D

And speaking of, why did this whole idea of LW being a "movement" pop up?

I imagine the Craft and the Community sequence has something to do with it.

The whole point of rebranding is that normally before you rebrand nobody identifies with the new brand.

This is an example of why I suspect "effective altruism" may be better branding for a movement than "rationalism".

I'm fairly certain ChrisHallquist isn't suggesting we re-brand rationality 'effective altruism', otherwise I'd agree with you.

As far as I can tell he was talking about the kinds of virtues people associate with those brands (notably 'being effective' for EA and 'truth-seeking' for rationalism) and suggesting that the branding of EA is better because the virtue associated with it is always virtuous when it comes to actually doing things, whereas truth-seeking leads to (as he says) analysis paralysis.

the kinds of virtues people associate with those brands (notably 'being effective' for EA and 'truth-seeking' for rationalism) and suggesting that the branding of EA is better because the virtue associated with it is always virtuous when it comes to actually doing things,

The virtue of "being effective" is not always virtuous unless you're willing to see virtue in constructing effective baby-mulching machines...

I think we’re using different definitions of virtue. Whereas I’m using the definition of virtue as a good or useful quality of a thing, you’re taking it to mean a behavior showing high moral standards. I don’t think anyone would argue that the 12 virtues of rationality are moral, but it is still a reasonable use of English to describe them as virtues.

Just to be clear: The argument I am asserting is that ChrisHallquist is not in any way suggesting that we should rename rationality as effective altruism.

I hope this makes my previous comment clearer :)

I was not signaling. Making it a footnote instead of just editing it outright was signaling. Revering truth, and stating that I do so, was not.

Now that I've introspected some more, I notice that my inclination to prioritize the accuracy of information I attend to above its competing features comes from the slow accumulation of evidence that excellent practical epistemology is the strongest possible foundation for instrumental success. To be perfectly honest, deep down, my motivation has been "I see people around me succeeding by these means where I have failed, and I want to be like them".

I have long been more viscerally motivated by things that are interesting or beautiful than by things that correspond to the territory. So it's not too surprising that toward the beginning of my rationality training, I went through a long period of being so enamored with a-veridical instrumental techniques that I double-thought myself into believing accuracy was not so great.

But I was wrong, you see. Having accurate beliefs is a ridiculously convergent incentive, so whatever my goal structure, it was only a matter of time before I'd recognize that. Every utility function that involves interaction with the territory--interaction of just about any kind!--benefits from a sound map. Even if "beauty" is a terminal value, "being viscerally motivated to increase your ability to make predictions that lead to greater beauty" increases your odds of success.

Recognizing only abstractly that map-territory correspondence is useful does not produce the same results. Cultivating a deep dedication to ensuring every motion precisely engages reality with unfailing authenticity prevents real-world mistakes that noting the utility of information, just sort of in passing, will miss.

For some people, dedication to epistemic rationality may most effectively manifest as excitement or simply diligence. For me, it is reverence. Reverence works in my psychology better than anything else. So I revere the truth. Not for the sake of the people watching me do so, but for the sake of accomplishing whatever it is I happen to want to accomplish.

"Being truth-seeking" does not mean "wanting to know ALL THE THINGS". It means exhibiting patters of thought and behavior that consistently increase calibration. I daresay that is, in fact, necessary for being well-calibrated.

...my motivation has been "I see people around me succeeding by these means where I have failed, and I want to be like them".

Seems like noticing yourself wanting to imitate successful people around you should be an occasion for self-scrutiny. Do you really have good reasons to think the things you're imitating them on are the cause of their success? Are the people you're imitating more successful than other people who don't do those things, but who you don't interact with as much? Or is this more about wanting to affiliate with the high-status people you happen to be in close proximity to?

It is indeed a cue to look for motivated reasoning. I am not neglecting to do that. I have scrutinized extensively. It is possible to be motivated by very simple emotions while constraining the actions you take to the set endorsed by deliberative reasoning.

The observation that something fits the status-seeking patterns you've cached is not strong evidence that nothing else is going on. If you can write off everything anybody does by saying "status" and "signaling" without making predictions about their future behavior--or even looking into their past behavior to see whether they usually fit the patterns--then you're trapped in a paradigm that's only good for protecting your current set of beliefs.

Yes, I do have good reasons to think the things I'm imitating are causes of their success. Yes, they're more successful on average than people who don't do the things, and indeed I think they're probably more successful with respect to my values than literally everybody who doesn't do the things. And I don't "happen" to be in close proximity to them; I sought them out and became close to them specifically so I could learn from them more efficiently.

I am annoyed by vague, fully general criticisms that don't engage meaningfully with any of my arguments or musings, let alone steel man them.

Jack:

Truth-telling seems clearly overrated (by people on Less Wrong but also pretty much everyone else). Truth-telling (by which I mean not just not-lying but going out of your way and sacrificing your mood, reputation or pleasant socializing just to say something true) is largely indistinguishable from "repeating things you heard once to signal how smart or brave or good you are."

Truth-seeking as in observing and doing experiments to discover the structure of the universe and our society still seems incredibly important (modulo the fact that obviously there are all sorts of truths that aren't actually significant). And I actually think that is true even if you call it information gathering, though 'information gathering' is certainly vastly less poetic and lacks the affective valence of Truth.

'information gathering' is certainly vastly less poetic and lacks the affective valence of Truth.

"Information gathering" also suggests a stroll in the park, gathering up the information that is just lying around. Getting at the truth is generally harder than that.

There's simply more territory than our maps can encompass. If you want to get anything done, there comes a point when you have to act on information that's not ideally complete. What's more, you aren't going to have complete information about when to stop searching.

Any recommendations for discussions of this problem?

"Effective altruism" is at risk of turning into a signal, though perhaps not quite as quickly are "rationality" or "truth-seeking".

"Corrupted hardware"-type arguments can suggest you should doubt your own justifications for deceiving others.

You should even more doubt your motivations for deceiving yourself.

Edit: For the following, clicking "agree" is supposed to mean that you consider the statement heretical.

"Some truths don't matter much." sounds heretical [pollid:691]

"People often have legitimate reasons for not wanting others to have certain truths." sounds heretical [pollid:692]

"The value of truth often has to be weighed against other goals." sounds heretical [pollid:693]

"Information can be perfectly accurate and also worthless." sounds heretical [pollid:694]

"People often have legitimate reasons for not wanting other people to gain access to their private information. " sounds heretical [pollid:695]

"A desire for more information often has to be weighed against other goals." sounds heretical [pollid:696]

Jack:

What is meant by heretical?

I personally simply copied the wording in the article above and wanted to test whether the claim is true. It seems indeed to be the case that there are a bunch of people who consider the statement about "truth" more heretical than "information".

I don't know how ChristianKl meant it, but in general it appears to mean either (1) "this idea is so utterly false that it must be strenuously opposed every time it rears its head", or (2) "the crowd say that this idea is so utterly false that it must be opposed every time it rears its head, therefore I shall defiantly proclaim it to demonstrate my superior intellect".

The very concept of "heresy" presupposes that arguments are soldiers and disagreement is strife. "Heresy" is a call to war, not a call to truth.

So if we have a heresy, then exposing it as actually true would be good, because we want to know the truth - hang on.

The first and third ones, about info sometimes being worthless, just made me think of Vaniver's article on value of information calculations. So, I mean, it sounded very LessWrongy to me, very much the kind of thing you'd hear here.

The second one made me think of nuclear secrets, which made me think of HPMOR. Again, it seems like the kind of thing that this community would recognize the value of.

I think my reactions to these were biased, though, by being told how I was expected to feel about them. I always like to subvert that, and feel a little proud of myself when what I'm reading fails to describe me.

i'm into epistemic rationality, but this all seems pretty much accurate and stuff

not sure what to conclude from having that reaction to this post.

satt:

My attempt to boil the post down to a one sentence conclusion: being super into epistemic rationality is a very good thing, but it is not the only good thing.

oh, sure

satt:

I'd like to think it was always obvious, but it's often worth explicitly spelling out things that ought to be obvious.

Seems fairly uncontroversial to me, but that's likely because it stays far-mode. If you get specific and near-mode, I suspect you'll stir up some disagreement. Leave aside which beliefs you'd rather other people have or not have - that's a separate dark arts topic. For your own goal-achievement-ability, which true beliefs are you better off not having?

I completely agree that I have limited resources and need to prioritize which beliefs are important enough to spend resources on. I far less agree that true beliefs (in the paying rent sense of the word, those which have correct conditional probability assignments to your potential actions) ever have negative value.

satt:

For your own goal-achievement-ability, which true beliefs are you better off not having?

Nick Bostrom suggests some examples in his "Information Hazards" paper.

[anonymous]:

That wasn't nearly as exciting as it sounded.

Good stuff. It took me quite a long time to work these ideas out for myself. There are also situations in which it can be beneficial to let somewhat obvious non-truths continue existing.

Example: your boss is good at doing something, but their theoretical explanation for why it works is nonsense. Most of the time questioning the theory is only likely to piss them off, and unless you can replace it with something better, keeping your mouth shut is probably the safest option.

Relevant post:

http://cognitiveengineer.blogspot.com/2013/06/when-truth-isnt-enough.html

What happens when you try to replicate what your boss is doing? For example when you decide to start your own competing company.

Then I suspect it would be useful to know the truths like "my boss always says X, but really does Y when this situation happens", so that when the situation happens, you remember to do Y instead of X. Even if for an employee, saying "you always say X, but you actually do Y" to your boss would be dangerous.

So, some truths may be good to know, while dangerous to talk about in front of people who have a negative reaction to hearing them. You may remember that "X" is the proper thing to say to your boss, and silently remember that "Y" is the thing that probably contributes to the success in the position of your boss.

Replacing your boss is not the only situation where knowing the true boss-algorithm is useful. For example knowing the true mechanism how your boss decides who will get bonus and who will get fired.

Truth is really important sometimes, but so far I've been bad about identifying when.

I know a fair bit about cognitive biases and ideal probabilistic reasoning, and I'm pretty good at applying it to scientific papers that I read or that people link through Facebook. But these applications are usually not important.

But, when it comes to my schoolwork and personal relationships, I commit the planning fallacy routinely, and make bad predictions against base rates. And I spend no time analyzing these kinds of mistakes or applying what I know about biases and probability theory.

If I really operationalized my belief that only some truths are important, I'd prioritize truths and apply my rationality knowledge to the top priorities. That would be awesome.

I am curious; what is the general LessWrong philosophy about what truth "is"? Personally I so far lean towards accepting an operational subjective Bayesian definition, i.e. the truth of a statement is defined only so far as we agree on some (in principle) operational procedure for determining its truth; that is we have to agree on what observations make it true or false.

For example "it will rain in Melbourne tomorrow" is true if we see it raining in Melbourne tomorrow (trivial, but also means that the truth of the statement doesn't depend on rain being "real", or just a construction of Descartes' evil demon or the matrix, or a dream, or even a hallucination). It is also a bit disturbing because the truth of "the local speed of light is a constant in all reference frames" can never be determined in such a way. We could go to something like Popper's truthlikeness, but then standard Bayesianism gets very confusing, since we then have to worry about the probability that a statement has a certain level of "truthlikeness", which is a little mysterious. Truthlikeness is nice in how it relates to the map-territory analogy though.

I am inclined to think that standard Bayesian-style statements about operationally-defined things based on our "maps" make sense, i.e. "If I go and measure how long it takes light to travel from the Earth to Mars, the result will be proportional to c" (with this being influenced by the abstraction that is general relativity), but it still remains unclear to me precisely what this means in terms of Bayes' theorem: the probability P("measure c" | "general relativity") implies that P("general relativity") makes sense somehow, though the operational criteria cannot be where its meaning comes from. In addition we must somehow account for the fact that "general relativity" is strictly false, in the "all models are wrong" sense, so we need to somehow rejig that proposition into something that might actually be true, since it makes no sense to condition our beliefs on things we know to be false.
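To make the worry a bit more concrete, the decomposition I have in mind is just ordinary Bayesian model averaging (writing D for the operational statement "I measure c", and with the notation being mine):

    % Ordinary Bayesian model averaging over candidate theories M
    % (general relativity among them); D is an operational prediction.
    \[
      P(D) \;=\; \sum_{M} P(D \mid M)\, P(M)
    \]

The left-hand side is operational and unproblematic; it's the prior P(M) over theories, each of which is strictly speaking false, whose meaning I find unclear.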

I suppose we might be able to imagine some kind of super-representation theorem, in the style of de Finetti, in which we show that degrees of belief in operational statements can be represented as the model average of the predictions from all computable theories, hoping to provide an operational basis for Solomonoff induction, but actually I am still not 100% sure what de Finetti's usual representation theorem really means. We can behave "as if" we had degrees of belief in these models weighted by some prior? Huh? Does this mean we don't really have such degrees of belief in models but they are a convenient fiction? I am very unclear on the interpretation here.
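For reference, the binary version of the theorem I mean is the following; I'm stating it from memory, so treat the details as a sketch rather than gospel:

    % De Finetti's representation theorem (binary case): if X_1, X_2, ... is an
    % infinitely exchangeable sequence of {0,1}-valued random variables, then
    % there is a unique probability measure mu on [0,1] such that for every n:
    \[
      P(X_1 = x_1, \dots, X_n = x_n)
        \;=\; \int_0^1 \theta^{\sum_i x_i}\,(1-\theta)^{\,n-\sum_i x_i}\, d\mu(\theta)
    \]

So an exchangeable forecaster is mathematically indistinguishable from someone with a prior mu over an unknown "bias" theta, which is exactly why I can't tell whether the prior over models is "real" or just a convenient representation of betting behaviour.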

The map-territory analogy does seem correct to me, but I find it hard to reconstruct ordinary Bayesian-style statements via this kind of thinking...

I am curious; what is the general LessWrong philosophy about what truth "is"?

To the extent that there is a general philosophy, it's http://lesswrong.com/lw/eqn/the_useful_idea_of_truth/ but individual people might differ slightly.

Hmm, thanks. Seems similar to my description above, though as far as I can tell it doesn't deal with my criticisms. It is rather evasive when it comes to the question of what status models have in Bayesian calculations.

Truth, in the sense of a one-to-one correspondence between the map and the territory[1], is only useful if you're able to accurately navigate an accurate map.

However, if, when navigating an accurate map, you still veer to the left when trying to reach your destination, you're faced with two choices: 1) Un-value truth, and use whatever map gets you to your destination no matter the relation between the "map" and the territory. 2) Terminally value truth, damn the disutility of doing so!

[1] For convenience, I assume that the territory exists. (For some definitions of existence.)

What would cause you to veer? Bias? Akrasia?

And what would bring you back on track? Wholesale disdain for truth? Or a careful selection of useful lies?

Suppose person A is always 5 minutes late to every appointment. Someone secretly adjusts A's watch to compensate for this, and now person A is always on-time.

Now, A is being fed misinformation continuously (the watch is never correct!), and yet, judging by behavior, A is extremely competent in navigating the world.

(Since "every truth is connected", there is a problem with person B asking A for the time and so on, but suppose A uses the clock on the cellphone to synchronize the time against everyone else.)

keen:

Human brains do experience in-group reinforcement, so we ought to aim that reinforcement at something like truth-seeking, which tends to encourage meta-level discussions like this one, thus helping to immunize us against death spirals. Note that this requires additional techniques of rationality to be effective. Consider that some truths--like knowing about biases--will hurt most people.

But that's just your opinion.

The claims about truth mostly looked ambiguous to me. There are differences between truth-telling, truth-preferring, and truth-seeking... which can pull in different directions. "Know all, say nowt."

[anonymous]:

Truth is completely and utterly worthless, except for its being instrumentally useful for every single thing ever.

THAT IS THE TRUTH OF THIS WORLD! SUBMIT TO THAT TRUTH, YOU PIGS IN HUMAN CLOTHING!

(There, threw in some pointless signalling.)

Your post still seems to assume that "rationalism" actually has something to do with seeking truth, rather than with seeking, for example, self-congratulation about truth.

To give a concrete example, there's a lot of truths to learn in physics.

A lot of those truths are in the physics textbooks and the rest require very hard work to figure out. So, when someone's interested in knowing truths that have to do with physics, they learn the math, they study physics, they end up able to answer questions from the homework section of a textbook, and sometimes, even able to answer the unanswered questions. You get a fairly reasonable island of knowledge, not a protruding rock of "MWI is true" in the vast sea of near total ignorance.