One of the central focuses of LW is instrumental rationality. It's been suggested, rather famously, that this isn't about having true beliefs, but rather about "winning": systematized winning. True beliefs are often useful to this goal, but an obsession with "truthiness" is seen as counterproductive. The brilliant scientist or philosopher may know the truth, yet be ineffective. This is seen as unacceptable to many who regard instrumental rationality as the critical path to achieving one's goals. Should we all discard our philosophical obsession with the truth and become "winners"?


The River Instrumentus

You are leading a group of five people away from a deadly threat which is slowly advancing behind you. You come to a river. It looks too dangerous to wade through, but through the spray of the water you see a number of stones. They are dotted across the river in a way that might allow you to cross. However, the five people you are helping are extremely nervous, and in order to convince them to cross you will not only have to show them it's possible to cross, you will also need to look calm enough after doing it to convince them that it's safe. All five of them must cross, as they insist on living or dying together.

Just as you are about to step out onto the first stone it splutters and moves in the mist of the spraying water. It looks a little different from the others, now you think about it. After a moment you realise it's actually a person, struggling to keep their head above water. Your best guess is that this person would probably drown if they got stepped on by five more people. You think for a moment and decide that, being a consequentialist concerned primarily with the preservation of life, it is ultimately better that this person dies so the others waiting to cross might live. After all, what is one life compared with five?

However, given your need for calm and the horror of their imminent death at your hands (or feet), you decide it is better not to think of them as a person, and so you instead imagine them as simply a stone. You know you'll have to be really convincingly calm about this, so you look at the top of the head for a full hour until you utterly convince yourself that the shape you see before you is factually indicative not of a person, but of a stone. In your mind, tops of heads aren't people; now they're stones. This is instrumentally rational: when you weigh things up, the self-deception ultimately increases the number of people who will likely live, and there is no specific harm you can identify as a result.

After you have finished convincing yourself you step out onto the per... stone... and start crossing. However, as you step out onto the subsequent stones, you notice they all shift a little under your feet. You look down and see the stones spluttering and struggling. You think to yourself, "lucky those stones are stones and not people, otherwise I'd be really upset". You lead the five very grateful people over the stones and across the river. Twenty dead stones drift silently downstream.

When we weigh situations on pure instrumentality, a small self-deception makes sense. The only problem is that, in an ambiguous and complex world, self-deceptions have a notorious way of compounding each other, and they leave a gaping hole for cognitive bias to work its magic. Many false but deeply-held beliefs throughout human history have been quite justifiable on these grounds. Yet when we forget the value of truth, we can be instrumental, but we are not instrumentally rational. Rationality implies, or ought to imply, a value of the truth.


Winning and survival

In the jungle of our evolutionary childhood, humanity formed groups to survive. In these groups there was a hierarchy of importance, status and power. Predators, starvation, rival groups and disease all took the weak on a regular basis, but the groups afforded a partial protection. However, a violent or unpleasant death still remained a constant threat, and it was a particular threat to the lowest and weakest members of the group. Sometimes these individuals were weak because they were physically weak. Over time, however, groups that allowed and rewarded things other than physical strength became more successful. In these groups, discussion played a much greater role in power and status. The truly strong individuals, the winners in this new arena, were the ones who could direct conversation in their favour: conversations about who would do what, about who got what, and about who would be punished for what. Debates were fought with words, but they could end in death all the same.

In this environment, one's social status was intertwined with one's ability to win. In a debate, it was not so much a matter of what was true as of what facts and beliefs achieved one's goals. Supporting the factual position that suited one's own goals was what mattered most. Even where the stakes were low or irrelevant, it paid to prevail socially, because one's reputation guided others' limited cognition about who was best to listen to. Winning didn't mean knowing the most; it meant social victory. So when competition bubbled to the surface, it paid to ignore what one's opponent said and instead focus on appearing superior in any way possible. Sure, truth sometimes helped, but for the charismatic it was strictly optional. Politics was born.

Yet as groups grew larger, and as technology began to advance for the first time, a new phenomenon appeared. Where a group's power dynamics meant that it systematically held false beliefs, it became more likely to fail. The group that believed fire spirits guided a fire's advancement fared poorly compared with those who checked the wind and planned their means of escape accordingly. The truth finally came into its own. Yet truth, as opposed to simple belief by politics, could not be so easily manipulated for personal gain. The truth had no master. In this way it was both dangerous and liberating. And so, slowly but surely, the capacity for complex truth-pursuit became evolutionarily impressed upon the human blueprint.

However, in evolutionary terms there was little time for this new mental capacity to be completed. Some people had it more than others. It also required the right circumstances to rise to the forefront of human thought, and other conditions could easily destroy it. For example, when a person's thoughts are primed by an environment of competition, the old ways come bubbling back to the surface: the mind reverts to its more primitive state. Learning and the updating of views become increasingly difficult, because to the more primitive aspects of a person's social brain, updating one's views is a social defeat.

When we focus an organisation's culture on winning, there can be many benefits. It can create an air of achievement, to a degree. Hard work and the challenging of norms can be increased. However, we also prime the brain for social conflict. We create an environment where complexity and subtlety in conversation, and consequently in thought, is greatly reduced. In organisations where the goals and means are largely intellectual, a competitive environment creates useless conversations, meaningless debates, pointless tribalism, and little meaningful learning. There are many great examples, but I think you'd be best served watching our elected representatives at work to gain a real insight.


Rationality and truth

Rationality ought to contain an implication of truthfulness. Without it, our little self-deceptions start to gather and compound one another. Slowly but surely, they reinforce, join, and form an unbreakable, unchallengeable, yet utterly false belief system. I need not point out the more obvious examples; in human society there are many. To avoid this on LW and elsewhere, truthfulness of belief ought to inform all our rational decisions, methods and goals. Of course true beliefs do not guarantee influence or power or achievement, or anything really. In a world of half-evolved truth-seeking equipment, why would we expect that? What we can expect is that, if our goals have anything to do with the modern world in all its complexity, the truth isn't sufficient, but it is necessary.

Instrumental rationality is about achieving one's goals, but in our complex world goals manifest in many ways - and we can never really predict how a false belief will distort our actions to utterly destroy our actual achievements. In the end, without truth, we never really see the stones floating down the river for what they are.

Comments

One of the big reasons I view epistemics as fairly important is that being mistaken about your own goals seems both very common and extremely costly. See, for example, happiness research.

[anonymous], 9y

I don't think many people hold the view you're attacking. People are very aware of the risks of self-deception. There's a reason it's considered a part of the so-called Dark Arts.

Cheers for the comment. I think I perhaps should have made the river self-deception less deliberate, to create a greater link between it and the "winning" mentality. I guess I'm suggesting that there is a little inevitable self-deception incurred in the "systematised winning" and general "truth isn't everything" attitudes that I've run into so far in my LW experience. Several people have straight-up told me truth is only incidental in the common LWer's approach to instrumental rationality, though I can see there are a range of views.

[anonymous], 9y

The truth indeed is only incidental, pretty much by the definition of instrumental rationality when truth isn't your terminal goal. But surely the vast majority agree that the truth is highly instrumentally valuable for almost all well-behaved goals? Finding out the truth is pretty much a textbook example of an instrumental goal which very diverse intelligences would converge to.

I guess my argument is that when people can't see an immediate utility for the truth, they can become lazy or rationalise that a self-deception is acceptable. This occurs because truth is seen as merely useful rather than essential, or at least essential in all but the most extreme circumstances. I think this approach is present in the "truth isn't everything" interpretation of instrumental rationality. The "systematised winning" framing isn't intended to encompass this kind of interpretation, but I think the words it uses evoke too much that's tied into a problematic engagement with the truth. That's where I currently sit on the topic in any case.

[anonymous], 9y

Your parable feels forced enough that it detracts from your message. In fact, I think just having your last two paragraphs would get most of the point across without much sacrifice. (Separate comment for a different subject.)

What part do you think was forced? So far quite a few others have said they didn't mind that part so much, and that actually the second section bothered them. I'll probably make future alterations when I have spare time.

I like this post, I like the example, I like the point that science is newer than debate and so we're probably more naturally inclined to debate. I don't like the apparently baseless storytelling.

In the jungle of our evolutionary childhood, humanity formed groups to survive. In these groups there was a hierarchy of importance, status and power. Predators, starvation, rival groups and disease all took the weak on a regular basis, but the groups afforded a partial protection. However, a violent or unpleasant death still remained a constant threat, and it was a particular threat to the lowest and weakest members of the group. Sometimes these individuals were weak because they were physically weak. Over time, however, groups that allowed and rewarded things other than physical strength became more successful. In these groups, discussion played a much greater role in power and status. The truly strong individuals, the winners in this new arena, were the ones who could direct conversation in their favour: conversations about who would do what, about who got what, and about who would be punished for what. Debates were fought with words, but they could end in death all the same.

I don't know much about the environment of evolutionary adaptedness, but it sounds like you don't either. Jungle? Didn't we live on the savannah? And as for forming groups for survival, it seems just as plausible that we formed groups for availability of mates.

If you don't know what the EEA was like, why use it as an example? All you really know is about the modern world. I think reasoning about the modern world makes your point quite well in fact. There are still plenty of people living and dying dependent on their persuasive ability. For example, Adolf Hitler lived while Ernst Rohm died. And we can guess that it's been like this since the beginning of humanity and that this has bred us to have certain behaviors.

I think this reasoning is a lot more reliable, in fact, than imagining what the EEA was like without any education in the subject.

Maybe I'm being pedantic--the middle of the post is structured as a story, a chronology. It definitely reads nicely that way.

Jungle? Didn't we live on the savannah?

It didn't even occur to me to interpret “In the jungle of” literally, to the point that I didn't even notice it contained the word “jungle” until I Ctrl-F'd for it.

The metaphor's going over my head. Don't feel obligated to explain though, I'm only mildly curious. But know that it's not obvious to everyone.

Well that was a straightforward answer.

(I think the last time I heard the word “jungle” used literally to refer to rainforest was probably in Jumanji.)

(last time I heard the word "jungle" was a Peruvian guy saying his dad grew up in the jungle and telling me about Peruvian native marriage traditions)

CCC, 9y

Jungle? Didn't we live on the savannah?

I thought it was near the ocean...

Jungle? Didn't we live on the savannah?

LOL it was just a turn of phrase.

And forming groups for survival, it seems just as plausible that we formed groups for availability of mates.

Genetically speaking, mate availability is a component of survival. My understanding of the forces that increased group size is that they are more complex than either of these (big groups win conflicts for territory, but food availability (via tool use) and travel speed are limiting factors I believe; big groups only work if you can access a lot of food and move on before stripping the place barren), but I was writing a very short characterisation and I'm happy to acknowledge minor inaccuracies. Perhaps I'll think about tightening up the language or removing that part as you suggest; I probably wrote that far too casually.

For example, Adolf Hitler lived while Ernst Rohm died

Nice example. Although Hitler did die anyway, and I think a decent part of the reason was his inability to reason effectively and make strategically sound decisions. Of course, I think most people are kinda glad he was strategically irrational... In any case, I think you're right that charisma is still useful, but my suggestion is that truth-seeking (science etc.) has increased in usefulness over time, whereas charisma is probably roughly the same as it has been for a long time.

structured as a story, a chronology

Perhaps I should make the winning section more story-like, to focus on its point rather than having it read as a scientific guide to that subtopic. Or maybe I just need to rethink it... The core point seems to have been received well at least.

...my suggestion is that truth-seeking (science etc) has increased in usefulness over time, whereas charisma is probably roughly the same as it has been for a long time.

Yes, and I think it's a good suggestion. I think I can phrase my real objection better now.

My objection is that I don't think this article gives any evidence for that suggestion. The historical storytelling is a nice illustration, but I don't think it's evidence.

I don't think it's evidence because I don't expect evolutionary reasoning at this shallow a depth to produce reliable results. Historical storytelling can justify all sorts of things, and if it justifies your suggestion, that doesn't really mean anything to me.

A link to a more detailed evolutionary argument written by someone else, or even just a link to a Wikipedia article on the general concept, would have changed this. But what's here is just evolutionary/historical storytelling like I've seen justifying all sorts of incorrect conclusions, and the only difference is that I happen to agree with the conclusion.

If you just want to illustrate something that you expect your readers to already believe, this is fine. If you want to convince anybody you'd need a different article.

Cheers, now that we've narrowed down our differences that's some really constructive feedback. I think I intended it primarily as an illustration and assumed that most people in this context would probably already agree with that perspective, though this could be a bad assumption and it probably makes the argument seem pretty sloppy in any case. It'll definitely need refinement, so thanks.

EDIT> My reply attracted downvotes? Odd.

lmm, 9y

There is a cost to false beliefs. We should all know that. But the cost isn't infinite, and sometimes the benefits outweigh them. Self-deception remains a valuable tool to have in your toolbox; when you make the decision you assess the cost as best as you can using available information (like, what're the odds that a bunch of muppets would be standing in a river like that, and not already be certain to drown?).

LOL. Well, I agree with your first three sentences, but I'd also add that we systematically underestimate the costs of false beliefs because (1) at the point of deception we cannot reliably predict the future instances in which the self-deceptive belief will become a premise in a decision, and (2) in instances where we make an instrumentally poor decision due to a self-deception, we often receive diminished or no feedback (we are unaware of the dead stones floating down the river).

TrE, 9y

I'm happy that you addressed this topic. It addresses a certain failure mode of instrumental rationality that may commonly cause high-status people to make poor decisions.

However, I don't think your narrative about human civilization, the birth of politics etc. is actually necessary for your conclusion. I think at best it's dubious as far as historical accuracy goes, and entertaining as a metaphor for the different layers of human interaction with each other and the environment.

The example with the people's heads I found much more helpful for understanding your conclusion (which I share in general). It's a good post. If I could suggest a change, I would cut out the social evolution bit and fill it with more examples, counter-examples, and border cases, preferably taken from the real world.

Thanks for the useful suggestion. This appears to be emerging as a consensus. I'll probably either tidy up the second section or cut it when I have time.

Actually, I think you're wrong in thinking that LW doctrine doesn't dictate heightened scrutiny of the deployment of self-deception. At the same time, I think you're wrong to think false beliefs can seldom be quarantined, compartmentalization being a widely employed defense mechanism. (Cf., any liberal theist.)

Everyone feels a tug toward the pure truth, away from pure instrumental rationalism. Your mistake (and LW's), I think, is to incorporate truth into instrumental rationality (without really having a cogent rationale, given the reality of compartmentalization). The real defect in instrumental rationalism is that no person of integrity can take it to heart. "Values" are of two kinds: biological givens and acquired tendencies that restrict the operation of those givens (instinct and restraint). The drive for instrumental rationality is a biological given; epistemic rationality is a restraint intellectuals apply to their instrumental rationality. It is ethical in character, whereas instrumental rationality is not; and it is a seductive confusion to moralize it.

For intellectuals, the businessman's "winner" ethos--the evaluative subordination of epistemic rationality to instrumentality--is an invitation to functional psychopathy.

LW appears to be mixed on the "truthiness should be part of instrumental rationality" issue.

It seems we disagree on the compartmentalising issue. I believe self-deception can't easily be compartmentalised in the way you describe because we can't accurately predict, in most cases, where our self-deception might become a premise in some future piece of reasoning. By its nature, we can't correct it at a later date, because we are unaware that our belief is wrong. What's your reasoning regarding compartmentalizing? I'm interested in case I am overlooking something.

Your mistake (and LW's), I think, is to incorporate truth into instrumental rationality

My experience so far is that a large (50%?) part of LW agrees with you not me.

It is ethical in character, whereas instrumental rationality is not; and it is a seductive confusion to moralize it.

This is an interesting argument. In a sense I was treating the ethics as separate in this case. I'd be interested to hear a more detailed version of what you say here.

is an invitation to functional psychopathy.

There's a great quote floating around somewhere about studying the truth vs. creating the truth. I can't remember it specifically enough to find it right now... but yes I agree intellectuals will undermine their abilities if they adopt pure instrumentality.

Well, it can't still be instrumental rationality any more. I mean, suppose the value being minimized is overall suffering, and you are offered a one-time threat (with non-zero probability, and you know there are no other possible infinitary outcomes) that if you don't believe some false claim X, god will create an infinite amount of suffering. You know, before choosing to believe the false claim, that no effect of believing it will increase expected suffering enough to overwhelm the harm of not believing it.


But the real rub is: what do you say about the situation where the "people" turn out to be rocks cleverly disguised as people? You still have every indication that, in convincing yourself, you are attempting to believe a false statement, but it is actually true.

Does the decision procedure which says whatever you want it to normally say, but makes a special exception that you can deceive yourself if (description of this situation which happens to identify it uniquely in the world) holds, count as better on your account?

In other words, is it a relation to truth that you demand? In that case the rule gets better whenever you add exceptions that happen (no matter how unlikely it is) in the actual world to generate true and instrumentally useful beliefs. Or is it some notion of how one approaches the evidence?

If the latter, you seem to be committed to the existence of something like Carnap's logical probability, i.e., something deducible from pure reason that assigns priors to all possible theories of the world. This is a notoriously unsolvable problem (in the sense that it doesn't have a solution).

At the very least can you state some formal conditions that constrain a rule for deciding between actions (or however you want to model it) that captures the constraint you want?

Thanks for the reply.

If we change the story as you describe I guess the moral of the story would probably become "investigate thoroughly". Obviously Bayesians are never really certain - but deliberate manipulation of one's own map of probabilities is unwise unless there is an overwhelmingly good reason (your hypothetical would probably be one - but I believe in real life we rarely run into that species of situation).

The story itself is not the argument, but an illustration of it. The argument is: "a calculation of the instrumentality of various options ought to include a generalised weighting of the truth (resistance to self-deception), because the consequences of self-deception tend to be hidden and negative". I additionally feel that this weighting is neglected when the focus is on "winning". I can't prove the empirical part of the first claim, because it's based on general life experience, but I don't feel it's going to be challenged by any reasonable person (does anyone here think self-deception doesn't generally lead to unseen, negative consequences?).

I don't feel confident prescribing a specific formula to quantify that weighting at this time. I'm merely suggesting the weight should be something, and be significant in most situations.
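Purely as an illustration, and not a formula I'm actually proposing, here is a minimal toy sketch in Python of what "the weight should be something" could look like. The utilities and the flat penalty are invented numbers standing in for the hidden downstream costs of self-deception.

```python
# Toy illustration only: score two options by naive immediate utility, then again
# with an assumed flat penalty standing in for the hidden, hard-to-predict
# downstream costs of adopting a self-deceptive belief.

def naive_value(immediate_utility):
    # the "winning" calculus: count only the payoff you can see right now
    return immediate_utility

def truth_weighted_value(immediate_utility, self_deception, penalty=3.0):
    # 'penalty' is an invented number; its proper size is exactly what the
    # comment above declines to prescribe
    return immediate_utility - (penalty if self_deception else 0.0)

options = {
    "deceive yourself and cross now": {"utility": 5.0, "self_deception": True},
    "keep the true belief, look for another way": {"utility": 4.0, "self_deception": False},
}

for name, opt in options.items():
    print(f"{name}: naive={naive_value(opt['utility'])}, "
          f"truth-weighted={truth_weighted_value(opt['utility'], opt['self_deception'])}")
```

Under the naive scoring the self-deceptive option wins; once the assumed penalty is applied the ranking flips. That flip is the only point the sketch is meant to make.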

A psychopath would have no problem with this, by the way; he'd just step on the heads of people and be on his merry way, calm as ever.

Not much I can think of that we can do about that, except provide a system with disincentives for harmful behaviour. What we can more easily correct is the possibility of meaning well but making mistakes due to self-deception. This post attempts to examine one instance of that.

I echo people's comments about the impropriety of the just-so story.

The analogy is problematic. At best, it proves "there is a possible circumstance where a fairly poorly thought-out instrumentally rational belief is inferior to a true one". Such an example is fundamentally incapable of proving the universal claim that truth is always superior. It's also a bizarre and unrealistic example. On top of that, it actually ends in the optimal outcome.

The actor in the hypothetical likely made the correct utilitarian decision in the terms you assume. The moral thing to do for a drowning person is to save them. But if you had saved these people, you'd all have died anyway. If you don't save them, it seems like they'll almost-drown until they pass out from exhaustion, then drown, or they'll be killed by the approaching deadly threat. So without more information, there is no realistic possibility they survive anyway. Thus, you actually did the right thing and spared yourself the emotional anguish of making a hard decision.

Thanks for the feedback.

On top of that, it actually ends in the optimal outcome.

Just to clarify, no it doesn't. It's implied that the 20 deaths are worse than 5 would have been, for the consequentialist protagonist.

The analogy is problematic ... bizarre and unrealistic example.

Thanks for the feedback. It seems people have been split on this; others have also found the analogy problematic. On the other hand, an analogy doesn't usually attempt to provide proof, but to illustrate the structure of an argument in an understandable way. I don't think it's bizarre if you think of thought experiments like the Chinese Room or Descartes' evil demon.

The moral thing to do for a drowning person is to save them. But if you had saved these people, you'd all have died anyway

I think this is a more substantial attack on my writing. I probably need to improve the scenario so that it is clear that the people in the river could have been saved in the later part of the parable. Thanks.

At best, it proves "there is a possible circumstance where a fairly poorly thought-out instrumentally rational belief is inferior to a true one

Well, that's roughly right, but a bit of a limited interpretation. I'd say "truth-belief has a significant instrumental value beyond what we can derive from our immediate circumstances and goals" and additionally "non-truth can have cumulative negative utility far beyond what immediate circumstances suggest". What I'm trying to suggest is that the uncertainty with which we go through life means that it is rational to assign truth-belief a very significant utility beyond what can be explicitly identified with our present knowledge. In other words, be sloppy with the truth and you'll never even know you failed. I write this because "rationality is winning" and the raw math of most people's approach to instrumental rationality neglect this generalised component entirely, focusing only on how the truth helps us in the here and now. I hope I at least partially communicated this in the parable.