
Holy shit.

We evolved to make sense of this nonlinear and unpredictable world with stories. These stories are often very powerful. On one hand the work of Kahneman et al on ‘irrationality’ has given an exaggerated impression. The fact that we did not evolve to think as natural Bayesians does not make us as ‘irrational’ as some argue. We evolved to avoid disasters where the probability of disaster X happening was unknowable but the outcome was fatal. Rationality is more than ‘Bayesian updating’. On the other hand our stories do often obscure the branching histories of reality and they remain the primary way in which history is told. The mathematical models that illuminate complex reality in the physical sciences do not help us much with history yet. Only recently has reliable data science begun to play an important role in politics.

This was not the content of an article I expected to be written by the mind behind Brexit.

And

Generally the better educated are more prone to irrational political opinions and political hysteria than the worse educated far from power. Why? In the field of political opinion they are more driven by fashion, a gang mentality, and the desire to pose about moral and political questions all of which exacerbate cognitive biases, encourage groupthink, and reduce accuracy. Those on average incomes are less likely to express political views to send signals; political views are much less important for signalling to one’s immediate in-group when you are on 20k a year. The former tend to see such questions in more general and abstract terms, and are more insulated from immediate worries about money. The latter tend to see such questions in more concrete and specific terms and ask ‘how does this affect me?’. The former live amid the emotional waves that ripple around powerful and tightly linked self-reinforcing networks. These waves rarely permeate the barrier around insiders and touch others.

Something for LWers to think about. Being smart can make you more susceptible to some biases.

Being smart can make you more susceptible to some biases.

Agreed, but Dominic is making a much stronger claim in this excerpt, and I wish he would provide more evidence. It is a big claim that

  • the more educated are prone to irrational political opinions
  • those on average incomes are less likely to express political opinions to send signals.

These are great anecdotes but have there been any studies indicating a link between social status and willingness to express political views?

the more educated are prone to irrational political opinions

I'm quite sure that this is wrong actually - that more educated folks still have better opinions about policy, but only weakly so. Bryan Caplan has pointed this out in his work on the irrationality of the common voter. The claim becomes right, though, when you control for rational judgment in private, non-political contexts - education greatly improves that, and you would expect it to have the same effect on political judgments. But it doesn't, really.

those on average incomes are less likely to express political opinions to send signals.

It's often pointed out that lower-income folks tend to be politically apathetic, so to the extent that they do have opinions on policy you would expect these to be less influenced by signaling dynamics. But signaling is not the only source of error (involving both random noise and persistent bias) in political judgments!

more educated folks still have better opinions

In many cases yes I agree.

He argues that very few people, educated or not, actually have any strong factual or logical basis for their opinions.

But I think the more important point is that educated people do have some specific failure modes. From other sources:

  • Hate to admit they are wrong.

  • Over-complicate things.

  • Tend to privilege theory over observation and simple heuristics.

  • Focus on being right versus winning.

  • Deny the existence of things they don't understand.

  • Fail to communicate with people of average intelligence / typical mind fallacy.

It seems like Brexit was basically a small group of rationalists hijacking history. Remain would overwhelmingly likely have won without the competence of the Leave campaign. Pretty impressive.

Of course I'm sure there's another side to this story, so take it with a pinch of salt.

They are rationalists in the sense that they won through the application of their intellect. I don't think that they were rationalists in the sense that they want to raise the rationality waterline.

I was hoping for the rational reason why Britain should have left (I'm only a portion of the way through), instead of the lies about spending the money currently given to the EU on the NHS (which they had no way of influencing, and they made no plan for the current projects that would lose EU funding).

They are rationalists in the sense that they won through the application of their intellect. I don't think that they were rationalists in the sense that they want to raise the rationality waterline.

While we shouldn't get hung up on definitions, I'm pretty sure the most common meaning of "rationalist" in this community is the former, not the latter.

Things may have changed in 8 years. I'm not sure if you noticed that I got the phrase, "raising the sanity waterline," from this site's founder.

I'm a bit sad that this has been lost, if it has.

He says "rationalists have a lot of work to do", but I don't think that implies "people who don't want to do this work are not rationalists".

If someone uses their brainpower to reliably win, but isn't interested in helping others do the same, I think you could say something like "they are rationalists, but not our kind of rationalists". That would be totally reasonable. But I don't think you could say "they aren't rationalists". The argument that exploiting others' irrationality is net bad in the long run, is fairly specific and not obviously true for all sets of values and beliefs.

(This is a separate question from whether Cummings and Vote Leave are such people.)

If someone uses their brainpower to reliably win, but isn't interested in helping others do the same, I think you could say something like "they are rationalists, but not our kind of rationalists".

I'm not sure this is even reasonable. There's a quiet majority of people on this site and other rationality blogs and in the real world (including Dominic Cummings, apparently) who learn these techniques and use their rationalist knowledge to "win." And they don't give back, other than their actions on the world stage. And personally, I think that's okay. Not everyone needs to take on the role of teacher.

FWIW I agree with this, but it wasn't necessary to the point I was making and I didn't feel like defending it.

I consider what constitutes a modern rationalist to be up for our own definition. My wife gets annoyed that LW-style rationalists aren't like philosophical rationalists.

I don't know of any other community apart from this one that uses "rationalist" to mean someone who uses their brainpower to reliably win.

Cummings probably doesn't consider himself a rationalist (he used the term pejoratively in the article). So I considered The_Jaded_One's comment describing them as rationalist as being akin to saying they were part of the in-group, someone to be admired/emulated.

I'm an uneasy sometime member of the "rationality" community; I've been to a few LW meetups. So I'm interested in what people mean when they say someone is a rationalist. Is that the sort of person I will be hanging out with if I go again?

So, the thing I want to emphasize is that the community is not about the community. Other communities are about nothing more than themselves, and that's fine, but this community has purpose. We can't define a rationalist as being a member of the community, or that purpose gets lost.

So we have to be able to ask whether someone is a rationalist without talking about the community. We might decide later that yes, they're a rationalist, but all the same we don't think they're a good person and we don't want them in the community; but those have to be separate questions.

(Other communities use the word "rationalist" differently to us, and that's fine too. I don't claim there's an objective definition of the word. Just that we need to use a definition that doesn't talk about us.)

Similarly, rationality can't be about rationality. If the goal of rationality is merely to spread rationality, then rationality might as well be herpes. If the goal is "to win, and also to spread rationality", you have to ask what happens if those two goals conflict. Maybe you go for something like "the goal of rationality is to win, on the condition that part of winning means spreading rationality", but that seems like an unnatural carving of concept-space. And I question what the point is; if the point is simply that there are certain types of people who win but who we don't like very much, then we're going about it wrong. Instead of excluding them from the definition of "rationalists", we can just exclude them from the community.

All that said, I personally wouldn't exclude Cummings from the community, not based on this post. I don't think you're likely to meet many people like him at LW meetups, but as far as I'm concerned he'd be welcome at the London group.

And I question what the point is; if the point is simply that there are certain types of people who win but who we don't like very much, then we're going about it wrong. Instead of excluding them from the definition of "rationalists", we can just exclude them from the community.

But the community is defined as a rationalist community, right? Not a specific type of rationalist, just plain simple rationalists. If we can't explain why some people would be excluded from it, then the community seems ill-defined and likely to drift and fall apart. Why do we even have one?

We could define rationalists as any of the following:

  • people who want to use their brain meats to make humanity win (i.e. not lose and go extinct)
  • people who want to have correct and useful beliefs about the world and spread those beliefs and the methods of generating those beliefs aka epistemic rationalists.

Either of those would fit a large part of the people on LessWrong and capture bits of the spirit of CFAR.

A community that is truly only about "people that use their brain to win" has very little useful to say to each other. Under many goal/belief systems I should hide my goals and beliefs so that people can't interfere with them. I should actively mess up other people's goal and belief systems so that they are ineffectual agents.

You could, for example, use user research and marketing to generate highly persuasive materials to convince people to join an evangelical church and get lots of money from those people. If your goal was simply to get lots of money, would you count as a rationalist?

I think communities are always ill-defined, and just because we're a rationalist community doesn't mean we have to include every rationalist. We don't need a formal account of who is and isn't welcome.

We already have some definition, "rationalist". I think that definition isn't very good at letting people know in advance who they will be interacting with and helping. We could improve that definition without making it too formal.

If the point of "rationality" is evangelism, count me out. But anyway if you want to point to EY quotes, then consider "rationalists win" or the 12 virtues of rationality (which are about winning, not evangelizing).

It is not about evangelising for me. It is about not using tool sets that rely on other people being irrational. If your incentives are to keep people uninformed so that they will do what you want and you "win" then you are reinforcing the status quo of a world of misinformation/fraud and spin. This I think will cause us all to lose long term.

If you read the whole thing (quite an ask, I know!) then Cummings does go into how he thinks we can fix politics.

He also gives his argument as to why leave was the right choice, but that section is fairly brief.

lies about spending the money currently given to the EU on the NHS

What they said is that in the longer run, money that used to go to the EU could be redirected to domestic priorities, including the NHS. And many current destinations of "EU funding" are quite silly indeed - do you think paying wealthy English landowners to mismanage their land is a good use of funding, whether "EU" or otherwise?

I'm cynical enough to think that big landowners will still get paid to mismanage their land. They managed to get the EU to do it, I suspect they'll manage to get Britain outside the EU to do it.

I'm intrigued to find out Cummings's solution to the problem of the political classes; I've not found it in all the verbiage yet though.

ctrl+f "Why do it?" and "The political media and how to improve it" in the article

I think Cummings wants to "raise the sanity waterline." But rather than argue about that, I think a better definition of "rationalist" is someone who writes about how to think and how to win, particularly in a way comprehensible to LW. He certainly fits that definition.

(I would like to exclude Scott Adams who claims to write about these subjects, and from whom I do learn, but who does not write precisely.)

Maybe Leave won regardless of or even despite my ideas. Maybe I’m fooling myself like Cameron. Some of my arguments below have as good an empirical support as is possible in politics (i.e. not very good objectively) but most of them do not even have that. Also, it is clear that almost nobody agrees with me about some of my general ideas. It is more likely that I am wrong than 99% of people who work in this field professionally.

He himself warns against construing him as too influential. In this case Scott's caveat applies: elections that are won by a slim margin don't say much of significance.

His argument is that although Leave won by a small majority, it should have lost by a very large majority (for various reasons, particularly that the status quo has an advantage in these things) and that that is the large difference we should be thinking about.

I'm pretty sure that in Trump vs. Clinton, Clinton would have won by a large majority if Trump didn't campaign. But it would be silly to say "Trump should have lost by a large majority" on that basis.

Saying "one side should have lost because of X" implies that X has outsized effect on one side compared to the other. But telling political stories is, like campaigning, something that both sides do and which they pretty much have to do to have a reasonable chance at winning.

I think the comparison in the case of Cummings and Brexit is to what other pro-Leave campaigns would have done, rather than to no campaign at all.

The point is that saying "they wouldn't have won if they didn't do X", in a context where you are trying to say something useful, implies that X is some special thing that was only done by them, not that X is something that everyone does. Nobody says "Trump would have lost if he had failed to breathe", because everyone running a campaign needs to breathe and saying that you don't win if you don't breathe is obvious, trivial, and tells you nothing special about Trump.

And "the pro-Brexit campaign did special things which the anti-Brexit campaign did not also do" has not been well-supported here.

Well according to the article, he and his team did do special things. Of course you may not believe that, but he presents a plausible narrative.

Clinton would have won by a large majority if Trump didn't campaign

I wonder what would have happened if Trump had run a very boring, straight-laced campaign though?

I think it's fair to argue that elections that are won by a slim margin don't say much of significance about discrete narrative changes in the weeks leading up to the election. That could be false though, if for example we view Trump winning the election as a 'treatment' effect, which gives him a new discrete ability to change the narrative.

But more generally, I think an election such as Brexit does give us a significant story, not necessarily for the week leading up to it, but for the changing preferences of a population in the year or two leading up to it and the invocation of the election itself.

An argument for embracing, not avoiding "mind killing" politics?

The problem with politics is that discussion of it tends to devolve into a toxic mess that serves no useful purpose, doesn't inform anyone, and doesn't make the site better.

Sure, there are benefits to be had from discussing politics on a rationality site, but I can see the argument against it: previous attempts have devolved into the toxic mess instead of yielding any insight.

This thread seems to not fit that pattern. The only annoying content is related to moderation.

This thread doesn't fit that pattern largely because LW users are aware of the problems with talking about politics and are more likely to stay on the meta-level as a response to that. There is, in fact, not a single argument for/against Brexit in this thread, which I think is a shining advertisement for LW comment culture. On the other hand, I think this article is also particularly well-suited for not immediately inspiring object-level argument, at least as long as it's not posted on /r/news or similar.

Part of the reason is also that this is a UK issue and most LessWrong readers are not from there, so people have a little bit more of an outsider's or non-tribalist perspective on it (although almost all LW commenters would certainly have voted for Remain).

Yeah, I mean I think there are successes and failures, and I personally think that LW should try to talk more about "real" issues like politics.

This was not the content of an article I expected to be written by the mind behind Brexit.

Why? Rationalists are more likely to embrace weird or counterintuitive positions supported by chains of reasoning. I don't mean this as a bad thing. I would think the probability of a rationalist being behind a weird and unconventional position is higher than baseline.

Right, and he addresses this in the article:

This lack of motivation is connected to another important psychology – the willingness to fail conventionally. Most people in politics are, whether they know it or not, much more comfortable with failing conventionally than risking the social stigma of behaving unconventionally. They did not mind losing so much as being embarrassed, as standing out from the crowd. (The same phenomenon explains why the vast majority of active fund management destroys wealth and nobody learns from this fact repeated every year.)

We plebs can draw a distinction between belief and action, but political operatives like him can't. For "failing conventionally", read "supporting the elite consensus".

Now, 'rationalists', at least in the LW sense (as opposed to the broader sense of Kahneman et al.), have a vague sense that this is true, although I'm not sure if it's been elaborated on yet. "People are more interested in going through the conventional symbolic motions of doing a thing than they are in actually doing the thing" (e.g. "political actors are more interested in going through the conventional symbolic motions of working out which side they ought to be on than in actually working it out") is widespread enough in the community that it's been blamed for the failure of MetaMed. (Reading that post, it sounds to me like it failed because it didn't have enough sales/marketing talent, but that's beside the point.)

Something worth noting: the alternate take on this is that, while most people are more interested in going through the conventional symbolic motions of doing a thing than they are in actually doing the thing, conventional symbolic motions are still usually good enough. Sometimes they aren't, but usually they are -- which allows the Burkean reading that the conventional symbolic motions have actually been selected for effectiveness to an extent that may surprise the typical LW reader.

It should also be pointed out that, while we praise people or institutions that behave unconventionally to try to win when it works (e.g. Eliezer promoting AI safety by writing Harry Potter fanfiction, the Trump campaign), we don't really blame people or institutions that behave conventionally and lose. So going through the motions could be modeled purely by calculation of risk, at least in the political case: if you win, you win, but if you support an insurgency and lose, that's a much bigger deal than if you support the consensus and lose -- at least for the right definition of 'consensus'. But that can't be a complete account of it, because MetaMed.

We evolved to avoid disasters where the probability of disaster X happening was unknowable but the outcome was fatal.

Most definitely not.
If the probability of something is unknowable, we die. We might avoid things that we don't know how to calculate exactly, so we buffer with loss aversion. But we most definitely do not have a grasp on ungraspable things.
There's a big difference between 'unknowable' and 'unknowable with precision'.

I think that's what he's trying to say. We evolved to be risk averse and specifically to avoid things that sounded really bad even if we didn't know how common they were.

I don't think he's saying that we evolved to avoid disasters that we couldn't possibly see coming. Because we clearly didn't.

But it is not clear at all why stories do not approximate Bayesian updating. Stories do allow us to reach far into the void of space which cannot be mapped immediately from sensory data, but stories also mutate and get forgotten based on how useful they are, which at least resembles Bayesian updating. The question is whether this kind of filtering throws off the approximation so far that it is qualitatively a different computation.

I don't think we can say that the mutation or loss of stories is very close to Bayesian updating. It may be a form of natural selection, and maybe sometimes the trait being selected for is "truth", but very often it's going to be something other than truth. Memes mutate in order to be more viral, and may lose truth on the way.

Stories about big, shocking, horrible events are more memetically contagious and will thus look more probable, if you're assuming that their memetic availability reflects their likelihood.

Even if stories are selected for plausibility, truth, and whatever else leads most directly to maximal reward only once in a while, that would probably still be equivalent to Bayesian updating, just obscured by an enormous amount of noise.

Natural selection is Bayesian updating too: http://math.ucr.edu/home/baez/information/information_geometry_8.html
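For anyone who wants that correspondence spelled out: a discrete-generation replicator update has exactly the form of Bayes' rule, with current population shares playing the prior and relative fitnesses playing the likelihood. A minimal sketch with made-up numbers (my own illustration, not taken from the linked post):

```python
# Discrete-generation replicator dynamics written next to Bayes' rule:
# the two updates are the same arithmetic. All numbers are illustrative.

shares = [0.5, 0.3, 0.2]   # genotype frequencies -- plays the role of the prior
fitness = [1.0, 1.5, 0.5]  # relative fitnesses   -- plays the role of the likelihood

# Replicator update: reweight each share by its fitness, then renormalize
# by the mean fitness of the population.
mean_fitness = sum(s * f for s, f in zip(shares, fitness))
next_shares = [s * f / mean_fitness for s, f in zip(shares, fitness)]

# Bayes' rule: posterior = prior * likelihood / evidence, where the
# "evidence" normalizer is exactly the mean fitness above.
posterior = [s * f / mean_fitness for s, f in zip(shares, fitness)]

assert next_shares == posterior
print(next_shares)  # approx. [0.476, 0.429, 0.095]
```

The identity is purely formal, of course; whether the "likelihood" tracks anything like truth is the question being argued above.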

I don't think you can justify using the word "equivalent" like that. I think maybe you mean "evolution and memetics are similar to Bayesian updating in some ways". That is not the same thing as "equivalence". It is not really helpful to take a very specific thing and say that it is "equivalent" to other very very different things, especially if such a comparison does not help you make any predictions.

My culture has a story in it that the Creator of the Universe is going to come down in the form of a man and destroy the world if people do too many things that are said to be bad by a certain book. There is no plausible way in which the process by which this meme has propagated can be explained by Bayesian updating on truth value.

I didn't mean 'similar'. I meant that it is equivalent to Bayesian updating with a lot of noise. The great thing about recursive Bayesian state estimation is that it can recover from noise by processing more data. Because of this, noisy Bayes is a strict subset of noise-free Bayes, meaning pure rationality is basically noise-free Bayesian updating. That idea contradicts the linked article claiming that rationality is somehow more than that.
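To make "recover from noise by processing more data" concrete, here is a minimal sketch (my own toy example; the coin setup and the 20% noise rate are assumptions, not from the comment). A grid posterior over a coin's bias still concentrates on the true value even though every observation passes through a corrupting channel, because the likelihood model accounts for the corruption:

```python
# Recursive Bayesian estimation of a coin's bias from noisy reports:
# each flip's outcome is reported incorrectly 20% of the time, yet the
# posterior mode still converges to the true bias.
import random

random.seed(0)
TRUE_BIAS = 0.7    # probability the coin lands heads
FLIP_NOISE = 0.2   # probability a report of the outcome is inverted

# Discretize the bias into a grid of hypotheses with a uniform prior.
grid = [i / 100 for i in range(101)]
posterior = [1 / len(grid)] * len(grid)

for _ in range(2000):
    heads = random.random() < TRUE_BIAS
    reported = heads if random.random() >= FLIP_NOISE else not heads
    # Likelihood of the *reported* value under each hypothesis b,
    # with the noisy channel folded in.
    likelihood = [
        (b * (1 - FLIP_NOISE) + (1 - b) * FLIP_NOISE) if reported
        else (b * FLIP_NOISE + (1 - b) * (1 - FLIP_NOISE))
        for b in grid
    ]
    posterior = [p * l for p, l in zip(posterior, likelihood)]
    total = sum(posterior)
    posterior = [p / total for p in posterior]

estimate = max(zip(posterior, grid))[1]
print(f"posterior mode: {estimate:.2f}")  # lands near the true 0.70
```

If the likelihood model ignored the noise instead, the mode would settle near the raw reported-heads rate of about 0.62 rather than 0.70; modeling the noise is what lets the updating see through it.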

There is no plausible way in which the process by which this meme has propagated can be explained by Bayesian updating on truth value.

An approximate Bayesian algorithm can temporarily get stuck in local minima like that. Remember also that the underlying criterion for updating is not truth, but reward maximization. It just happens to be the case that truth is extremely useful for reward maximization. Evolution did not manage to structure our species in a way that makes it obvious to us how to balance social, aesthetic, …, near-term, and long-term rewards to get a really good overall policy in our modern lives (or really in any human life beyond multiplying our genes in groups of people in the wilderness). Because of this, people get stuck all the time in conformity, envy, fear, etc., when there are actually ways of suppressing ancient reflexes and emotions to achieve much higher levels of overall and lasting happiness.

Let's taboo "equivalent".

In the limit of time and information, natural selection, memetic propagation, and Bayesian inference all converge on the same result. (Probably(?))

In reality, in observable timeframes, given realistic conditions, neither natural selection nor memetic propagation will converge on Bayesian inference; if you try to model evolution or memetic propagation with Bayesian inference, you will usually be badly wrong, and sometimes catastrophically so; if you expect to be able to extract something like a Bayes score by observing the movement of a meme or gene through a population, the numbers you extract will be badly inaccurate most of the time.

Both of the above are true. I think you are saying the first one, while I am focusing on the second one. Do you agree? If so, our disagreement is a boring semantic one.

the numbers you extract will be badly inaccurate most of the time

As is the case with a myopic view on any Bayesian inference process that involves a lot of noise. The question is just whether rationality is about removing the noise, or whether it is about something else; whether "rationality is more than ‘Bayesian updating’". I do not think we can answer this question very satisfyingly yet.

I tend to think what Cummings says is akin to saying something like: "Optimal evolution is not about adapting according to Bayes' rule, because look at just how complicated gene expression is! See, evolution works by stories encoded in G, A, C and T, and most of them get passed on even though they do not immediately help the individual!"

This is a particularly instructive article, worth in-depth study. Thanks for posting it.

Well I did get it from Reddit SSC, tbh I was surprised that it wasn't here already.

Anyway, appreciated.

Thanks for noting that, I found some more interesting discussion there (Linked for others' convenience).

I am uneasy about this link being here because Brexit was politics. I am not removing it yet.

Please keep it. "Politics is the mind-killer" mostly comes into play when debating which side is morally right, not when trying to figure out why one side won.

It comes into play quite a bit when talking about why one side won as well, since you keep seeing people say, "We didn't win because we weren't faithful enough to our principles," when it is obvious from the beginning that political parties will tend to lose because they are too faithful to their principles, i.e. not centrist enough.

That may be "obvious from the beginning" but it's far from clear to me that it's correct. Here are some reasons why.

  • The appeal of a political party to a given voter is not simply a matter of computing some measure of similarity between its principles and the voter's. Some of the other things it depends on -- e.g., how trustworthy the party's people seem, whether the party succeeds in arousing enthusiasm rather than mere consent, whether the party's statements are clear and vivid enough to get through to the voter -- may well favour more extreme positions.
  • To win elections, a party must not only get voters on side but also get them out of their houses and into the polling stations on election day. Again, this is a matter of enthusiasm as well as consent, and may favour more extreme positions.
  • In many countries, political success is not a matter of a simple nation-wide majority vote. There are constituencies and electoral colleges and the like. This means that political success may depend on identifying particular spatially-correlated groups of people and appealing to them, and there is no guarantee that this looks anything like appealing to the nationwide median voter.
  • When there are more than two candidates, or more than two parties, you can win by appealing to a reasonably-sized minority, and their preferences may be some way away from the "centre".
  • When multiple issues are at play, you can't just arrange parties on a linear scale and ask where the centre is. Aiming for the centre on every issue may result in every voter finding your party mediocre and preferring another party that's extreme according to their highest-priority issue, and success may depend on finding a bunch of specific issues and adopting specific (perhaps "extreme") positions on them. (A toy simulation after this list makes this concrete.)
    • It may be worth noting that one of Cummings's claims is exactly that looking at everything on a left/right axis and aiming for the centre is a big mistake and misunderstands what issues people are actually concerned about, and that many positions widely regarded as "extreme" in very different directions actually coexist in the minds of a large fraction of the electorate.
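Here is a toy spatial-voting simulation of that multi-issue point (entirely my own construction; the parties, numbers, and priority-issue voting rule are all hypothetical): voters are polarized on whichever issue they personally prioritize, and the party sitting at the population mean on both issues attracts essentially no votes.

```python
# Two issues, five parties. Voters cluster away from the centre on their
# highest-priority issue and vote for the party closest to them on that
# issue alone. The all-round centrist party sits at the population mean
# on both issues and still collects essentially nothing.
import random
from collections import Counter

random.seed(1)

# (name, position on issue 1, position on issue 2), all on a 0..1 scale
parties = [
    ("Centrist",    0.5, 0.5),
    ("Issue1-Pro",  0.9, 0.5),
    ("Issue1-Anti", 0.1, 0.5),
    ("Issue2-Pro",  0.5, 0.9),
    ("Issue2-Anti", 0.5, 0.1),
]

votes = Counter()
for _ in range(10_000):
    priority = random.choice([0, 1])  # the one issue this voter cares about
    strong = random.choice([0.1, 0.9]) + random.gauss(0, 0.05)  # polarized here
    mild = 0.5 + random.gauss(0, 0.1)  # near-centrist on the other issue
    pos = (strong, mild) if priority == 0 else (mild, strong)
    # Vote for the party closest on the voter's priority issue.
    winner = min(parties, key=lambda p: abs(p[1 + priority] - pos[priority]))
    votes[winner[0]] += 1

print(votes.most_common())  # the four "extreme" parties split the vote;
                            # Centrist gets essentially none
```

The numbers are arbitrary; the structural point is just that "centrist on every issue" and "closest to each voter on the issue that voter actually weighs" can come apart completely.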

To test this, go to a hyper-partisan news service that holds political views you disagree with but which also is trying to appeal to high IQ people. (The Weekly Standard if you are on the left would work.) You will find the website's policy analysis difficult to take, but will probably agree with, or at least find reasonable its analysis of why one side won or lost a particular political battle.

I feel that there is more than enough rationality-specific content that this link is appropriate. He talks about Superforecasters!

I am extremely uneasy about that being your basis for moderation if you act. The fine article is explicitly about applying rationalist knowledge to effect real world change. If that is not on topic, what are we doing here? Internet philosophy hour?

Genuine question: Did the Apolitical Guideline become an Apolitical Rule? Or have I always been mistaken about it being a guideline?

Always a guideline. I am still uneasy about the link being here, and would prefer to make it clear, rather than be silent.

Thanks for clarifying. It was easy for me to forget that as well as being a moderator, you're also just another user with a stake in what happens to LW.

Politics is not a mindkiller: people are usually mindkilled by politics. But if politics is discussed by people who are not mindkilled, why not keep it?

Found some other interesting blog posts by him: 1 2.

I suspect that in general big mistakes cause defeat much more often than excellent moves cause victory. There are some theoretical reasons to suspect this is true from recent statistical analysis of human and computer decisions in chess.

I wonder how much of life outcome (after accounting for genetics and your parents' wealth) is determined by your mistakes.

I wonder how much of life outcome (after accounting for genetics and your parents' wealth) is determined by your mistakes.

Life is horribly imbalanced; small mistakes can cause insanely disproportionate damage. It takes literally a few seconds of time and really bad luck to get killed or injured forever. I know a few people who have serious health problems originating with "when I was a small kid, I was doing [a perfectly innocent activity all kids do all the time], and at some moment I fell down and something broke, and at first everyone thought it would heal okay, but since then at random moments I keep feeling horrible pain in [a body part], and it's been like this for decades, and doctors have no idea how to fix it properly". Or, while it can take only a few minutes to get insights like "eating healthy food and exercising regularly should become one of my top priorities, because it makes life longer and more pleasant", you still have to take everyday actions for months and years to actually achieve this. And then, one unlucky fall may break your spine, and you may end up in a wheelchair forever.

One moment of depression is enough to commit suicide, but years of health care cannot cure cancer. Signing one bad contract can cost you a lot of money. It is easy to damage property, but more difficult to fix it. Etc. Even speaking of rationality, good ideas typically additionally require a lot of work, but bad ideas can ruin your life in a few minutes easily.

Sometimes there are opposite situations, for example one could spend years in an abusive relationship, and then end it in an afternoon. Or it may take a while to apply for a great job that requires skills you already happen to have. Making a good friend can significantly improve your life afterwards. -- But it still feels like these are rare exceptions, while the opportunities to ruin your life are there all the time, we just usually avoid them.

It could be interesting to look at one's own life, and try to classify things that had nontrivial impact, by two criteria: "good decision" vs "bad decision", and "one-time decision" vs "repeated decision". But there is a problem that "mistakes we didn't make" are quite invisible. For example, it would be easy to forget things like "not doing crime" or "not taking drugs" in the list of good repeated decisions, but it probably has a big impact. I am not making this list right now, because it would take too much time, but maybe I will do it later privately.

Related: Debiasing as Non-Self-Destruction

It seems to me that how to be smart varies widely between professions. (...) Yet such concepts as "be willing to admit you lost", or "policy debates should not appear one-sided", or "plan to overcome your flaws instead of just confessing them", seem like they could apply to many professions. And all this advice is not so much about how to be extraordinarily clever, as, rather, how to not be stupid. Each profession has its own way to be clever, but their ways of not being stupid have much more in common. And while victors may prefer to attribute victory to their own virtue, my small knowledge of history suggests that far more battles have been lost by stupidity than won by genius.

Debiasing is mostly not about how to be extraordinarily clever, but about how to not be stupid. Its great successes are disasters that do not materialize, defeats that never happen, mistakes that no one sees because they are not made. Often you can't even be sure that something would have gone wrong if you had not tried to debias yourself. You don't always see the bullet that doesn't hit you.

The great victories of debiasing are exactly the lottery tickets we didn't buy - the hopes and dreams we kept in the real world, instead of diverting them into infinitesimal probabilities. The triumphs of debiasing are cults not joined; optimistic assumptions rejected during planning; time not wasted on blind alleys. It is the art of non-self-destruction. Admittedly, none of this is spectacular enough to make the evening news.

An awesome reminder, thanks.

We had some technical problems with this linkpost; for some reason it started changing the link to point to itself instead of the article.

Please feel free to re-comment.