The Treacherous Path to Rationality

Cross-posted, as always, from Putanumonit.


Rats v. Plague

The Rationality community was never particularly focused on medicine or epidemiology. And yet, we basically got everything about COVID-19 right and did so months ahead of the majority of government officials, journalists, and supposed experts.

We started discussing the virus and raising the alarm in private back in January. By late February, as American health officials were almost unanimously downplaying the threat, we wrote posts on taking the disease seriously, buying masks, and preparing for quarantine.

Throughout March, the CDC was telling people not to wear masks and not to get tested unless displaying symptoms. At the same time, Rationalists were already covering every relevant angle, from asymptomatic transmission to the effect of viral load, to the credibility of the CDC itself. As despair and confusion reigned everywhere into the summer, Rationalists built online dashboards modeling nationwide responses and personal activity risk to let both governments and individuals make informed decisions.
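
To give a concrete sense of the arithmetic behind those dashboards: a tool like microcovid.org prices an activity by multiplying a handful of legible factors. Below is a minimal sketch of that style of calculation. Every function name and number in it is my own illustrative assumption, not the dashboards' actual parameters.

```python
# A minimal sketch of microCOVID-style risk arithmetic.
# Every numeric parameter below is an illustrative assumption,
# not a value taken from the actual microcovid.org model.

def activity_risk_microcovids(
    prevalence: float,   # assumed fraction of the local population currently infectious
    people: int,         # number of other people at the activity
    hours: float,        # duration of the activity
    hourly_rate: float = 0.06,     # assumed infection chance per person-hour of unmasked close contact
    mask_factor: float = 0.25,     # assumed multiplier when both parties wear masks
    outdoor_factor: float = 0.05,  # assumed multiplier for being outdoors
) -> float:
    """Rough infection risk in microCOVIDs (millionths of a COVID case)."""
    per_person = prevalence * hourly_rate * hours * mask_factor * outdoor_factor
    # Combine independent per-person risks; for small risks this is near-additive.
    total = 1 - (1 - per_person) ** people
    return total * 1_000_000

# Example: a two-hour masked outdoor picnic with four friends,
# assuming 0.5% of the local population is currently infectious.
print(round(activity_risk_microcovids(0.005, people=4, hours=2)))  # ~30
```

The point isn't the specific numbers; it's that multiplying a few explicit factors gives you a decision procedure you can inspect and argue with, which vibes do not.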

This remarkable success did not go unnoticed. Before he threatened to doxx Scott Alexander and triggered a shitstorm, New York Times reporter Cade Metz interviewed me and other Rationalists mostly about how we were ahead of the curve on COVID and what others can learn from us. I told him that Rationality has a simple message: “people can use explicit reason to figure things out, but they rarely do.”

If rationalists led the way in covering COVID-19, Vox brought up the rear

Rationalists have been working to promote the application of explicit reason, to “raise the sanity waterline” as it were, but with limited success. I wrote recently about success stories of rationalist improvement but I don’t think it inspired a rush to LessWrong. This post is in a way a response to my previous one. It’s about the obstacles preventing people from training and succeeding in the use of explicit reason, impediments I faced myself and saw others stumble over or turn back from. This post is a lot less sanguine about the sanity waterline’s prospects.

The Path

I recently chatted with Spencer Greenberg about teaching rationality. Spencer regularly publishes articles like 7 questions for deciding whether to trust your gut or 3 types of binary thinking you fall for. Reading him, you’d think that the main obstacle to pure reason ruling the land is lack of intellectual listicles on ways to overcome bias.

But we’ve been developing written and in-person curricula for improving your ability to reason for more than a decade. Spencer’s work is contributing to those curricula, an important task. And yet, I don’t think that people’s main failure point is in procuring educational material.

I think that people don’t want to use explicit reason. And if they want to, they fail. And if they start succeeding, they’re punished. And if they push on, they get scared. And if they gather their courage, they hurt themselves. And if they make it to the other side, their lives enriched and empowered by reason, they will forget the hard path they walked and will wonder incredulously why everyone else doesn’t try using reason for themselves.

This post is about that hard path.

The map is not the territory

 

Alternatives to Reason

What do I mean by explicit reason? I don’t refer merely to “System 2”, the brain’s slow, sequential, analytical, fully conscious, and effortful mode of cognition. I refer to the informed application of this type of thinking: gathering data with real effort to find out, crunching the numbers with a grasp of the math, modeling the world with testable predictions, reflecting on your thinking with an awareness of biases. Reason requires good inputs and a lot of effort.

The two main alternatives to explicit reason are intuition and social cognition.

Intuition, sometimes referred to as “System 1”, is the way your brain produces fast and automatic answers that you can’t explain. It’s how you catch a ball in flight, or get a person’s “vibe”. It’s how you tell at a glance the average length of the lines in the picture below but not the sum of their lengths. It’s what makes you fall for the laundry list of heuristics and biases that were the focus of LessWrong Rationality in the early days. Our intuition is shaped mostly by evolution and early childhood experiences.

Social cognition is the set of ideas, beliefs, and behaviors we employ to fit into, gain status in, or signal to groups of people. It’s often intuitive, but it also makes you ignore your intuition about line lengths and follow the crowd in conformity experiments. It’s often unconscious — the memes a person believes (or believes that they believe) for political expediency often just seem unquestionably true from the inside, even as they change and flow with the tides of group opinion.

Social cognition has been the main focus of Rationality in recent years, especially since the publication of The Elephant in the Brain. Social cognition is shaped by the people around you, the media you consume (especially when consumed with other people), and the prevailing norms.

Rationalists got COVID right by using explicit reason. We thought probabilistically, and so took the pandemic seriously when it was merely possible, not yet certain. We did the math on exponential growth. We read research papers ourselves, trusting that science is a matter of legible knowledge and not the secret language of elevated experts in lab coats. We noticed that what is fashionable to say about COVID doesn’t track well with what is useful for modeling and predicting COVID.
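
That math is simple enough to check in a few lines. A minimal sketch, assuming a thousand confirmed cases and a five-day doubling time (illustrative numbers, not actual early-2020 figures):

```python
import math

# Illustrative assumptions, not actual early-2020 figures:
cases = 1_000        # confirmed cases today
doubling_days = 5.0  # doubling time while spread is uncontrolled

# With steady doubling, reaching a target takes log2(target / current) doublings.
days_to_million = doubling_days * math.log2(1_000_000 / cases)
print(f"~{days_to_million:.0f} days to a million cases")  # ~50 days
```

Fifty days from "a thousand cases" to "a million cases" is the kind of result that makes "merely possible" worth acting on.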

On February 28th, famous nudger Cass Sunstein told everyone that the reason they’re “more scared about COVID than they have any reason to be” is the cognitive bias of probability neglect. He talked at length about university experiments with electric shocks and gambles, but neglected to calculate any actual probabilities regarding COVID.

While Sunstein was talking about the failures of intuition, he failed entirely due to social cognition. When the article was written, prepping for COVID was associated with low-status China-hating reactionaries. The social role of progressive academics writing in progressive media was to mock them, and the good professor obliged. In February people like Sunstein mocked people for worrying about COVID in general, in March they mocked them for buying masks, in April they mocked them for hydroxychloroquine, in May for going to the beach, in June for not wearing masks. When someone’s view of COVID is shaped mostly by how their tribe mocks the outgroup, that’s social cognition.

Underperformance Swamp

The reason that intuition and social cognition are so commonly relied on is that they often work. Simply doing what feels right is usually good enough in every domain you either trained for (like playing basketball) or evolved for (like recoiling from snakes). Doing what is normal and fashionable among your peers is good enough in every domain your culture has mastered over time (like cooking techniques). It’s certainly good for your own social standing, which is often the main thing you care about.

Explicit rationality outperformed both on COVID because responding to a pandemic in the information age is a very unusual case. It’s novel and complex, long on available data and short on trustworthy analysis, abutting on many spheres of life without being adequately addressed by any one of them. In most other areas reason does not have such an inherent advantage.

Many Rationalists have a background in one of the few other domains where explicit reason outperforms, such as engineering or the exact sciences. This gives them some training in its application, training that most people lack. Schools keep talking about imparting “critical thinking skills” to all students but can scarcely point to much success. One wonders if they’re really motivated to try — will a teacher really have an easier time with 30 individual critical thinkers rather than a class of password-memorizers?

Then there’s the fact that most people engaged enough to answer a LessWrong survey score in the top percentile on IQ tests and the SAT. Quibble as you may with those tests, insofar as they measure anything at all they measure the ability to solve problems using explicit reason. And that ability varies very widely among people.

And so most people who are newly inspired to solve their problems with explicit reason fail. Doubly so since most problems people are motivated to solve are complicated and intractable to System 2 alone: making friends, losing weight, building careers, improving mental health, getting laid. And so the first step on the path to rationality is dealing with rationality’s initial failure to outperform the alternatives.

Watch out for the alligators of rationalization camouflaged as logs of rationality

Sinkholes of Sneer

Whether someone gives up after their initial failure or perseveres to try again depends on many factors: their personality, context, social encouragement or discouragement. And society tends to be discouraging of people trying to reason things out for themselves.

As Zvi wrote, applying reason to a problem, even a simple thing such as doing more of what is already working, is an implicit accusation against everyone who didn’t try it. The mere attempt implies that you think those around you were too dumb to see a solution that required no gifts or revelations from higher authority, but mere thought.

The loudest sneers of discouragement come from those who tried reason for themselves, and failed, and gave up, and declared publicly that “reason” is a futile pursuit. Anyone who succeeds where they failed indicts not merely their intelligence but their courage.

Many years ago, Eliezer wrote about trying the Shangri-La diet, a strange method based on a novel theory of metabolic “set points” and flavor-calorie dissociation. Many previous casualties of fad diets scoffed at this attempt, not because they spotted a clear flaw in the Shangri-La theory, but because of Eliezer’s mere hubris in trying to outsmart dieting and lose weight without applying willpower.

Oh, you think you’re so much smarter? Well let me tell you…

A person who is just starting (and mostly failing) to apply explicit reason doesn’t have confidence in their ability, and is very vulnerable to social pressure. They are likely to persevere only in a “safe space” where attempting rationality is strongly endorsed and everything else is devalued. In most normal communities the social pressure against it is simply too strong.

This, I think, is the main purpose of LessWrong and the Rationalist community, and similar clubs throughout history and around the world. To outsiders it looks like a bunch of aspie nerds who severely undervalue tact, tradition, intuition, and politeness, building an awkward and exclusionary “ask culture”. They’re not entirely wrong. These norms are too skewed in favor of explicit reason to be ideal, and mature rationalists eventually shift to more “normie” norms with their friends. But the nerd norms are just skewed enough to push the aspiring rationalist to practice the craft of explicit reason, like a martial arts dojo.

Strange Status and Scary Memes

But not all is smooth sailing in the dojo, and the young rationalist must navigate strange status hierarchies and bewildering memeplexes. I’ve seen many people bounce off the Rationalist community over those two things.

On the status front, the rightful caliph of rationalists is Eliezer Yudkowsky, widely perceived outside the community to be brash, arrogant, and lacking charisma. Despite the fact of his caliphdom, arguing publicly with Eliezer is one of the highest-status things a rationalist can do, while merely citing him as an authority is disrespected.

People like Scott Alexander or Gwern Branwen are likewise admired despite many people not even knowing what they look like. Attributes that form the basis of many status hierarchies are heavily discounted: wealth, social grace, credentials, beauty, number of personal friends, physical shape, humor, adherence to a particular ideology. Instead, respect often flows from disreputable hobbies such as blogging.

I think that people often don’t realize that their discomfort with rationalists comes down to this. Every person cares deeply and instinctively about respect and their standing in a community. They are distressed by status hierarchies they don’t know how to navigate.

And I’m not even mentioning the strange sexual dynamics

And if that wasn’t enough, rationalists believe some really strange things. The sentence “AI may kill all humans in the next decade, but we could live forever if we outsmart it — or freeze our brains” is enough to send most people packing.

But even less outlandish ideas cause trouble. The creator of rationality’s most famous infohazard observed that any idea can be an infohazard to someone who derives utility or status from lying about it. Any idea can be hazardous to someone who lacks a solid epistemology to integrate it with.

In June a young woman filled out my hangout form, curious to learn more about rationality. She’s bright, scrupulously honest, and takes ideas very seriously, motivated to figure out how the world really works so that she can make it better. We spent hours and hours discussing every topic under the sun. I really liked her, and saw much to admire.

And then, three months later, she told me that she doesn’t want to spend time with me or any rationalists anymore because she picked up from us beliefs that cause her serious distress and anxiety.

This made me very sad, and also perplexed, since the specific ideas she mentioned seem quite benign to me. One is that IQ is real, in the sense that people differ in cognitive potential in a way that is hard to change as adults and that affects their potential to succeed in certain fields.

Another is that most discourse in politics and the culture war can be better understood as signaling, a way for people to gain acceptance and status in various tribes, than as behavior directly driven by an ideology. Hypocrisy is not an unusually damning charge, but the human default.

To me, these beliefs are entirely compatible with a normal life, a normal job, a wife, two guinea pigs, and many non-rationalist friends. At most, they make me stay away from pursuing cutting-edge academic mathematics (since I’m not smart enough) and from engaging in political flame wars on Facebook (since I’m smart enough). Most rationalists believe these to some extent, and we don’t find it particularly remarkable.

But my friend found these ideas destabilizing to her self-esteem, her conception of her friends and communities, even her basic values. It’s as if they knocked out the ideological scaffolding of her personal life and replaced it with something strange and unreliable and ominous. I worried that my friend shot right past the long path of rationality and into the valley of disintegration.

Valley of Disintegration

It has been observed that some young people appear to get worse at living and at thinking straight soon after learning about rationality, biases, etc. We call it the valley of bad rationality.

I think that the root cause of this downturn is people losing touch entirely with their intuition and social cognition, trying instead to make or justify every single decision with explicit reasoning. This may come from being overconfident in one’s reasoning ability after a few early successes, or from anger at all the unreasoned dogma and superstition one has to unlearn.

A common symptom of the valley is bucket errors, in which beliefs that don’t necessarily imply one another are entangled together. Bucket errors can cause extreme distress or make you flinch away from entire topics to protect yourself. I think this may have happened to my young friend.

My friend valued her job, and her politically progressive friends, and people in general, and making the world a better place. These may have become entangled, for example by thinking that she values her friends because their political activism is rapidly improving the world, or that she cares about people in general because they each have the potential to save the planet if they worked hard. Coming face to face with the ideas of innate ability and politics-as-signaling while holding on to these bucket errors could have resulted in a sense that her job is useless, that most people are useless, and that her friends are evil. Since those things are unthinkable, she flinched away.

Of course, one can find good explicit reasons to work hard at your job, socialize with your friends, and value each human as an individual, reasons that have little to do with grand scale world-improvement. But while this is useful to think about, it often just ends up pushing bucket errors into other dark corners of your epistemology.

People just like their friends. It simply feels right. It’s what everyone does. The way out of the valley is not to reject this impulse for lack of journal citations but to integrate your deep and sophisticated friend-liking mental machinery with your explicit rationality and everything else.

Don’t lose your head in the valley

The way to progress in rationality is not to use explicit reason to brute-force every problem but to use it to integrate all of your mental faculties: intuition, social cognition, language sense, embodied cognition, trusted authorities, visual processing… The place to start is with the ways of thinking that served you well before you stumbled onto a rationalist blog or some other gateway into a method and community of explicit reasoners.

This idea commonly goes by metarationality, although it’s certainly present in the original Sequences as well. It’s a good description for what the Center for Applied Rationality teaches — here’s an excellent post by one of CFAR’s founders about the valley and the (meta)rational way out.

Metarationality is a topic for more than two paragraphs, perhaps for an entire lifetime. I have risen out of the valley — my life is demonstrably better than before I discovered LessWrong — and the metarationalist climb is the path I see ahead of me.

And behind me, I see all of this.

So what to make of this tortuous path? If you’re reading this you are quite likely already on it, trying to figure out how to figure things out and dealing with the obstacles and frustrations. If you’re set on the goal, this post may offer some advice to help you on your way: try again after the early failures, ignore the sneers, find a community with good norms, and don’t let the memes scare you — it all adds up to normalcy in the end. Let reason be the instrument that sharpens your other instruments, not the only tool in your arsenal.

But the difficulty of the way is mostly one of motivation, not lack of instruction. Someone not inspired to rationality won’t become so by reading about the discouragement along the way.

And that’s OK.

People’s distaste for explicit reason is not a modern invention, and yet our species is doing OK and getting along. If the average person uses explicit reason only 1% of the time, the metarationalist learns that she may up that number to 3% or 5%, not 90%. Rationality doesn’t make one a member of a different species, or superior at all tasks.

The rationalists pwned COVID, and this may certainly inspire a few people to join the tribe. As for everyone else, it’s fine if this success merely raises our public stature a tiny bit, lets people see that weirdos obsessed with explicit reason have something to contribute. Hopefully it will make folk slightly more likely to listen to the next nerd trying to tell them something using words like “likelihood ratio” and “countersignaling”.

Because if you think that COVID was really scary and our society dealt with it really poorly — boy, have we got some more things to tell you.

Comments

Seriously, in what sense did rationalists "pwn covid"? Did they build businesses that could reliably survive a year of person-to-person contact being restricted across the planet? Did they successfully lobby governments to invest properly in pandemic response before anything happened? Did they invest in coronavirus vaccine research so that we had a vaccine ready before the pandemic started? Did they start a massive public information campaign that changed people's behaviour and stopped the disease from spreading exponentially? Did they all move to an island nation where they could continue life as normal by shutting the border? 

Honestly, it seems pretty distasteful to say that anyone 'pwned' a disease that has now killed over 1 million people, but on the face of it, it's also pretty ridiculous. So far as I can tell, a small handful of people divested a small amount of their stock portfolio, and a bunch of people wrote some articles about how the disease was likely to be a big deal, mostly around the time other people were also starting to come to the same conclusion. By late February it was probably already too late to start stockpiling for a quarantine without effectively tak... (read more)

FWIW I left a decent job that required regular air travel to deep red "COVID is a liberal hoax" areas of the US based heavily on content here.  I had alternatives lined up but I probably would've stuck it out otherwise and I think that would've been a mistake. 

Ben Pace:
Thanks for sharing this info. It's helpful for writers in the community to hear about these sorts of effects their writing has :)

We didn't get COVID, for starters. I live in NYC, where approximately 25% of the population got sick but no rationalists that I'm aware of did.

I'm actually confused by that response, and I don't think it's really part of your best attempt to explain what you meant by 'rationalists pwned covid'. I'll try to explain why I'm unimpressed with that response below, but I think we're in danger of getting into a sort of 'point-scoring' talking past each other. Obviously there were a few rhetorical flourishes in my original response, but I think the biggest part of what I'm trying to say is that the actual personal benefits to most people of being ahead of the curve on thinking about the pandemic were pretty minimal, and I think avoiding infection would fall in that 'minimal benefit' bucket for most of us. 

I think we can be a bit more concrete - I think the actual personal benefits to either you or me of being aware of what was happening with COVID slightly before everyone else were pretty minimal. I really liked your article from February, and I really think the points you were making about conformity bias are probably the strongest part of your argument that rationality has practical uses, but you pretty much said yourself in that post that the actual, practical benefits were not that big: 

"Aside from selling the equit... (read more)

Third, as I said above, it’s a pretty low bar. If you’re rich enough (and don’t work at a hospital), avoiding personally getting infected is relatively straightforward, and while obviously it has some benefits, I don’t think it would be enough of an incentive to convince me to take on a whole new worldview.

My personal experience is consistent with this take, for what it’s worth. I think that “rationalists didn’t get COVID” is indeed mostly due to substantially higher average income (perhaps not even among ‘rationalists’ but specifically among Jacob’s friends/acquaintances).

Ben Pace:
Something about that seems plausible to me. I'll think on it more...

The best startup people were similarly early, and I respect them a lot for that. If you know of another community or person that publicly said the straightforward and true things in public back in February, I am interested to know who they are and what other surprising claims they make.

I do know a lot of rationalists who put together solid projects and have done some fairly useful things in response to the pandemic – like epidemicforecasting.org and microcovid.org, and Zvi's and Sarah C's writing, and the LW covid links database, and I heard that Median group did a bunch of useful things, and so on. Your comment makes me think I should make a full list somewhere to highlight the work they've all done, even if they weren't successful.

I wouldn't myself say we've pwned covid, I'd say some longer and more complicated thing by default that points to our many flaws while highlighting our strengths. I do think our collective epistemic process was superior to that of most other communities, in that we spoke about it plainly (simulacra level 1) in public in January/February, and many of us worked on relevant projects.

I didn't really see much public discussion early outside of epidemiology Twitter. I'm married to an epidemiologist who stocked our flat with masks in December when there were 59 confirmed cases in Wuhan, and we bought enough tins of food to eat for a few weeks in January, as well as upgrading our work-from-home set up before things sold out. Although I completely failed to make the connection and move my pension out of equities, that doesn't actually seem to have cost me very much in the long run (for those keeping count, S&P 500 is up 17% year-on-year).

(The biggest very early warning sign, apparently, was that when there were 59 cases China was still claiming there was no person-to-person transmission, which seemed implausible). 

I actually am impressed by how well the Lesswrong-sphere did epistemically. People here seem to have been taking COVID seriously before most other people were, but as I've tried to explain a bit more above, I'm not sure how much good this did anyone personally. If the argument is 'listen to rationalists when they say weird things because they're more often right when they say weird things than most other people', then I think I'm on board. If the argument is 'try explicit rationality, it will make your life noticeably better in measurable ways', then I'm less convinced, and I think these really are distinct claims.

PS - your links seem to be broken; it's easy enough to follow them, as you gave full URLs. Just thought I'd let you know.

Good on your spouse! Very impressed. 

(Also, I don't get the S&P being up so much, am generally pretty confused by that, and updated further that I don't know how to get information out of the stock market.)

I think epistemics is indeed the first metric I care about for LessWrongers. If we had ignored covid or been confident it was not a big deal, I would now feel pretty doomy about us, but I do think we did indeed do quite well on it. I could talk about how we discussed masks, precautions, microcovids, long-lasting respiratory issues, and so on, but I don't feel like going on at length about it right now. Thanks for saying what you said there.

Now, I don't think you/others should update on this a ton, and perhaps we can do a survey to check, but my suspicion is that LWers and Rationalists have gotten covid way, way less than the baseline. Like, maybe an order of magnitude less. I know family who got it, I know whole other communities who got it, but I know hundreds of rationalists and I know so few cases among them.

Of my extended circle of rationalist friends, I know of one person who got it, and this was due to them living in a different community with different epistemic s... (read more)

Sherrinford:
Hey Ben, given that you are able to keep track of 100s of friends and acquaintances, and assuming that you also have lots of other friends and acquaintances who are not rationalists but similar in other respects (probably: young; high income and education; jobs that can be transformed to remote jobs if they aren't already; not too uncomfortable with staying at home because they do not spend every weekend in a soccer stadium or dancing all night?): How large do you estimate the differential impact of "being rationalist" to be?
Ben Pace:
It's a good question. I'll see if I can write a reply in the next few days...

There are two halves to an options trade: when to buy and when to sell. Wei Dai did well on the first half but updated in the comments on losing most of the gains on the second half. This isn't a criticism, it's hard.

Sherrinford:
Is "some of us" more than Wei Dai? Because it seems to me that only Wei Dai is mentioned as an example but it is implied that more people profited - not only by you, but in general when I see that claim.
Ben Pace:
(I know of 1-2 other examples where people did something like double their net wealth.)
Linch:
I know someone who ~5x'd. 

I'd stress the idea here that finding a "solution" to the pandemic is easy and preventing it early on based on evidence also is.

Most people could implement a solution better than those currently affecting the US and Europe, if they were a global tsar with infinite power.

But solving the coordination problems involved in implementing that solution is hard; that's the part that needs solving, and nobody is closer to a solution there.

voxelate:
I saw the supply chain disruptions coming and made final preparations for it, I saw layoffs coming in my aviation-related job so I updated my resume, took a good severance package, and found a new, remote-based job with significantly higher pay. And yes, I also significantly re-balanced my portfolio and took advantage of the crash early this year. In all, I expect about 40% additional income/unrealized gains this year compared to last. To me that's more than minimal. Rationalists that were paying attention got the first chance of understanding the implications and making moves (big or small) before a mass of people finally took it seriously in the US. I'll admit part of it is certainly luck since I can't really time the market or precisely know how gov't policies and actions will affect my stocks. It's also hard to know how much of that on my part was explicit reason; I was certainly reading up on the literature about it, but there was not a ton of data. I did use some social cognition based on the Chinese response, under the presumption that they knew more about it since it originated there. I don't think the COVID response is even the best measure to judge the benefits of being rational, it's just one part of it. If you want to solve problems, you have to be rational... being irrational is a bad way to solve problems.
leonidasmith:
Thank you for this comment. I have very mixed feelings about this website. On the one hand, it's got interesting articles and reading HPMOR was very entertaining. But, on the other, there are so many people who are just writing posts that are embarrassingly the opposite of what they claim they are all about: self-consciousness, in particular. "Me and my friends are rational, that is, all that is right and correct, by definition, and we call ourselves Rationalists." Can't you see the paradox in this claim? And yet I thought people here didn't find Spock rational, and correctly assessed that his character is "defined" as such by the story, but it fails to fulfill itself. As a human being you will always fail. The website is called "less wrong", for a random divinity's sake. It should be about striving to be less wrong while admitting we cannot avoid failure, not about jerking each other off about how rational and better we are than others. And yet... In general I am highly suspicious of any lover of truth who will willingly call themselves a sophist: for a Rationalist, there is little praise as high as being a Rationalist, and therefore calling yourself and the people who agree with you by this name is very much patting your own back. Especially when using it to criticize and compare yourself to people you disagree with. Having said this, there are interesting things in this post too. I'm not saying it's completely bad, but a lot of the language and framing here leads me to think there is also a lot of unnecessary arrogance. I just wanna say: more actual thinking, less implicit self-praise.

I think this post is doing a simplification which is common in our community, and at some point we need to acknowledge the missing nuance there. The implicit assumption is that rationality is obviously always much better than believing whatever is socially expedient, and that everyone who rejects rationality is just making a foolish error. In truth, there are reasons we evolved to believe whatever is socially expedient[1], and these reasons are still relevant today. Specifically, this is a mechanism for facilitating cooperation (which IMO can be given a rational, game-theoretic explanation). Moreover, it seems likely that for most people, during most of history, this strategy was the right choice.

IMO there are two major reasons why in these times rationality is the superior strategy, at least for the type of people drawn to LessWrong and in some parts of the world. First, the stakes are enormous. The freedom we enjoy in the developed world, and the pace of technological progress create many opportunities for large gains, from founding startups to literally saving the world from destruction. Given such stakes, the returns on better reasoning are large. Second, we can afford the cost. Beca... (read more)

IMO there are two major reasons why in these times rationality is the superior strategy, at least for the type of people drawn to LessWrong and in some parts of the world.

A third reason is that believing in whatever is socially expedient works much better when the socially expedient beliefs have been selected to be generally adaptive. The hunter-gatherer environment didn't change much and culture had plenty of time to be selected for generally beneficial beliefs, but that's not the case for today's beliefs:

The trouble with our world is that it is changing. Henrich focuses on small scale societies. These societies are not static. The changes they undergo are often drastic. But the distance between the life-style of a forager today and that of her ancestors five hundred years ago pales next to the gap that yawns between the average city-slicker and her ancestors five centuries past. Consider the implications of what demographers call the "demographic transition model:"

Each stage in the model presents a different sort of society than that which came before it. Very basic social and economic questions—including subsistence strategy, family type, mechanisms for mate selection, a

... (read more)
Ben Pace:
Upvoted, it's also correct to ask whether taking this route is 'worth it'. I am skeptical of "Moreover, it seems likely that for most people, during most of history, this strategy was the right choice." Remember that half of all humans existed after 1309. In 1561 Francis Bacon was born, the man who invented the founding philosophy and infrastructure of science. So already it was incredibly valuable to restructure your mind to track reality and take directed global-scale long-term action. And plausibly it was so before then as well. I remember being surprised reading Vaniver's account of Xunzi in 300 BC, where Vaniver said:
Vanessa Kosoy:
Francis Bacon's father was a successful politician and a knight. Bacon was born into an extremely privileged position in the world, and wasn't typical by any margin. Moreover, ey were, quoting Wikipedia, a "devout Anglican", so ey only went that far in eir rationality.

If I, a rationalist atheist, was in Francis Bacon's shoes I would 100% live my life in such a way that history books would record me as being a "devout Anglican". 

Vanessa Kosoy:
Sure. But, in order to lie without the risk of being caught, you need to simulate the person who actually is a devout Anglican. And the easiest way to do that is, having your conscious self actually be a devout Anglican. Which can be a rational strategy, but which isn't the thing we call "rationality" in this context. Another thing is, we can speak of two levels of rationality: "individual" and "collective". In individual rationality, our conscious beliefs are accurate but we keep them secret from others. In collective rationality, we have a community of people with accurate conscious beliefs who communicate them with each other. The social cost of collective rationality is greater, but the potential benefits are also greater, as they are compounded through collective truth-seeking and cooperation.
Ben Pace:
This isn't much of an update to me. It's like if you told me that a hacker broke out of the simulation, and I responded that it isn't that surprising they did because they went to Harvard. The fact that someone did it all is the primary and massive update that it was feasible and that this level of win was attainable for humans at that time if they were smart and determined.
Vanessa Kosoy:
We're discussing the question of whether for most people in the past, rationality was a strategy inferior to having a domain where conscious beliefs are socially expedient rather than accurate. You gave Francis Bacon as a counterexample. I pointed out that, first, Bacon was atypical along the very axes that I claim make rationality the superior choice today (having more opportunities and depending less on others). This weakens Bacon's example as evidence against my overall thesis. Second, Bacon actually did maintain socially expedient beliefs (religion, although I'm sure it's not the only one). There is a spectrum between average-Jane-strategy and "maximal" self-honesty, and Bacon certainly did not go all the way towards maximal self-honesty.
Ben Pace:
I think the thing I want here is a better analysis of the tradeoff and when to take it (according to one's inside view), rather than something like an outside view account that says "probably don't". (And you are indeed contributing to understanding that tradeoff, your first comment indeed gives two major reasons, but it still feels to me true to say about many people in history and not just people today.) Suppose we plot "All people alive" on the x-axis, and "Probability you should do rationality on your inside view" on the y-axis. Here are two opinions one could have about people during the time of Bacon: "Some people should do rationality, and most people shouldn't." "Some people should not think about it, and some people should consider it regularly and sometimes do it." I want to express something more like the second one than the first.
Vladimir_Nesov:
Instead of doing a better thing, one might do the more integrity-signaling thing, or pursue scholarship, or maximize personal wealth. Expecting an assumption about what's better relies on the framing of human pursuits as better-things-seeking.
Vanessa Kosoy:
By "better" I mean "better in terms of the preferences of the individual" (however, we also constantly self-deceive about what our preferences actually are).
Vladimir_Nesov:
But if a person pursues something for reasons other than considering it the better thing, then the concept of "better" is useless for explaining their behavior. It might help with changing their behavior, if they might come to be motivated by the concept of "better", and form an understanding of what that might be. Before that happens, there is a risk of confusing the current pursuit (revealed preference) with a nascent explicitly conceptualized preference (the concept of "better") that's probably very different and might grow to fill the role of their pursuit if the person decides to change for the better (losing integrity/scholarly zeal/wealth/etc.).
Vanessa Kosoy:
Hmm, I think we might be talking past each other for some reason. IMO people have approximately coherent preferences (that do explain their behavior), but they don't coincide with what we consciously consider "good", mostly because we self-deceive about preferences for game theory reasons.
Vladimir_Nesov:
The distinction between observed behavior (preferences that do explain behavior) and endorsed preference (a construction of reason not necessarily derived from observation of behavior) is actionable. It's not just a matter of terminology (where preference is redefined to be whatever observed behavior seems to seek) or hypocrisy (where endorsed preference is public relations babble not directly involved in determining behavior). Both senses of "preference" can be coherent. But endorsed preference can start getting increasingly involved in determining the purposes of observed behavior, and plotting how this is to happen requires keeping the distinction clear.
Vanessa Kosoy:
I think that the "endorsed" preference mostly affects behavior only because of the need to keep up the pretense. But also, I'm not sure how your claim is related to my original comment?
Vladimir_Nesov:
Humans can be spontaneous (including in the direction of gradual change). It's possible to decide to do an unreasonable thing unrelated to revealed preference or previous activity. Thus the need to keep up the pretense is not a necessary ingredient of the relationship between behavior and endorsed preference. It's possible to start out an engineer, then change behavior to pursuit of musical skill, all the while endorsing (but not effecting) promotion of communism as the most valuable activity. Or else the behavior might have changed to pursuit of promotion of communism. There is no clear recipe to these things, only clear ingredients that shouldn't be mixed up. The statement in the original comment framed pursuit of rationality skills as pursuit of things that are better. This seems to substitute endorsed preference (things that are better) for revealed preference (actual pursuit of rationality skills). As I understand this, it's not necessary to consider an actual pursuit a good thing, but it's also prudent to keep track of what counts as a good thing, as it might one day influence behavior.
Vanessa Kosoy:
IMO going from engineer to musician is not a change of preferences, only a change of the strategy you follow to satisfy those preferences. Therefore, the question is, is rationality a good strategy for satisfying the preferences you are already trying to satisfy.
Vladimir_Nesov:
I would say about a person for whom this is accurate that they didn't really care about engineering, or then music. But there are different people who do care about engineering, or about music. There is a difference between the people who should be described as only changing their strategy, and those who change their purpose. I was referring to the latter, as an example analogous to changing one's revealed preference to one's endorsed preference, without being beholden to any overarching ambient preference satisfied by either.
Vanessa Kosoy:
IMO such "change of purpose" doesn't really exist. Some changes happen with aging, some changes might be caused by drugs or diet, but I don't think conscious reasoning can cause it.

(I don't think outsiders are more leery of the community because of its "ask culture" than of its "talk over people culture". It's something I have a problem with as a meetup organiser. Rats come fully prepared to sweep the floor, regardless of what happens to lie there.)

Ben Pace:
Many of the best of us sure are disagreeable and forthright!

And how good are the best of us at bringing cookies and tea and just putting things on the table before we start disagreeing?

"Tact" is a field you build for the disagreeing to be around the issues that people truly just don't agree about, not a bullet to be shot at an opponent someone wants to be charitable to.

And how good are the best of us at bringing cookies and tea and just putting things on the table before we start disagreeing?

This is one of the things that drove me away from casual in-person “rationalist community” gatherings. My habit when getting together with my friends is to bring some cookies (or something along these lines); my friends usually also contribute something. So the first several times I came to small gatherings of rationalist-type folks, I indeed brought (homemade!) cookies for everyone.

It turned out that (a) I was the only one who ever thought to bring any such thing (even after the first time), and (b) while everyone else was clearly happy to eat the cookies, not only did no one ever thank me for bringing them, but no one even commented on them or acknowledged in any way that I’d brought said cookies.

So, I stopped bringing cookies, and then stopped coming to such gatherings.

Vladimir_Nesov:
I would've ignored the fact of there being cookies, as I wouldn't want to support the norm of bringing cookies (I don't care about there being cookies, and it would be annoying to be expected to bring generalized cookies), but I would've also intentionally avoided eating them (not participating in a norm goes both ways). So the claim that everyone ate the cookies seems surprising. There should be an option to register disapproval of a norm that wouldn't be seen as nonspecific rudeness.

This sounds like a strategic misstep, and I'm guessing it was caused either by a hyperalert status manager in your brain or a bad experience at the hands of a bully (intentional or otherwise) in the past.

I estimate that (prepare for uncharitable phrasing) asking anyone with your mindset to try to self-modify to be okay with other people taking steps to make everyone happier in this way is a smaller cost than a norm of "don't bring [cookies], rationalists will turn around and blame everyone who didn't bring them if you dare".

But yeah I think spending points to teach people not to defect against a bring-cookies-if-you-wanna norm (aka thank them, aka don't look askance at the but-I-don't-wanna) is waaay better than spending points to disallow a bring-cookies-if-you-wanna norm.

try to self-modify to be okay with

I'm okay with other people supporting norms that I don't support, and with following a norm that I don't support, if it happens to be accepted in a group. But there should be freedom to register disapproval of a norm, even when it ends up accepted (let alone in this case, where it apparently wasn't accepted). There is no call to self-modify anyone.

What felt annoying to me and triggered this subthread was that in Said's story there were only people who supported the norm he appeared to be promoting, and people who preyed on the commons. Disapproval of the norm was not a possibility, on pain of being bundled together with the defectors. This issue seems to me more important than the question of which norm is the right one for that setting (that is, which norm should have been supported).

SarahSrinivasan:
That's fair. There are definitely norms I think help overall (or situationally help) that I wish didn't help overall because I don't like them. For example tolerance of late arrivals. I hate it, and also if we didn't tolerate it my most valuable group would never have existed.
Vladimir_Nesov:
That's strategic voting as opposed to voting-as-survey. What if nobody wants cookies, but most people vote for them in expectation that others would appreciate them? Voting-as-survey should be able to sort this out, but strategic voting suffers from confirmation bias. Everyone is bringing cookies, so apparently people like them. But with strategic voting this is begging the question, there might have been no attempt to falsify the assumption. Thus I don't even see how it can be clear whether the cookies norm is better for the group that the no-cookies norm, and so whether the strategic vote should support the cookies. (In the case of cookies specifically, getting eaten is some sort of survey, but in general strategic voting breeds confusion.)
Said Achmiz:
What norm do you think I was (or appeared to be) promoting?
Vladimir_Nesov:
The norm that everyone should occasionally sell some food for status. (I understand that many groups like the activity, in which case it's a good norm. Personally I don't like eating at social gatherings, or food-derived status, or being confused for a defector, so I don't like there being a norm like that.)

There's a clever trick to this effect. You can say thank you for others' sake without eating! Wouldn't that just throw a spanner into their Machiavellian calculations on who owes whom.

Said Achmiz:
You can hardly simultaneously describe the relevant dynamic as “selling food for status” and admit that many people/groups enjoy sharing food at social gatherings; these are mutually inconsistent characterizations. ETA: It goes almost without saying that “sell some food for status” is an unnecessarily tendentious description, all by itself…
Vladimir_Nesov:
Huh? Where is the contradiction? Giving status for things you appreciate is enjoyable, as well as receiving status for a good deed. Not to mention all the delicious food generated by presence of the norm. It's clearly selling because not paying for the food (with a generalized "thank you" and possibly reciprocal participation in the norm) is defection. But there is nothing wrong with a good market!

"Everyone should occasionally sell some food for status" is not what's being discussed. Your phrasing sounds as though Said said everyone was supposed to bring cookies or something, which is obviously not what he said.

What's being discussed is more like "people should be rewarded for making small but costly contributions to the group". Cookies in-and-of-themselves aren't contributing directly to the group members becoming stronger rationalists, but (as well as just being a kind gift) it's a signal that someone is saying "I like this group, and I'm willing to invest basic resources into improving it". 

If such small signals are ignored, it is reasonable to update that people aren't tracking contributions very much, and decide that it's not worth putting in more of your time and effort.

I agree with the more general point about importance of tracking and rewarding contributions, but in this subthread I was specifically discussing cookies and difficulties with graciously expressing my lack of appreciation for them.

rewarded for making small but costly contributions

There is nothing good about contributions being costly. With signaling, the cost should pay for communication of important things that can't otherwise be communicated, because incentives don't allow trust; here that piece of critical intelligence would be possession of cooking skill and caring about the group. The cost is probably less than the cost of time spent in the meeting, so the additional signal is weak. If you like cooking, the cost might actually be negative. If you are not poor, the signal from store-bought food is approximately zero. (As signaling is about a situation without trust, it's not the thought that counts. I'm not saying that signaling is appropriate here, I'm considering the hypothetical where we are engaged in signaling for whatever reason.)

And it should actually matter whether the contributions are appreciated. So I guess it's possible that there is a difference in how people respond to costly signals, compared to useful contributions of indeterminate cost.

Said Achmiz:
I’m sorry, but this is a ridiculous claim.
Vladimir_Nesov:
Once upon a time, I liked programming. Time spent not programming was uncomfortable, and any opportunity to involve programming with other activities was welcome. If I could program some cookies for a meetup, I would describe the cost of that as negative. Thus by analogy I'm guessing that a person who similarly likes cooking would perceive the cost of cooking (not counting the price of ingredients) as negative. Maybe I liked programming to a ridiculous degree?
ioannes:
In folksier terms, what's being discussed is rationalists' often-strange relationship to common courtesy (i.e. Lindy social dynamics).
Said Achmiz:
Just so.
Mary Chernyshenko:
(Not sure where this fits in the thread or if it does, so - sorry for offtop. At least one of ours has contracted the virus, AFAIK. He told me after we had talked for a bit about other business; I asked him to comment on something and he said sure, he'd have done it sooner but for covid... I have offered our local LW people to help pay for testing if anybody needs it, without any additional questions or conclusions. So far nobody has asked for it and I do hope this means something good, like "we're mostly healthy and have money" and not something bad, like "we would have asked for help but it's not done". Even to be able to offer anything meaningfully, I need people "to bring cookies".)
Said Achmiz:
I would’ve hoped that the use of ‘everyone’ in this context would be clearly enough slightly-hyperbolic to avoid this sort of misunderstanding… This happened years ago, and I don’t have perfect recall of past events. Even at the time, I could not assert with confidence that literally every single person present at each of these events ate the cookies. (Indeed, a priori such a claim seems unlikely; surely at least one person was on a diet? Diabetic? Vegan? Lactose-intolerant? Not a fan of oatmeal / chocolate chip / whatever? A claim that literally everyone ate the cookies should be surprising for reasons entirely unrelated to any social norms!) The cookies were eaten—that’s the point. Not long into each gathering, all the cookies (or other sweets; I think I may’ve brought brownies once) were gone. The majority of the other attendees seemed happy to eat them. These things, I can say with as great a confidence as I have in recollection of any other years-past event. As for your main point… I sympathize with being placed in the unpleasant situation of disapproving of a social norm that others are promulgating with good intentions. (Clearly, I disagree with you on the subject of this particular norm; what’s more, it seems to me that you are rather misinterpreting what the intended/desired norm is, in this case. I don’t know if you’d still disapprove of the actual norm I have in mind, properly understood… if so, our disagreement deepens, as I think that rejection of the norm in question, and those like it, is corrosive to any would-be community. But all of this is beside the point.) But there are ways of handling such situations that contribute to social cohesion, and ways that detract from it. In my experience, in most more or less casual social circles (whether they be centered around a workplace, group activity, or anything else), most people have little or no skill at cooking/baking. If one person does have such skill, and (for whatever occasion may warrant it—b
Vladimir_Nesov:
A norm you might've intended is not part of the decision problem, if what people observe is only the cookies not accompanied by an essay detailing the intended norm. I'm still not sure what response you endorse for those who disapprove of what the norm appears to be (other than explicitly engaging in a discussion). I wasn't literal with "everyone" either. The point is that in your recollection you've rounded down to zero the number of people who might've tried to respectfully decline (in the most straightforward way) the norm you appeared to be pushing.
Said Achmiz:
Respectfully, I think you are missing my point in a quite comprehensive way. Perhaps others might weigh in on whether what I have said is clear (and, of course, whether they agree, etc.) I will refrain from further attempts at explanation until then.
Vladimir_Nesov:
Clearly I wasn't engaging your point, I was clarifying my own point instead. So I don't see how it would be evident whether I was missing your point or not. There are these defectors, and for any reasonable person whose reaction to a cookie is to explicitly conceptualize the social consequences of possible responses to being presented with it, it should be clear that silently eating the cookie and not otherwise responding in any way is defection. There are groups where a different response is prevalent, though probably for reasons other than higher propensity for consideration of social consequences of their actions or different results of that consideration. Because of these hypotheses where apparent cooperation follows for obscure reasons, and apparent defection follows from seeing a cookie as just food, I don't see how lack of apparent cooperation leads to any clear conclusions. (As an example of a point I chose not to engage.)
abramdemski:
Wow, that sucks. I predict that my local meetup (in Los Angeles) would have been loudly thankful of your bringing cookies. I don't especially predict that anyone would have reciprocated, though.
Ben Pace:
In as much as my comment matters here, I'm sorry about that Said :/
Mary Chernyshenko:
I'm sorry to hear that. FWIW, I think you should not have brought homemade cookies; it means "something personal". In a way, having good but bought food is easier to cooperate on. I don't mean it was your fault that the others took advantage of your generosity (that would be stupid), just that sometimes we should try the easier way first. I was thinking about storing some generic cookies at our place as a fall-back option and telling people they can bring stuff to have "a more interesting table". And maybe once in a while collecting a little money to replenish the stores. If someone new comes in and finds homemade food, it's kind of awkward, and new people join in at times.
4Said Achmiz
I tried that, too, as it happens. Would you care to guess what the result was?

Our results were: 1) bought and forgot to eat; 2) bought enough for people who came there from home but too little for people who came from work; 3) bought and ate; 4) forgot to buy; 5) got so hungry we had to stop talking (about food) and send a guy out for sandwiches; 6) bought and saw that someone brought their own food, too, so we had to redistribute the leftovers... I mean, we are just great at this planning thing...

...but if you're asking me to guess, I'd wager people said it's really just not worth the bother.

5Said Achmiz
What actually happened was exactly the same as what happened with the homemade cookies: people ate the food, without ever in any way acknowledging that I had brought it (thanking me wasn’t even on the radar); no one else ever brought anything.
1Mary Chernyshenko
Tough...

If you've ever been to a CFAR workshop, you're aware of just how good rationalists are at bringing forth the greatest snacks known to mankind :)

(Will edit to reply to the other part of your comment in a bit, in a meeting now.)

Thank you, that instills hope. I've got to send c. 75% of them to CFAR and we're golden.

(Sorry about carrying on like this, I'm a bit mad right now. I have just lost a very nice and thoughtful introverted person (who is also an LW follower of several years) from our local online discussion group, and nobody even noticed. I went to her discussion group last year; it was a dream come true, although not strictly LW-themed. And now she tried us out and withdrew. We didn't even get to disagree about anything.)

But seriously, a post on Meetup Food sometime before people can meet offline might be a good idea!

I'm sorry you lost that person from your discussion group. (PM'd you, I'd be interested to hear more about your group, and chat about how to cause cool people to reliably stick around.)

3Mary Chernyshenko
Oh, thank you a lot, that'd be lovely!

Much of this thread is long-time rationalists talking about the experience of new people like me. Here's my experience as someone who found rationality a year ago; it bears more closely on the question than the comments of outliers. I read the sequences, then applied rat ideas to dating, and my experience closely resembles Jacobian's model. Note that LW has little dating advice, so I did the research and application myself. I couldn't just borrow techniques; I had to apply rationality[^1]. My experience is evidence that rationality is improving our outcomes.

I picked up The Sequences in February 2020 on a recommendation from 80k. I read Yud's sequences cover to cover. Their value was immediately obvious to me, and I read deeply.

I finished the sequences in May and immediately started applying them to my problems. My goal was not to look cool or gain status on a weird blog. I just wanted to make my life better, and The Sequences gave me a sense that more was possible.

Improving my romantic life has been my greatest rationality project. Dating was a hard part of my life. After The Sequences I realized most dating advice rested on Fake Explanations, anti-reductionism, just-world bias, an... (read more)

The Rationality community [...] has been the main focus of Rationality [...] rationality's most famous infohazard [...] join the tribe [bolding mine]

I agree that explicit reasoning is powerful and that the lesswrong.com website has hosted a lot of useful information about COVID-19, but this self-congratulatory reification of "the community"—identifying "rationality" (!!) with this particular cluster of people who read each other's blogs—is super toxic. (Talk about "bucket errors"!) Our little robot cult does not have a monopoly on reason itself!

the young rationalist must navigate strange status hierarchies and bewildering memeplexes. I've seen many people bounce off the Rationalist community over those two things.

Great! If bright young people read and understand the Sequences and go on to apply the core ideas (Bayesian reasoning, belief as anticipated experience, the real reasons being the ones that compute your decision, &c.) somewhere else, far away from the idiosyncratic status hierarchy of our idiosyncratic robot cult, that's a good thing. Because it is about the ideas, not just roping in more warm bodies to join the tribe, right?!

Rationality has benefits for the individual, but there are additional enormous benefits that can be reaped if you have many people doing rationality together, building on each other's ideas. Moreover, ideally this group of people should, beyond being the sum of its individuals, also have a set of norms that are conducive to collective truth-seeking. Furthermore, the relationships between them shouldn't be purely impersonal and intellectual. Any group endeavor benefits from emotional connections and mutual support. Why? First, to be capable of working on anything, you need to be able to satisfy your other human needs. Second, emotional connections are the machinery we have for building trust and cooperation, and that's something no amount of rationality can replace, as long as we're humans.

Put all of those things together and you get a "tribe". Sure, tribes also carry dangers such as death spirals and other toxic dynamics. But the solution isn't disbanding the tribe; that's throwing the baby out with the bathwater. The solution is doing the hard work of establishing norms that make the tribe productive and beneficial.

Sure, tribes also carry dangers such as death spirals and other toxic dynamics. But the solution isn't disbanding the tribe; that's throwing the baby out with the bathwater.

I think we need to be really careful with this, and the dangers of becoming a "tribe" shouldn't be understated w.r.t. our goals. In a community focused on promoting explicit reason, it becomes far more difficult to tell apart those who are carrying out social cognition from those who are actually carrying out explicit reason, since the object-level beliefs and justifications of those doing social cognition and those using explicit reason will be almost identical. Likewise, it becomes much easier to slip back into the social-cognition mode of thought while still telling yourself that you're still reasoning.

IMO, if we don't take additional precautions, this makes us really vulnerable to the dynamics described here. Doubly so the second we begin to rack up any kind of power, influence, or status. Initially everything looks good and everyone around you seems to be making their way along The Path™. But slowly you build up a mass of people who all agree with you on the object level but who acquired their c... (read more)

The problems you discuss are real, but I don't understand what alternative you're defending. The choice is not between having society and not having society; you are going to be part of some society anyway. So, isn't it better if it's a society of rationalists? Or do you advocate isolating yourself from everyone as much as possible? I really doubt that is a good strategy.

In practice, I think LessWrong has been pretty good at establishing norms that promote reason, and building some kind of community around them. It's far from perfect, but it's quite good compared to most other communities IMO. In fact, I think the community is one of the main benefits of LessWrong. Having such a community makes it much easier to adopt rational reasoning without becoming completely isolated due to your idiosyncratic beliefs.

So full disclosure, I'm on the outskirts of the rationality community looking inwards. My view of the situation is mostly filtered through what I've picked up online rather than in person.

With that said, in my mind the alternative is to keep the community more digital, or something you go to meetups for, and to take advantage of society's existing infrastructure for social support and other things. This is not to say we shouldn't have strong norms; the comment box I'm typing this in is reminding me of many of those norms right now. But the overall effect is that rationalists end up more diffuse, with little in common beyond the shared desire for whatever it is we happen to be optimizing for. This is in contrast to building something more like a rationalist community/village, where we create stronger interpersonal bonds and rely on each other for support.

The reason I say this is because, as I understood it, the rationalist community (at least the truth-seeking side) came out of a largely online culture, where disagreement is (relatively) cheap and individuals in the group don't have much obvious leverage over one another. That environment seems to have been really good for allowing p... (read more)

I think you are both right about important things, and the problem is whether we can design a community that can draw the benefits of mutual support in real life while minimising the risks. Keeping each other at internet distance is a solution, but I strongly believe it is far from the best we can do.

We probably need to accept that different people will have different preferences about how strongly involved they want to become in real life. For some people, internet debate may be the optimal level of involvement. For other people, it would be something more like the Dragon Army. Others will want something in between, and probably with emphasis on different things, e.g. more about projects and less about social interaction versus more about social interaction and less about projects. (Here, social interaction is my shortcut for solving everyday problems faced by individual people where they are now, as opposed to having a coherent outside-oriented project.)

But with different levels of involvement, there is a risk that people on some level would declare people on a different level to be "not true rationalists". (Those with low involvement are not true rationalists, because they only wan... (read more)

5Hazard
This feels like an incredibly important point: the pressures when "the rationalists" are friends you debate with online vs. when they are a close community you are dependent on.
3Vanessa Kosoy
First, when Jacob wrote "join the tribe", I don't think ey had anything as specific as a rationalist village in mind? Your model fits the bill as well, IMO. So what you're saying here doesn't seem like an argument against my objection to Zack's objection to Jacob. Second, specifically regarding Crocker's rules, I'm not a fan of them at all. I think that you can be honest and tactful at the same time, and it's reasonable to expect the same from other people. Third, sure, social and economic dependencies can create problems, but what about your social and economic dependencies on non-rationalists? I do agree that dilution is a real danger (if not necessarily an insurmountable one). I will probably never have the chance to live in a rationalist village, so for me the question is mostly academic. To me, a rationalist village sounds like a good idea in expectation (for some possible executions), but the uncertainty is great. However, why not experiment? Some rationalists can try having their own village; many others wouldn't join them anyway. We would see what comes out of it, and learn.
4FactorialCode
I'm breaking this into a separate thread since I think it's a separate topic. So, I disagree. Obviously you can't impose Crocker's rules on others, but I find it much easier and far less mentally taxing to communicate with people I don't expect to get offended. Likewise, I've gained a great deal of benefit from people very straightforwardly and bluntly calling me out when I'm dropping the ball, and I don't think they would have bothered otherwise, since there was no obvious way to be tactful about it. I also think that there are individuals out there who are both smart and easily offended, and with those individuals tact isn't really an option, as they can transparently see what you're trying to say and will take issue with it anyway. I can see the value of "getting offended" when everyone is sorta operating on simulacra level 3 and factual statements are actually group policy bids. However, when it comes to forming accurate beliefs, "getting offended" strikes me as counterproductive, and I do my best to operate in a mode where I don't do it, which is basically Crocker's rules.
5Vanessa Kosoy
This might be another difference of personalities; maybe Crocker's rules make sense for some people. The problem is, different people have conflicting interests. If we all had the same utility function then, sure, communication would be only about conveying factual information. But we don't. In order to cooperate, we need not only to share information but also to reassure each other that we are trustworthy and not planning to defect. If someone criticizes me in a way that disregards tact, it leads me to suspect that eir agenda is not helping me but undermining my status in the group. You can say we shouldn't do that, that's "simulacra" and simulacra=bad. But the game theory is real, and you can't just magic it away by wishing it were different. You can try just taking on faith that everyone is your ally, but then you'll get exploited by defectors. Or you can try to come up with a different set of norms that solves the problem. But that can't be Crocker's rules, at least it can't be only Crocker's rules. Now, obviously you can go too far in the other direction and stop conveying meaningful criticism, or start dancing around facts that need to be faced. That's also bad. But the optimum is in the middle, at least for most people.
2FactorialCode
So first of all, I think the dynamics surrounding offense are tripartite. You have the party who said something offensive, the party who gets offended, and the party who judges the others involved based on the remark. Furthermore, the reason why simulacra=bad in general is that the underlying truth is irrelevant. Without extra social machinery, there's no way to distinguish between valid criticism and slander. Offense and slander are both symmetric weapons.

I think that's a big part of it. Especially IRL, I've taken quite a few steps over the course of years to mitigate the trust issues you bring up in the first place, and I rely on social circles with norms that mitigate the downsides of Crocker's rules. A good combination of integrity + documentation + choice of allies makes it difficult to criticize someone legitimately. To an extent, I try to make my actions align with the values of the people I associate myself with, I keep good records of what I do, and I check that the people I need either put effort into forming accurate beliefs or won't judge me regardless of how they see me. Then, when criticism is levelled against myself and/or my group, I can usually challenge it by encouraging relevant third parties to look more closely at the underlying reality, usually by directly arguing against what was stated. That way I can ward off a lot of criticism without compromising as much on truth-seeking, provided there isn't a sea change in the values of my peers. This has the added benefit that it allows me and my peers to hold each other accountable to take actions that promote each other's values.

The other thing I'm doing, which is both far easier to pull off and way more effective, is just to be anonymous. When the judging party can't retaliate because they don't know you IRL, and the people calling the shots on the site respect privacy and have very permissive posting norms, who cares what people say about you? You can take and dish out all the criticism you wan... (read more)
2FactorialCode
So my objection definitely applies much more to a village than to less tightly bound communities, and Jacob could have been referring to anything along that spectrum. But I brought it up because you said: This is where the objection begins to apply. The more interdependent the group becomes, the more susceptible it is to the issues I brought up. I don't think it's a big deal in an online community, especially with pseudonyms, but I think we need to be careful when you get to more IRL communities. With a village, treating it like an experiment is a good first step, but I'd definitely be in the group that wouldn't join unless explicit thought had been put in to deal with my objections, or the village had been running successfully for long enough that I became convinced I was wrong. So in this case individual rationalists can still be undermined by their social networks, but there are a few reasons this is a more robust model. 1) You can have a dual identity. In my case, most of the people I interact with don't know what a rationalist is; I either introduce someone to the ideas here without referencing this place, or I introduce them to this place after I've vetted them. This makes it harder for social networks to put pressure on you or undermine you. 2) A group failure of rationality is far less likely to occur when it requires affecting social networks in New York, SF, Singapore, Northern Canada, Russia, etc., than when you just need to influence a single social network.
2Vanessa Kosoy
Hmm, at this point it might be just a difference of personalities, but to me what you're saying sounds like "if you don't eat, you can't get food poisoning". "Dual identity" doesn't work for me; I feel that social connections are meaningless if I can't be upfront about myself. I guess? But in any case there will be many subnetworks in the network. Even if everyone adopts the "village" model, there will be many such villages.
2FactorialCode
That's probably a good part of it. I have no problem hiding a good chunk of my thoughts and views from people I don't completely trust, and for most practical intents and purposes I'm quite a bit more "myself" online than IRL. I think that's easier said than done, and that a great effort needs to be made to deal with the effects that come with having redundancy amongst villages/networks. Off the top of my head, you need to ward against having one of the communities implode after its best members leave for another. Likewise, even if you do keep redundancy in rationalist communities, you need to ensure that there's a mechanism that prevents them from seeing each other as out-groups or attacking each other when they do. This is especially important since one group viewing the other as an out-group, but not vice versa, can lead to the group with the larger in-group getting exploited.

I think the point is to vigilantly keep track of the distinction between skills and tribes, to avoid any ambiguity in use of these different and opposed things, to never mention one in place of the other.

Skills and tribes are certainly different things; I'm not sure why they are opposed things. We should keep track of the distinction and at the same time continue building a beneficial tribe. I agree that, in terms of terminology, "rationalist" is a terrible name for "member of the LessWrong-ish community" and we should use something else (e.g. LessWronger).

They are opposed in the sense that using one in place of the other causes trouble. For example, insisting on meticulous observation of skills would be annoying and sometimes counterproductive in a tribe, and letting tribal dynamics dictate how skills are developed would corrode quality.

8Vanessa Kosoy
A tribe shouldn't insist on a meticulous observation of skills, broadly speaking, but it should impose norms on e.g. which rhetorical moves are encouraged/discouraged in a discussion, and it should create positive incentives for the meticulous observation of skills. As to letting tribal dynamics dictate how skills are developed, I think we don't really have a choice there. People are social animals, and everything they do and think is strongly affected by the society they are in. The only choice is trying to shape this society and those dynamics to make them beneficial rather than detrimental.

This might be possible, but it should be specific to particular groups, unless there is a recipe for reproducing the norms. It's very easy for any set of beneficial norms to be trampled by tribal dynamics. The standard story is loss of fidelity, with people who care about the mission somewhat less, or who are not as capable of incarnating its purpose, coming to dominate a movement. At that point, observation of the beneficial norms turns into a cargo cult.

Thus the phenomenon of tribes seeks to destroy the phenomenon of skills. This applies to any nuanced purpose, even when it's the founding purpose of a tribe. Survival of a purpose requires an explanation, which won't be generic tribal dynamics or a set of norms helpful in the short term.

everything they do and think is strongly affected by the society

A skill-aspected tribe uses its norms to police how you pursue skills. Tribes whose identity is unrelated to the pursuit of those same skills won't affect this activity strongly.

4Vanessa Kosoy
I don't think it's "the phenomenon of tribes", I think it's a phenomenon of tribes. Humans virtually always occupy one tribe or another, so it makes no more sense to say that "tribes destroy skills" than, for example, "DNA destroys skills". There is no tribeless counterfactual we can compare to. I think any tribe affects how you pursue skills by determining which skills are rewarded (or punished), and which skills you have room to exercise.
7Ben Pace
It is definitely the case, especially in the EA community, that I'm surrounded by a lot more people who express alliance via signaling and make nontrivial commitments, but for whom I've not seen real evidence that they understand how to think for themselves or take right action without a high-status person telling them to do it. That said, I don't find it too hard myself to distinguish between such people and people where I can say "Yeah, I've seen them do real things".
6Kenny
Music isn't the sole domain of people that are particularly interested in it either, but it doesn't seem "super toxic" that they might consider themselves to be, let alone refer to themselves as, 'music people'. It seems like a natural shorthand given that that is the topic or subject around which they've organized. And yes, it is – mostly – about the ideas. I've only been to a few meetups and generally prefer to read along and occasionally comment, but I'm open to 'joining the tribe' (or some 'band' close by) too, because it is nice to be able to socialize with people that think similarly and about the same topics. The examples in the post about people bouncing off the community also seemed to be cases where they were bouncing off the ideas too.

The point is, the analogy fails because there is no "music people tribe" with "music meetups" organized at "MoreMusical.com". There is no Eliezer Yudkowsky of the "music tribe" (at most, everyone who appreciates Western classical music has heard of Beethoven, maybe), nor the idea that people familiar with the main ideas of music learned them from a small handful of "music sequences" and interconnected resources that reference each other.

Picking at one particular point in the OP: there are no weird sexual dynamics of music (some localized groups or cultures might have them, e.g. one could talk about the sexual culture in rock music in general, and maybe the dynamics at a particular scene, but these are not central to the pursuit of all of music, and even at the local level the culture is often very diffuse).

Music is widespread. There are several cultures of music that intersect with the wider society: no particular societal group has any claim of monopoly on teaching the appreciation or practice of music. There is so much music that there are economies of music. There are many academies, even more teachers, and an untold number of people with varying expertise in playing instruments who apply them for... (read more)

2Kenny
Yes, there is no single 'music people tribe', but there are very much tribes for specific music (sub-)genres. (Music is huge!) But as you point out, there are people of 'similar' stature in music generally; really, much greater stature overall. And 'music' is much much much older than 'rationality'. (Music is older than history!) And I'd guess it's inherently more interesting to many many more people too.

I don't consider 'the sequences' or LW to be essential, especially now. The same insights are available from a lot of sources already, and this should be more true in the future. It was, and perhaps is, a really good intro to what wasn't previously a particularly coherent subject. Actual 'rationality' is everywhere. There was just no one persistently pointing at all of the common phenomena, or at least not recently and in a way that's accessible to (some) 'laypeople'. But I wouldn't be surprised if there is something like a 'music sequences', e.g. a standard music textbook. I'd imagine 'music theory' or music pedagogy are in fact "interconnected resources that reference each other". Again, if it wasn't already clear, the LW sequences are NOT essential for rationality.

There's no weird "sexual dynamics" in rationality – based on MY experience. I don't know why the people that publicly write about that thing must define everyone else that's part of the overall network. I certainly don't consider any of it central to rationality. I don't even know that "weird sexual dynamics" is a common feature of LW meetups, let alone other 'rationality'-related associations.

Rationality, in the LW sense, could be all of these things. At least give it a few hundred years! Music is old. And no one has a monopoly on rationality. If anything, LW-style rationality is competing with everything else; almost everything else is implicitly claiming to help you either believe truths or act effectively. I agree! We should definitely try to become 'background knowledge' or at least as... (read more)
4Aaro Salosensaari
I do not feel like writing a point-by-point response; it seems we are in agreement over many issues, but maybe not all. Some paragraph-sized points I want to elaborate on, however:

1. If it is not clear, in my comment I attempted not to argue against your positions in particular. It was more in support of the idea expressed upthread that building too much of an attitude of there being an identifiable "Rationality Tribe" is a net negative. (1b. Negative both to the objective of raising the general societal sanity waterline and to the tribespeople's own ability at it. Especially, I feel the point – can't find the link to the comment with my phone – that in a close-knit society where many opinions obtained by explicit thought are expressed, it can become difficult to disentangle which of my individual opinions I have obtained by my own explicit thought and agree with because I agree with the logic, and which opinions I am agreeing with because my social mind wants to agree or disagree with some specific individuals or a "group consensus".)

2. One of the reasons I picked the sexual dynamics is because the OP mentions it in a figure caption as a joke. Nevertheless, it is an indication that, at least in the OP, the Tribe in question is not thought of as existing in e.g. some abstract idea space, but as a specific group of people living near each other enough to have sexual dynamics.

3. I find myself disagreeing with the idea that rationality-in-general (in contrast with the LW-originated social group) is a new innovation. In near-history perspective, the first example that comes to mind: John Allen Paulos published Innumeracy in 1988; I read it as a kid in the 00s when I had no internet and LW did not exist, but it tickled the same parts of my brain as many ideas about putting numbers on arguments floating in LW-adjacent thoughtspace. In long-term history perspective, I'd make an argument that the attempt at improving human ability at rational thought is part of the grand scientific project and tradition that g... (read more)

The signaling commons is full of noise of the form 'do this thing and you'll win.' What do costly signals look like here? Many of the traditional ones have fallen apart as things are changing so fast that no one listens to older folks who could in the past have told you the outcomes of different strategies.

5ioannes
NXIVM had much recruiting success by training people on techniques that actually helped them quickly solve their present problems. (NXIVM is a deeply problematic organization which contained a secret cult and in many ways should not be emulated.)

The Rationality community was never particularly focused on medicine or epidemiology. And yet, we basically got everything about COVID-19 right and did so months ahead of the majority of government officials, journalists, and supposed experts.

...

We started discussing the virus and raising the alarm in private back in January. By late February, as American health officials were almost unanimously downplaying the threat, we wrote posts on taking the disease seriously, buying masks, and preparing for quarantine.

...

The rationalists pwned COVID

This isn't true. We did see it coming more clearly than most of the governmental authorities, and we were certainly ahead of public risk communication, but on average we were fairly similar to, or even a bit behind, the actual domain experts.

This article summarizes interviews with epidemiologists on when they first realized COVID-19 was going to be a huge catastrophe and how they reacted. The dates range from January 15th, with the majority in mid-to-late February. See also this tweet from late February, from a modeller working with the UK's SAGE, confirming he thought uncontrolled spread was taking place.

I have an email dated 27 Feb 2020 replying to a colleague:

... (read more)
5snog toddgrass
Thanks for this well-researched comment. I believe you that the experts rationalize their behavior like so. The problem is that underselling a growing emergency was a terrible advocacy plan. Maybe it covered their asses, but it screwed over their stakeholders by giving us less time to prepare. Their argument really proves too much; for example, the Wuhan provincial government could also use it to justify the disastrous coverup.
5Jacob Falkovich
Yes, really smart domain experts were smarter and earlier but, as you said, they mostly kept it to themselves. Indeed, the first rationalists picked up COVID worry from private or unpublicized communication with domain experts, did the math and sanity checks, and started spreading the word. We did well on COVID not by outsmarting domain experts, but by coordinating publicly on what domain experts (especially any with government affiliations) kept private.

I remember this post very fondly. I often thought back to it, and it inspired some thoughts of my own about rationality (which I had trouble writing down and which are waiting in a draft to be written out fully some day). I haven't used any of the phrases introduced here (Underperformance Swamp, Sinkholes of Sneer, Valley of Disintegration...), and I'm not sure whether that was the intention.

The post starts with the claim that rationalists "basically got everything about COVID-19 right and did so months ahead of the majority of government officials, journalists, and supposed experts". Since it's not the point of the post I won't review this claim in depth, but it seems basically true to me. Elizabeth's review here gives a few examples.

This post is about the difficulty and even danger in becoming a rationalist, or more generally, in using explicit reasoning (Intuition and Social Cognition being the alternatives).

The first difficulty is that explicit reasoning alone often fails to outperform intuition and social cognition where those perform well. I think this is true, and as the rationality community evolved it came to appreciate intuition and social cognition more, without devaluing explicit re... (read more)

Gosh, I love this post immediately. Thanks for saying all of these things. I don't know why nobody said them all at once before. I expect to link this to friends a bunch in the future.

Curated. A lot of this was valuable, simple-language discussion of rationality, and the difficulties and costs associated with trying to become more rational. I expect this will inform my discussion of rationality going forward, and I'll likely link to it a lot. Furthermore, it's a very well put-together post. The images are great, the names are catchy, and it's very readable. I also found valuable much of the discussion under the post.

There were somewhat navel-gazing and narrative-building elements at the start that I'm not interested in curating, and for... (read more)

The Rationality community was never particularly focused on medicine or epidemiology. And yet, we basically got everything about COVID-19 right and did so months ahead of the majority of government officials, journalists, and supposed experts.

Based on anecdotal reports, I'm not convinced that rationalist social media early on was substantially better than educated Chinese social media. I'm also not convinced that I would rather have had rationalists in charge of the South Korean or Taiwanese responses than the actual people on the ground.

It's probable that this... (read more)

I was behind the curve on COVID, but the Seattle Rationalists are my tribe, so the social cognition I got from them had me leading the pack among the other groups I interact with (i.e. pushing my company to start work-from-home earlier, and getting extended family to cancel large family events).

Many ideas in this post can be broadened to paradigm shifts in general. Painters and mystics traverse similar obstacles.

Taking the pandemic seriously was not local to the rationalist community. Many people, including my father, began to take the pandemic seriously in late January. He avoided any major travel and largely remained at home. He began to wear masks from early March, when cases were few. This is in India, not the USA.

JJC10

In my experience, most people hear/see/think/believe what they want to hear/see/think/believe. Searching for validation is a lot more fun than seeking an uncomfortable counterfactual, exposing possible weaknesses or errors in your thinking and, thus, your position. It’s humbling and hard work to want to know if you’re wrong, but for some it’s the only satisfying path. Looking forward to being a little less wrong tomorrow...

I'll be honest, I almost stopped reading when you said "Throughout March, the CDC was telling people not to wear masks and not to get tested unless displaying symptoms" as an example of how they got it wrong.

The reality is they did not encourage people to buy masks initially because the very credible concern was that the public would hoard masks that were in short supply for people who absolutely needed them immediately. As soon as supplies were available, they recommended masks for the public.

And similarly, the shortage of testing drove the ve... (read more)
