Introduction:

The (highly interrelated) effective altruist and Rationalist communities are very small on a global scale. Therefore, in general, most intelligence, skill and expertise is outside of the community, not within it.

I don’t think many people will disagree with this statement. But sometimes it’s worth reminding people of the obvious, and also it is worth quantifying and visualizing the obvious, to get a proper feel for the scale of the difference. I think some people are acting like they have absorbed this point, and some people definitely are not.

In this post, I will try to estimate the size of these communities. I will compare how many smart people are in the community versus outside it, do the same for people in a few professions, and then go into controversial mode and offer some advice that I think follows from fully absorbing these results.

How big is EA and rationalism

To compare EA/rats against the general population, we need to get a sense of the size of each community. Note that EA and Rationalism are different, but highly related and overlapping communities, with EA often turning to rationalism for its intellectual base and beliefs.

It is impossible to give an exact answer to the size of each community, because nobody has a set definition of who counts as “in” the community. This gets even more complicated if you factor in the frequently used “rationalist adjacent” and “EA adjacent” self labels. For the sake of this article, I’m going to grab a pile of different estimates from a number of sources, and then just arbitrarily pick a number.

Recently, JWS estimated the size of EA via the number of “Giving What We Can” pledgers, at 8,983. Not everyone who is in EA signs the pledge. But on the flip side, not everyone who signs the pledge is actively involved in EA.

A survey series from 2019 tried to answer this same question, estimating a total of 2,315 engaged EAs and 6,500 active EAs in the community, with 1,500 active on the EA Forum.

Interestingly, there has also been an estimate of how many people had even heard of effective altruism, finding 6.7% of the US adult population, and 7.9% of top university students, had heard of EA. I would assume this has gone up since then due to various scandals.

We can also look at the subscriber counts of various subreddits, although clicking a “subscribe” button once is not the same thing as being a community member. It also doesn’t account for people who leave. On the other hand, not everyone in a community joins its subreddit.

r/effectivealtruism: 27k

r/slatestarcodex: 64k

r/lesswrong: 7.5k

The highest count here is r/slatestarcodex; however, this is probably inflated by the several-year period when the subreddit hosted a “culture war” thread for debate on heated topics, which attracted plenty of non-rationalists. Interestingly, the anti-rationalist subreddit r/sneerclub had 19k subscribers, making it one of the most active communities related to rationalism before the mods shut it down.

The (unofficial?) EA discord server has around 2000 members.

We can also turn to surveys:

The LessWrong survey in 2023 had only 588 respondents, of whom 218 had attended a meetup.

The EA survey in 2022 had 3567 respondents.

The Astral Codex Ten survey for 2024 had 5,981 respondents, of whom 1,174 were confirmed paid subscribers and 1,392 had been to an SSC meetup.

When asked what they identified as, 2,536 people said they were “sort of” LessWrong-identified, while only 736 stated it definitively. Similarly, 1,908 people said they were “sort of” EA-identified, while 716 identified as such without qualification. I would expect large overlap between the two groups.

These surveys provide good lower bounds for community size, but they tell us less about the total number. The latter two surveys were well publicised in their respective communities, so I would assume they caught a decent chunk of each community, especially the more dedicated members. One article estimated a 40% survey response rate when looking at EA orgs.

The highest estimate I could find would be the number of Twitter followers of LessWrong founder Eliezer Yudkowsky, at 178 thousand. I think it would be a mistake to claim this as a large fraction of the number of rationalists: a lot of people follow a lot of people, and not every follower is actually a real person. Steven Pinker, for example, has 850 thousand followers, with no large-scale social movement at his back.

I think this makes the community quite small. For like-for-like comparisons, the largest subreddit (r/slatestarcodex at 64k) is much smaller than r/fuckcars, a community for people who hate cars and car-centric design, with 443k users. Eliezer has half as many Twitter followers as this random leftist with a hippo avatar. The EA discord has the same number of members as the discord for this lowbrow Australian comedy podcast.

Overall, going through these numbers, I would put the number at about 10,000 people who are sorta in the EA or rationalist community, but I’m going to round this all the way up to 20,000 to be generous and allow for people who dip in and out. If you only look at dedicated members, the number is much lower; if you include anybody who has ever interacted with EA, it would be much higher.

Using this generous number of 20000, we can look at the relative size of the community.

Looking at the whole planet, with roughly 8 billion people, for every one person in this community there are about four hundred thousand people outside it. If we restrict this to the developed world (where almost everybody in EA comes from), this becomes 1 in 60,000 people.

If we go to just America, we first have to estimate what percentage of the community is in the US. The surveys yield 35% in the EA survey, 50% in the LessWrong survey, and 57% in the Astral Codex Ten survey. For the purpose of this post, I’ll put it at 50% (it will not significantly affect the results). So, in the US, the community represents about 1 in 33,000 people.
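Spelled out, the arithmetic behind those ratios is trivial. Here is a rough sketch in Python (the population figures are round numbers I’m assuming, not precise counts):

community = 20_000                 # generous estimate of the EA/rationalist community
world = 8_000_000_000              # rough world population
developed_world = 1_200_000_000    # rough developed-world population
us = 330_000_000                   # rough US population
us_share = 0.5                     # assumed fraction of the community living in the US

print(f"World: 1 in {world / community:,.0f}")                      # ~1 in 400,000
print(f"Developed world: 1 in {developed_world / community:,.0f}")  # ~1 in 60,000
print(f"US: 1 in {us / (community * us_share):,.0f}")               # ~1 in 33,000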

What about smart people?

Okay, that’s everyone in general. But what if we’re only interested in smart people?

First off, I want to caveat that “IQ” and “intelligence” are not the same thing. Equating the two is a classic example of the streetlight effect: IQ scores are relatively stable and so you can use them for studies, whereas intelligence is a more nebulous, poorly defined concept. If you score low on IQ tests, you are not doomed, and if you score highly on IQ tests, that does not make you smart in the colloquial sense: over the years I have met many people with high test scores that I would colloquially call idiots. See my MENSA post for further discussion on this front.

However, IQ is easier to work with for statistics, so for the sake of this section, let’s pretend to be IQ chauvinists who only care about people with high IQs.

This seems like it would benefit the rationalist community a lot. The rationalist community is known for reporting very high IQ scores on surveys, with medians more than two SDs above average. This has been the source of some debate; Scott Alexander weighed in and gave his guess as an average of 128, which would put the median rationalist at roughly MENSA level. This still feels wrong to me: if they’re so smart, where are the Nobel laureates? The famous physicists? And why does arguing on LessWrong make me feel like banging my head against the wall? However, these exact same objections apply to MENSA itself, which has an even higher average IQ. So perhaps they really are that smart, and it’s just that high IQ scores, on their own, are not all that useful.

Let’s take the 20,000 or so rationalists/EAs, take the 128 median IQ estimate at face value, and compare them to the general population of the developed world (roughly 1.2 billion people), assuming an average IQ of 100. We will assume a normal distribution with an SD of 15 for both groups. Obviously, this is a rough approximation. Now we can put together a histogram of how many people are at each IQ level, restricting our view to very smart people with an IQ above 130:
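If you want to check the numbers yourself, here is a minimal Python sketch of the calculation (the exact tail counts are quite sensitive to the assumed community mean, SD and size, so treat them as rough):

import numpy as np
from scipy.stats import norm

iqs = np.arange(130, 161)   # IQ levels of interest (130 and up)

# Expected head-count around each IQ level: group size times the normal density.
# These two arrays are the values the histograms plot.
general_pop = 1.2e9 * norm.pdf(iqs, loc=100, scale=15)   # developed world, mean 100
community = 20_000 * norm.pdf(iqs, loc=128, scale=15)    # EA/rationalists, assumed mean 128

# Totals above the 130 cutoff, via the survival function (1 - CDF).
print(f"General population above 130: {1.2e9 * norm.sf(130, loc=100, scale=15):,.0f}")
print(f"Community members above 130:  {20_000 * norm.sf(130, loc=128, scale=15):,.0f}")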

I promise you I didn’t forget the rationalists! To find them, let’s zoom in by a factor of one thousand:

In numbers, the community has roughly 1 in 10,000 of the MENSA-level geniuses in the developed world. Not bad, but greatly drowned out by the general public.

Just for fun, let’s compare this to another group of devoted fanatics: Swifties.

A poll estimated that 22% of the US population identify as Taylor Swift fans, which is 66 million people. Let’s classify the top 10% most devoted fans as the “Swifties”, which would be 6.6 million in the US alone. The Eras Tour was attended by 4.35 million people, and that was famously hard to get tickets for, so this seems reasonable. Let’s say they have the same IQ distribution as the general population, and compare them to the US rationalist/EA community:
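A rough sketch of this comparison, under the same normal-distribution assumptions as before (so take the exact numbers lightly):

from scipy.stats import norm

swifties = 6_600_000   # assumed top 10% most devoted of 66 million US fans
us_rats = 10_000       # US EA/rationalists, from the earlier estimate

genius_swifties = swifties * norm.sf(130, loc=100, scale=15)   # ~150,000
genius_rats = us_rats * norm.sf(130, loc=128, scale=15)        # ~4,500
print(genius_swifties / genius_rats)   # roughly 30 (about 34 with these exact inputs)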

The Swifties still win on sheer numbers. According to this (very rough) analysis, we would expect roughly 30 times as many genius Swifties as genius rationalists. If only this significantly greater intellectual heft could be harnessed for good! Taylor, if you’re reading this, it’s time to start dropping unsolved mathematical problems as album easter eggs.

Note that I restricted this to the developed world for easier comparison, but there are significantly more smart people in the world at large.

Share of skilled professionals

Ok, let’s drop the IQ stuff now, and turn to skilled professionals.

The LessWrong survey conveniently has a breakdown of respondents by profession:

Computers (Practical): 183, 34.8%
Computers (AI): 82, 15.6%
Computers (Other academic): 32, 6.1%
Engineering: 29, 5.5%
Mathematics: 29, 5.5%
Finance/Economics: 22, 4.2%
Physics: 17, 3.2%
Business: 14, 2.7%
Other “Social Science”: 11, 2.1%
Biology: 10, 1.9%
Medicine: 10, 1.9%
Philosophy: 10, 1.9%
Law: 9, 1.7%
Art: 7, 1.3%
Psychology: 6, 1.1%
Statistics: 6, 1.1%
Other: 49, 9.3%

I’ll note that this was from a relatively small sample, but it was similar enough to the ACX survey that it seems reasonable. We will use the earlier figure of 10,000 rationalists/EAs in the US, and assume the percentages in the survey hold for the entire community.

I’ll look at three different professions: law, physics, and software.

Physicists:

The total number of physicists in the US is about 21,000; it’s quite a small community. The share of physicists in the surveys, 3.2%, while low, is much higher than in the general population (0.006%), perhaps as a result of the community having lots of nerds and a pop-science focus.

We will use the estimate of 10,000 rationalists/EAs in the US. 3.2% of this is 320 physicists. This seems kind of high to me, but remember that I’m being generous with my definition here: the number who are actively engaged on a regular basis is likely to be much lower (for example, only 17 physicists answered the LessWrong survey). It may also be that people studied physics but then went on to do other work, which is pretty common.

Despite all of that, this is still a small slice of the physicist population.

Lawyers:

The number of lawyers in the US is roughly 1.3 million, or 0.4% of the population. In rationalism/EA, this number is higher, at 1.7%. I think basically all white collar professions are overrepresented due to the nerdy nature of the community.

Going by our estimate, the number of lawyers in the US rationalist/EA community is about 170. Graphing this against the 1.3 million lawyers:

The lawyers in the community make up a minuscule 0.01% of the total lawyers in the US.

Software engineers:

There are 4.4 million software engineers in the US, or 1.3% of the population. In contrast, 56.5% of LessWrong survey respondents worked with computers, another dramatic overrepresentation, reflecting the particular appeal of the community to tech nerds. This gives an estimated 5,650 software engineers in the US community.

This means that despite the heavy focus on software and AI, the community still only makes up 0.13% of the software engineers in the US.
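The same back-of-the-envelope arithmetic for all three professions, as a rough sketch (using the survey shares and the assumed 10,000 US community members):

us_community = 10_000   # assumed US EA/rationalist population

professions = {
    # name: (share of the community per the survey, total number in the US)
    "physicists": (0.032, 21_000),
    "lawyers": (0.017, 1_300_000),
    "software engineers": (0.565, 4_400_000),
}

for name, (share, us_total) in professions.items():
    in_community = share * us_community
    print(f"{name}: ~{in_community:,.0f} in the community, "
          f"{in_community / us_total:.2%} of all US {name}")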

When you’re in an active community, where half the people are software engineers discussing AI and software constantly, it can certainly feel like you have a handle on what software engineers in general think about things. But beware: You are still living in that tiny slice up there, a slice that was not randomly selected. What is true for your bubble may not be true for the rest of the world.

Effective altruism:

Okay, those are general categories. What if we looked at the entire point of EA, effective altruism itself: evidence-based philanthropy?

Well, why don’t we look at a different “effective altruist” organisation: the Gates Foundation. You can disagree with the decisions and methods of the Gates Foundation, but I think it’s clear that they believe they are doing altruism that is evidence-based and effective, and trying to find the best things to do with the money. They do a lot of things similar to EA, such as large donation pledges. Both groups have been credited with saving lots of lives through targeted third-world interventions, but both have also been subject to significant scrutiny and criticism.

A post from 2021 (before the FTX crash) estimated there were about 650 people employed at EA orgs. The Gates Foundation has 1,818 employees.

This one organisation still has more employees than every EA organisation combined. Add to that all the other similar orgs, plus the rise of the randomista movement as the most influential force in development aid, and it seems that the Effective Altruism movement only makes up a small fraction of “effective altruists”.

Are there any areas where EA captures the majority of the talent share? I would say the only ones are fields that EA itself invented, or is close to the only group working on, such as longtermism or wild animal suffering.

Some advice for small communities:

Up till now, we’ve been in the “graphing the obvious” phase, where I establish that there is much more talent and expertise outside the community than in it. This applies to any small group; it does not mean the group is doomed and can’t accomplish anything.

But I think there are quite a few takeaways for a community that actually internalizes the idea of being a small fish in a big pond. I’m not saying people aren’t doing any of these already; for example, I’ve seen a few EA projects working in partnership with leading university labs, which I think is a great way to utilise outside expertise.

Check the literature

Have you ever had a great idea for an invention, only to look it up and find out it was already invented decades ago?

There are so many people around that chances are any new idea you have has already been thought up. Academia embraces this fact thoroughly with its encouragement of citations. You are meant to engage with what has already been said or done, and catch yourself up. Only then can you identify the gaps in research that you can jump into. You want to be standing on the shoulders of giants, not reinventing the wheel.

This saves everybody a whole lot of time. But unfortunately, a lot of articles in the EA/rat community seem to only cite or look at other blog posts in the same community. It has a severe case of “not invented here” syndrome. This is not to say they never cite outside sources, but when they do, the dive is often shallow or incomplete. Part of the problem is that rationalists tend to valorise coming up with new ideas more than they do finding old ones.

Before doing any project or entering any field, you need to catch up on existing intellectual discussion on the subject. This is easiest to find in academia, as it’s formally structured, but there is plenty to be found in books, blogs, podcasts and elsewhere outside your bubble.

The easiest way to break ground is to find underexplored areas and explore them. The amount of knowledge that is known is vast, but the amount that is unknown is far, far vaster.

Outside hiring

There have been a few posts on the EA forum talking about “outside hiring” as a somewhat controversial thing to encourage. I think the graphs above make a good case for outside hiring.

If you don’t consider outside hires, you are cutting out 99% or more of your applicant pool. This is very likely to exclude the best and most qualified people for the job, especially if the job is something like a lawyer for the reasons outlined above.

Of course, this is job dependent, and there are plenty of other factors involved, such as culture fit, level of trust, etc, that could favour the in-community members. But just be aware that a decision to only hire or advertise to ingroup members comes at a cost.

Consult outside experts

A while back I did an in-depth dive into Drexlerian nanotech. As a computational physicist, I had a handle on the theoretical side, but was sketchy on the experimental side. So I just… emailed the experimentalist. He was very friendly, answered all my questions, and was all around very helpful.

I think you’d be surprised at how many academics and experts are willing to discuss their subject matter. Most scientists are delighted when outsiders show interest in their work, and love to talk about it.

I won’t name names, but I have seen EA projects that failed for reasons that were not immediately obvious, but that in retrospect would have been obvious to a subject matter expert. In these cases, a mere one- or two-hour chat with an outside expert could have saved months of work and thousands of dollars.

The amount of knowledge outside your bubble outweighs the knowledge within it by a truly gargantuan margin. If you are entering a field in which you are not already an expert, it is foolish not to consult the existing experts in that field. This includes paying them for consulting time, if necessary. This doesn’t mean every expert is automatically right, but if you are disagreeing with an expert about their own field, you should be able to come up with a very good explanation as to why.

Insularity will make you dumber

It’s worth stating outright: Only talking within your ingroup will shut you off from a huge majority of the smart people of the world.

Rationalism loves jargon, including jargon that is just completely unnecessary. For example, the phrase “epistemic status” is a fun technique where you say how confident you are in a post you make. But it could be entirely replaced with the phrase “confidence level”, which means pretty much the exact same thing. This use of “shibboleths” is a fun way to promote community bonding. It also makes a lot of your work unreadable to people who haven’t been steeped in the movement for a very long time, thus cleaving off the rationalist movement from the vast majority of smart and skilled people in society. It gets insulated from criticism by the number of qualified people who can’t be arsed learning a new nerd language in order to tear apart your ideas.

If you primarily engage with material written in rationalist jargon, you are cutting yourself off from almost all the smart people in the world. You might object that you can still access the information in the outside world via rationalists who look through it and communicate it in your language. But be clear that this is playing a game of Chinese whispers with knowledge: whenever information is filtered through a pop-science lens, it gets distorted, with greater distortion occurring if the pop-science communicator is not themselves a subject matter expert. And the information you do see will be selected: the community as a whole will have massive blind spots.

Read widely, read outside your bubble. And when I say “read outside your bubble”, I don’t mean just reading people in your bubble interpreting work outside your bubble. And in any subject where all or most of your information comes from your bubble, reduce your confidence about that subject by a significant degree.

In-group “consensus” could just be selection effects

Physicists are a notoriously atheistic bunch. According to one survey, 79% of physicists do not believe in God. Conversely, that means that 21% of them do. If we go by the earlier figure of 21,000 physicists in the US, that means there are roughly 4,400 professional physicists who believe in God. This is still a lot of people!

Imagine I started a forum aimed at high-IQ religious people. Typical activities involve reading Bible passages, preaching faith, etc., which attracts a large population of religious physicists. I then poll the forum, and we find that we have hundreds of professional physicists, 95% of whom believe in God, all making convincing physics-based arguments for that position. What can I conclude from this “consensus”? Nothing!

The point here is that the consensus of experts within your group will not necessarily tell you anything about the consensus of experts outside your group. This is easy to see on questions where we have polls and such, but on other questions the in-group consensus may be secretly reinforcing false beliefs that just so happen to be selected for in the group.

Within Rationalism, the obvious filter is the reverence for “the sequences”, an extremely large series of pop-science blog posts. In its initial form, Lesswrong.com was basically a fan forum for these blogs. So obviously, people who like the sequences are far more likely to be in the community than those who don’t. As a result, there is a consensus within rationalism that the core ideas of the sequences are largely true. But you can’t point to this consensus as good evidence that they actually are true, no matter how many smart in-group members you produce to say so, because there could be hundreds of times as many smart people outside the community who think they are full of shit and bounced off the community after a few articles. (Obviously, this doesn’t prove that they’re false either.)

You should be skeptical of claims only made in these spaces

In any situation where the beliefs of a small community are in conflict with the beliefs of outside subject matter experts, your prior (initial guess) should be high that your community is wrong.

It’s a simple matter: the total amount of expertise, intelligence, and resources outside of your community almost always eclipses the amount within it by a large factor. You need an explanation of why and how your community is overcoming this massive gap.

I’m not saying you can never beat the experts. For an example of a case where contrarianism was justified, we can take the replication crisis in the social sciences. Here, statisticians can justify their skepticism of the field: statistics is hard and social scientists are often inadequately trained in it, and we can look at the use of statistics and find that it is bad and could lead to erroneous conclusions. Many people were pointing out these problems before the crisis became publicized.

It’s not enough to identify a possible path for experts to be wrong, you also have to have decent evidence that this explanation is true. Like, you can identify that because scientists are more left wing, they might be more inclined to think that climate change is real and caused by humans. And it is probably true that this leads to a nonzero amount of bias in the literature. But is it enough bias to generate such an overwhelming consensus of evidence? Almost certainly not.

The default stance towards any contrarian position should be skepticism. That doesn’t mean you shouldn’t try things: sometimes long shots actually do work out, and can pay off massively. But you should be realistic about the odds.

Conclusion

I’m not trying to be fatalistic here and say you shouldn’t try to discuss or figure things out just because you are in a small community. Small communities can get a lot of impressive stuff done. But you should remember, and internalize, that you are small, and be appropriately humble in response. Your little bubble will not solve all the secrets of the universe. But if you are humble and clever, and look outwards rather than inwards, you can still make a small dent in the vast mine of unknown knowledge.

Comments:

(this comment is partly self-plagiarized from here)

Before doing any project or entering any field, you need to catch up on existing intellectual discussion on the subject.

I think this is way too strong. There are only so many hours in a day, and they trade off between

  • (A) “try to understand the work / ideas of previous thinkers” and
  • (B) “just sit down and try to figure out the right answer”.

It’s nuts to assert that the “correct” tradeoff is to do (A) until there is absolutely no (A) left to possibly do, and only then do you earn the right to start in on (B). People should do (A) and (B) in whatever ratio is most effective for figuring out the right answer. I often do (B), and I assume that I’m probably reinventing a wheel, but it’s not worth my time to go digging for it. And then maybe someone shares relevant prior work in the comments section. That’s awesome! Much appreciated! And nothing went wrong anywhere in this process! See also here.

A weaker statement would be “People in LW/EA commonly err in navigating this tradeoff, by doing too much (B) and not enough (A).” That weaker statement is certainly true in some cases. And the opposite is true in other cases. We can argue about particular examples, I suppose. I imagine that I have different examples in mind than you do.

~~

To be clear, I think your post has large kernels of truth and I’m happy you wrote it.

'Before doing any project or entering any field, you need to catch up on existing intellectual discussion on the subject.'

I think this is way too strong.

Still probably directionally correct, though, especially for the typical EA / rationalist, especially in AI safety research (most often relatively young and junior in terms of research experience / taste).

On the tradeoff between (A) “try to understand the work / ideas of previous thinkers” and (B) “just sit down and try to figure out the right answer”, I think (A) might have already been made significantly easier by chatbots like Claude 3.5, while (B) probably hasn't changed anywhere near as much. I expect the differential to increase in the near term, with better LLMs.
 

especially in AI safety research

This is insanely wrong; it's exactly opposite of the truth. If you want to do something cool in the world, you should learn more stuff from what other humans have done. If, on the other hand, you want to solve the insanely hard engineering/philosophy problem of AGI alignment in time for humanity to not be wiped out, you absolutely should prioritize solving the problem from scratch.


insanely wrong

I'd like to offer some serious pushback on the practice of using words like "insane" to describe positions that are not obviously false and which a great number of generally reasonable and well-informed members of the community agree with. It is particularly inappropriate to do that when you have offered no concrete, object-level arguments or explanations [1] for why AI safety researchers should prioritize "solving the problem from scratch." 

Adding in phrases like "it's exactly opposite of the truth" and "absolutely" not only fails to help your case, but in my view actively makes things worse by using substance-free rhetoric that misleads readers into thinking the case you are bringing forward is stronger than it actually is or that this matter is so obvious and trivial that they shouldn't even need to think very hard about it before taking your side.

  1. ^

    By which I mean, you have included no such arguments in this particular comment, nor have you linked to any other place containing arguments that you agree with on this topic, nor have you offered any explanations in any other comments on this post (I checked, and you have made no other comments on it yet), nor does a cursory look at your profile seem to indicate any posts or recent comments where such ideas might appear.


I disagree re/ the word "insane". The position to which I stated a counterposition is insane.

"it's exactly opposite of the truth" and "absolutely" not only fails to help your case, but in my view actively makes things worse by using substance-free rhetoric that misleads readers into thinking the case you are bringing forward is stronger than it actually is or that this matter is so obvious and trivial that they shouldn't even need to think very hard about it before taking your side.

I disagree, I think I should state my actual position. The phrases you quoted have meaning and convey my position more than if they were removed.

I disagree, I think I should state my actual position. The phrases you quoted have meaning and convey my position more than if they were removed.

It does not matter one bit if this is your "actual position". The point of community norms about discourse is that they constrain what is or isn't appropriate to say in a given situation; they function on the meta-level by setting up proper incentives for users to take into account when crafting their contributions here, independently of their personal assessments about who is right on the object-level. So your response is entirely off-topic, and the fact you expected it not to be is revealing of a more fundamental confusion in your thinking about this matter.

Moderation (when done properly) does not act solely to resolve individual disputes on the basis of purely local characteristics to try to ensure specific outcomes. Remember, the law is not an optimizer, but rather a system informed by principles of mechanism design that generates specific, legible, and predictable set of real rules about what is or isn't acceptable, a system that does not bend in response to the clever arguments of an individual who thinks that he alone is special and exempt from them.

As Duncan Sabien once wrote:

Standards are not really popular.  Most people don't like them.  Or rather, most people like them in the abstract, but chafe when they get in the way, and it's pretty rare for someone to not think that their personal exception to the standard is more justified than others' violations.  Half the people here, I think, don't even see the problem that I'm trying to point at.  Or they see it, but they don't see it as a problem.

I think it would have been weird and useless for you to straight-up lie in your previous comment, so of course you thought what you were saying communicated your real position. Why else would you have written it? But "communicate whatever you truly feel about something, regardless of what form your writing takes" is a truly terrible way of organizing any community in which meaningful intellectual progress is intended. By contrast, giving explanations and reasoning in situations where you label the beliefs of others as "insane" prevents conversations from becoming needlessly heated and spiraling into Demon Threads, while also building towards a community that maintains high-quality contributions. 

All of this stuff has already been covered in the large number of expositions people have given over the last few years on what LessWrong is about and what principles animate norms and user behavior (1, 2, 3, 4, 5, etc).

I doubt that we're going to get anything useful here, but as an indication of where I'm coming from:

  1. I would basically agree with what you're saying if my first comment had been ad hominem, like "Bogdan is a doo-doo head". That's unhelpful, irrelevant, mean, inflammatory, and corrosive to the culture. (Also it's false lol.)
  2. I think a position can be wrong, can be insanely wrong (which means something like "is very far from the truth, is wrong in a way that produces very wrong actions, and is being produced by a process which is failing to update in a way that it should and is failing to notice that fact"), and can be exactly opposite of the truth (for example, "Redwoods are short, grass is tall" is, perhaps depending on contexts, just about the exact opposite of the truth). And these facts are often knowable and relevant if true. And therefore should be said--in a truth-seeking context. And this is the situation we're in.
  3. If you had responded to my original comment with something like

"Your choice of words makes it seem like you're angry or something, and this is coming out in a way that seems like a strong bid for something, e.g. attention or agreement or something. It's a bit hard to orient to that because it's not clear what if anything you're angry about, and so readers are forced to either rudely ignore / dismiss, or engage with someone who seems a bit angry or standoffish without knowing why. Can you more directly say what's going on, e.g. what you're angry about and what you might request, so we can evaluate that more explicitly?"

or whatever is the analogous thing that's true for you, then we could have talked about that. Instead you called my relatively accurate and intentional presentation of my views as "misleading readers into thinking the case you are bringing forward is stronger than it actually is or that this matter is so obvious and trivial..." which sounds to me like you have a problem in your own thinking and norms of discourse, which is that you're requiring that statements other people make be from the perspective of [the theory that's shared between the expected community of speakers and listeners] in order for you to think they're appropriate or non-misleading.

  4. The fact that I have to explain this to you is probably bad, and is probably mostly your responsibility, and you should reevaluate your behavior. (I'm not trying to be gentle here, and if gentleness would help then you deserve it--but you probably won't get it here from me.)

I think a position can be wrong, can be insanely wrong (which means something like "is very far from the truth, is wrong in a way that produces very wrong actions, and is being produced by a process which is failing to update in a way that it should and is failing to notice that fact"), and can be exactly opposite of the truth (for example, "Redwoods are short, grass is tall" is, perhaps depending on contexts, just about the exact opposite of the truth). And these facts are often knowable and relevant if true. And therefore should be said--in a truth-seeking context.

I agree to some extent, which is why I said the following to gears:

  1. It is fine[2] to label opinions you disagree with as "insane".
  2. It is fine to give your conclusions without explaining the reasons behind your positions.[3]
  3. It is not fine to do 1 and 2 at the same time.

The fact that you chose the word "insane" to describe something that did not seem obviously false, had a fair bit of support in this community, and that you had not given any arguments against at the time was the problem. 

The fact that you think something is "insane" is informationally useful to other people, and, all else equal, should be communicated. But all else is not equal, because (as I explained in my previous comments) it is a fabricated option to think that relaxing norms around the way in which particular kinds of information are communicated will not negatively affect the quality of the conversation that unfolds afterwards.

So you could (at least in my view, not sure what the mods think) say something is "insane" if you explain why, because this allows for opportunities to drag the conversation away from mud-slinging Demon Threads and towards the object-level arguments being discussed (and, in this case, saying you think your interlocutor's position is crazy could actually be helpful at times, since it signals a great level of disagreement and allows for quicker identification of how large the inferential distance between you and the other commenters is). Likewise, you could give your conclusions without presenting arguments or explanations for them, as long as your position is not stated in an overly inflammatory manner, because this then incentivizes useful and clear-headed discourse later on when users can ask what the arguments actually are. But if you go the third route, then you maximize the likelihood of the conversation getting derailed.

"Your choice of words makes it seem like you're angry or something, and this is coming out in a way that seems like a strong bid for something, e.g. attention or agreement or something. It's a bit hard to orient to that because it's not clear what if anything you're angry about, and so readers are forced to either rudely ignore / dismiss, or engage with someone who seems a bit angry or standoffish without knowing why. Can you more directly say what's going on, e.g. what you're angry about and what you might request, so we can evaluate that more explicitly?"

This framing focuses on the wrong part, I think. You can be as angry as you want to when you are commenting on LessWrong, and it seems to be inappropriate to enforce norms about the emotions one is supposed to feel when contributing here. The part that matters is whether specific norms of discourse are getting violated (about the literal things someone is writing, not how they feel in that moment), in which case (as I have argued above) I believe the internal state of mind of the person violating them is primarily irrelevant.

you have a problem in your own thinking and norms of discourse

I'm also not sure what you mean by this. You also implied later on that "requiring that statements other people make be from the perspective of [the theory that's shared between the expected community of speakers and listeners] in order for you to think they're appropriate" is wrong, which... doesn't make sense to me, because that's the very definition of the word appropriate: "meeting the requirements [i.e. norms] of a purpose or situation." 

The same statement can be appropriate or inappropriate, depending on the rules and norms of the community it is made in.


to think that relaxing norms around the way in which particular kinds of information are communicated will not negatively affect the quality of the conversation that unfolds afterwards.

If this happens because someone says something true, relevant, and useful, in a way that doesn't have alternative expressions that are really easy and obvious to do (such as deleting the statement "So and so is a doo-doo head"), then it's the fault of the conversation, not the statement.

doesn't have alternative expressions

The alternative expression, in this particular case (not in the mine run of cases), is not to change the word "insane" (because it seems you are certain enough in your belief that it is applicable here that it makes sense for you to communicate this idea some way), but rather to simply write more (or link to a place that contains arguments which relate, with particularity, to the situation at hand) by explaining why you think it is true that the statement is "insane".

If you are so confident in your conclusion that you are willing to label the articulation of the opposing view as "insane", then it should be straightforward (and more importantly, should not take so much time that it becomes daunting) to give reasons for that, at the time you make that labeling. 

it should be straightforward (and more importantly, should not take so much time that it becomes daunting) to give reasons for that

NOPE!

I think I'm going to bow out of this conversation right now, since it doesn't seem you want to meaningfully engage.


I'd be open to alternative words for "insane" the way I intended it.


The comment I was responding to also didn't offer serious relevant arguments.

https://tsvibt.blogspot.com/2023/09/a-hermeneutic-net-for-agency.html

The comment I was responding to also didn't offer serious relevant arguments.

And it didn't label the position it was arguing against as "insane", so this is also entirely off-topic. 

It would be ideal for users to always describe why they have reached the conclusions they have, but that is a fabricated option which does not take into account the basic observation that requiring such explanations creates such a tremendous disincentive to commenting that it would drastically reduce the quantity of useful contributions in the community, thus making things worse off than they were before.

So the compromise we reach is one in which users can state their conclusions in a relatively neutral manner that does not poison the discourse that comes afterwards, and then if another user has a question or a disagreement about this matter, later on they can then have a regular, non-Demon Thread discussion about it in which they explain their models and the evidence they had to reach their positions.

I think you are also expressing high emotive confidence in your comments. You are presenting a case, and your expressed confidence is slightly lower, but still elevated.

I agree[1], and I think it is entirely appropriate to do so, given that I have given some explanations of the mental models behind my positions on these matters.

For clarity, I'll summarize my conclusion here, on the basis of what I have explained before (1, 2, 3):

  1. It is fine[2] to label opinions you disagree with as "insane".
  2. It is fine to give your conclusions without explaining the reasons behind your positions.[3]
  3. It is not fine to do 1 and 2 at the same time.

With regards to your "taboo off-topic" reaction, what I mean by "off-topic" in this case is "irrelevant to the discussion at hand, by focusing on the wrong level of abstraction (meta-level norms vs object-level discourse) and by attempting to say the other person behaved similarly, which is incorrect as a factual matter (see the distinction between points 2 and 3 above), but more importantly, immaterial to the topic at hand even if true".

  1. ^

    I suspect my regular use of italics is part of what is giving off this impression.

  2. ^

    Although not ideal in most situations, and should be (lightly) discouraged in most spots.

  3. ^

    Although it would be best to be willing to engage in discussion about those reasons later on if other users challenge you on them.

The comment I was responding to also didn't offer serious relevant arguments.

I'm  time-bottlenecked now, but I'll give one example. Consider the Natural Abstraction Hypothesis (NAH) agenda (which, fwiw, I think is an example of considerably-better-than-average work on trying to solve the problem from scratch). I'd argue that even for someone interested in this agenda: 1. most of the relevant work has come (and will keep coming) from outside the LW community (see e.g. The Platonic Representation Hypothesis and compare the literature reviewed there with NAH-related work on LW); 2. (given the previous point) the typical AI safety researcher interested in NAH would do better to spend most of their time (at least at the very beginning) looking at potentially relevant literature outside LW, rather than either trying to start from scratch, or mostly looking at LW literature.


considerably-better-than-average work on trying to solve the problem from scratch

It's considerably better than average but is a drop in the bucket and is probably mostly wasted motion. And it's a pretty noncentral example of trying to solve the problem from scratch. I think most people reading this comment just don't even know what that would look like.

even for someone interested in this agenda

At a glance, this comment seems like it might be part of a pretty strong case that [the concrete ML-related implications of NAH] are much better investigated by the ML community compared to LW alignment people. I doubt that the philosophically more interesting aspects of Wentworth's perspectives relating to NAH are better served by looking at ML stuff, compared to trying from scratch or looking at Wentworth's and related LW-ish writing. (I'm unsure about the mathematically interesting aspects; the alternative wouldn't be in the ML community but would be in the mathematical community.)

And most importantly "someone interested in this agenda" is already a somewhat nonsensical or question-begging conditional. You brought up "AI safety research" specifically, and by that term you are morally obliged to mean [the field of study aimed at figuring out how to make cognitive systems that are more capable than humanity and also serve human value]. That pursuit is better served by trying from scratch. (Yes, I still haven't presented an affirmative case. That's because we haven't even communicated about the proposition yet.)

Links have high attrition rate, cf ratio of people overcoming a trivial inconvenience. Post your arguments compressed inline to get more eyeballs on them.

Can you expand on this concisely inline? I agree strongly with the comment you're replying to and think it has been one of MIRI's biggest weaknesses in the past decade that they didn't build the fortitude to be able to read existing work without becoming confused by its irrelevance. But I also think your and Abram's research direction intuitions seem like some of the most important in the field right now, alongside Wentworth's. I'd like to understand what it is that has held you back from speed reading external work for hunch seeding for so long. To me, it seems like solving from scratch is best done not from scratch, if that makes sense. Don't defer to what you read.


I'd like to understand what it is that has held you back from speed reading external work for hunch seeding for so long.

Well currently I'm not really doing alignment research. My plans / goals / orientation / thinking style have changed over the years, so I've read stuff or tried to read stuff more or less during different periods. When I'm doing my best thinking, yes, I read things for idea seeding / as provocations, but it's only that--I most certainly am not speed reading, the opposite really: read one paragraph, think for an hour and then maybe write stuff. And I'm obviously not reading some random ML paper, jesus christ. Philosophy, metamathematics, theoretical biology, linguistics, psychology, ethology, ... much more interesting and useful.

To me, it seems like solving from scratch is best done not from scratch, if that makes sense.

Absolutely, I 100% agree, IIUC. I also think:

  1. A great majority of the time, when people talk about reading stuff (to "get up to speed", to "see what other people have done on the subject", to "get inspiration", to "become more informed", to "see what approaches/questions there are"...), they are not doing this "from scratch not from scratch" thing.
  2. "the typical EA / rationalist, especially in AI safety research (most often relatively young and junior in terms of research experience / taste)" is absolutely and pretty extremely erring on the side of failing to ever even try to solve the actual problem at all.

Don't defer to what you read.

Yeah, I generally agree (https://tsvibt.blogspot.com/2022/09/dangers-of-deferrence.html), though you probably should defer about some stuff at least provisionally (for example, you should probably try out, for a while, the stance of deferring to well-respected philosophers about what questions are interesting).

I think it's just not appreciated how much people defer to what they read. Specifically, there's a lot of frame deference. This is usually fine and good in lots of contexts (you don't need to, like, question epistemology super hard to become a good engineer, or question whether we should actually be basing our buildings off of liquid material rather than solid material or something). It's catastrophic in AGI alignment, because our frames are bad.

Not sure I answered your question.

I think this is particularly incorrect for alignment, relative to a more typical STEM research field. Alignment is very young[1]. There's a lot less existing work worth reading than in a field like, say, lattice quantum field theory. Due to this, the time investment required to start contributing at the research frontier is very low, relatively speaking.

This is definitely changing. There's a lot more useful work than there was when I started dipping my toe into alignment three years ago. But compared to something like particle physics, it's still very little. 

  1. ^

    In terms of # total smart people hours invested

I'm curious - if you repeated this study, but with "the set of all Ivy League graduates" instead of "the EA/rationalist community", how does it compare? 

Preach, brother.

One hundred twenty percent agreed. Hubris is the downfall of the rationalist project.

Before doing any project or entering any field, you need to catch up on existing intellectual discussion on the subject.

My current take on this topic is to follow this scheme:
1) dedicate some time to think about the problem on your own, without searching the literature
2) look in the literature, compare with your own thoughts
3) get feedback from a human in the field
4) repeat

Do you think (1) makes sense, or is your position extreme enough to reject (1) altogether, or spend only a very short time on it, say < 1 hour?

IMO trying the problem yourself before researching it makes you appreciate what other people have already done even more. It's pretty easy to fall victim to hindsight bias if you haven't experienced the difficulty of actually getting anywhere.

Rationalism loves jargon, including jargon that is just completely unnecessary. For example, the phrase “epistemic status” is a fun technique where you say how confident you are in a post you make. But it could be entirely replaced with the phrase “confidence level”, which means pretty much the exact same thing. 

Jargon is good when it allows us to make distinctions. The phrase “epistemic status” as used in this community does not mean the same thing as  “confidence level”. 

A confidence level boils down to the probability that a given claim is true. It might be phrased in more vague language, but it's about the likelihood that a given thesis is correct.

If I say "Epistemic status: This is written in textbooks of the field" I'm not stating a probability about whether or not my claim is true. I can make the statement without having to be explicit about my confidence in the textbooks of a field. Different readers might have different confidence levels in textbooks of the field I'm talking about. 

If I listen to someone making claims about physics and Bob says A is very likely while Dave says A is certainly false, I get both of their confidence levels. If I additionally learn that the epistemic status of Bob is that he's a physics professor speaking in his field of expertise, while Dave never engaged academically with physics but spent a lot of time thinking about physics independently, I learn something that goes beyond what I got from listening to both of their confidence levels.

This saves everybody a whole lot of time. But unfortunately a lot of articles in the ea/rat community seem to only cite or look at other blog posts in the same community. It has a severe case of “not invented here” syndrome. 

This is generally true of academia as well: academics generally cite ideas only if those ideas have been expressed by other academics, and are frequently even focused on whether they have been expressed in their own discipline.

If you want an example of this dynamic, Nassim Taleb writes in The Black Swan about how what economists call the Black–Scholes formula is a formula that was known to quants before under another name. Economists still credit Black–Scholes for it, because what traders do is “not invented here”.

That said, of course reading broadly is good.

I use rationalist jargon when I judge that the benefits (of pointing to a particular thing) outweigh the costs (of putting off potential readers). And my opinion is that “epistemic status” doesn’t make the cut.

Basically, I think that if you write an “epistemic status” at the top of a blog post, and then delete the two words “epistemic status” while keeping everything else the same, it works just about as well. See for example the top of this post.

Great post. Self-selection seems huge for online communities, and I think it's no different on these fora.

Confidence level: General vague impressions and assorted thoughts follow; could very well be wrong on some details.

A disagreement I have with both the rationalist and EA communities is what the process of coming to robust conclusions looks like. In those communities, it seems like the strategy is often to identify a few super-geniuses who go do a super-deep analysis, and come to a conclusion that's assumed to be robust and trustworthy. See the "Groupthink" section on this page for specifics.

From my perspective, I would rather see an ordinary-genius do an ordinary-depth analysis, and then have a bunch of other people ask a bunch of hard questions. If the analysis holds up against all those hard questions, then the conclusion can be taken as robust.

Everyone brings their own incentives, intuitions, and knowledge to a problem. If a single person focuses a lot on a problem, they run into diminishing returns regarding the number of angles of attack. It seems more effective to generate a lot of angles of attack by taking the union of everyone's thoughts.

From my perspective, placing a lot of trust in top EA/LW thought leaders ironically makes them less trustworthy, because people stop asking why the emperor has no clothes.

The problem with saying the emperor has no clothes is: either you show yourself a fool, or else you're attacking a high-status person. Not a good prospect either way, in social terms.

EA/LW communities are an unusual niche with opaque membership norms, and people may want to retain their "insider" status. So they do extra homework before accusing the emperor of nudity, and might just procrastinate indefinitely.

There can also be a subtle aspect of circular reasoning to thought leadership: "we know this person is great because of their insights", but also "we know this insight is great because of the person who said it". (Certain celebrity users on these fora get 50+ positive karma on basically every top-level post. Hard to believe that the authorship isn't coloring the perception of the content.)

A recent illustration of these principles might be the pivot to AI Pause. IIRC, it took a "super-genius" (Katja Grace) writing a super long post before Pause became popular. If an outsider simply said: "So AI is bad, why not make it illegal?" -- I bet they would've been downvoted. And once that's downvoted, no one feels obligated to reply. (Note, also -- I don't believe there was much reasoning transparency regarding why the pause strategy was considered unpromising at the time. You kinda had to be an insider like Katja to know the reasoning in order to critique it.)

In conclusion, I suspect there are a fair number of mistaken community beliefs which survive because (1) no "super-genius" has yet written a super-long post about them, and (2) poking around by asking hard questions is disincentivized.

From my perspective, I would rather see an ordinary-genius do an ordinary-depth analysis, and then have a bunch of other people ask a bunch of hard questions. If the analysis holds up against all those hard questions, then the conclusion can be taken as robust.

On LessWrong, there's a comment section where hard questions can be asked and are asked frequently. The same is true on ACX.

On the other hand, GiveWell recommendations don't allow raising hard questions in the same way and most of the grant decisions are made behind closed doors.

A recent illustration of these principles might be the pivot to AI Pause. [...] I don't believe there was much reasoning transparency regarding why the pause strategy was considered unpromising at the time. 

I don't think AI policy is a good example for discourse on LessWrong. There are strategic reasons to be less transparent about how to affect public policy than for most other topics. Everything that's written publicly can be easily picked up by journalists wanting to write stories about AI.

I think you can argue that more reasoning transparency around AI policy would be good, but it's not something that generalizes over other topics on LessWrong.

On LessWrong, there's a comment section where hard questions can be asked and are asked frequently.

In my experience, asking hard questions here is quite socially unrewarding. I could probably think of a dozen or so cases where I think the LW consensus "emperor" has no clothes, that I haven't posted about, just because I expect it to be an exercise in frustration. I think I will probably quit posting here soon.

I don't think AI policy is a good example for discourse on LessWrong. There are strategic reasons to be less transparent about how to affect public policy than for most other topics.

In terms of advocacy methods, sure. In terms of desired policies, I generally disagree.

Everything that's written publicly can be easily picked up by journalists wanting to write stories about AI.

If that's what we are worried about, there is plenty of low-hanging fruit in terms of e.g. not tweeting wildly provocative stuff for no reason. (You can ask for examples, but be warned, sharing them might increase the probability that a journalist writes about them!)

Noting that I'm upvoting, but mostly for the "How Big is EA and rationalism" section. I've had "get a good order of magnitude estimate for the community" on my backlog for a while and never got it into a place that felt publishable. I'm glad someone got to it!

median rationalist at roughly MENSA level. This still feels wrong to me: if they’re so smart, where are the Nobel laureates? The famous physicists? And why does arguing on LessWrong make me feel like banging my head against the wall?

I think you'd have to consider both Scott Aaronson and Tyler Cowen to be rationalist-adjacent, and both are considered intellectual heavyweights.

Dustin Moskovitz is EA-adjacent, again considered a heavyweight, but applied to business rather than academia.

Then there's the second point, but unfortunately I haven't seen any evidence that someone being smart makes them pleasant to argue with (the contrary in fact)

Emmett Shear might also count, but he might merely be rationalist-adjacent.
