This is much, much better than the draft version. In particular, I no longer have the impression I had when giving feedback on the draft, namely that it read like "Here's how you can audition for a spot in our prestigious club".
So kudos for listening to feedback <3, and apologies for my exhausting style of ultra-detailed feedback.
Anyway, you made the mistake (?) of asking for more feedback, so I have more of it T_T. I've split it into three separate comments: typos, language, and substantial feedback.
Could the CFAR handbook or Tuning your cognitive strategies be put in the foundational reading section, alongside the Sequences and Codex and HPMOR?
Cognitive tuning isn't very foundational, and possibly not even safe (although people worried about the safety seem to be mistaken). But if enough people try it, then it has significant potential to become its own entire field of successful human intelligence augmentation. AFAIK it offers a more nuanced approach to intelligence-augmenting habit formation than anything I've seen from any other source.
The CFAR handbook is good stuff that gets at important aspects of rationality, but I don't think it counts either as something that the core LessWrong userbase has mostly read, or as something that gets used nearly as regularly in conversations here. Among other things, the PDF of it wasn't generally available until 2020, and it wasn't available as a nicely formatted sequence until a year ago.
A bunch of the intro feels quite molochpilled to me, e.g. "stay true to our values" and the entire "systematized winning" framing that we still seem to bring up here (concerning in the sense of implying conflict games). Since the negative interpretations aren't the intended ones, I suspect that we're a low edit distance from avoiding the implication. Unfortunately, it's late and I post this without any fixes in mind; I just thought I'd express the viewpoint.
Sorry to have missed this while it was in draft form!
short answer: apparently I'm not sure how to clarify it.
Before this change, which I feel fixes the main issue I was worried about:
rationalists should win [at life, their goals, etc]
it sounded, to my model of how my friends would react if I shared this to invite them to participate here, like something many of them would read as "win at the zero-sum game of life". This still has some ambiguity in that direction: by not clearly implying that life isn't zero-sum (a belief that a certain kind of friend worries anyone who thinks themselves smarter or more rational than others is likely to hold), that sort of easily spooked friend will be turned away by this phrasing. I don't say this to claim this friend is correct; I say this because I want to invite more of this sort of friend to participate here. I also recognize that accommodating the large number of easily spooked humans out there can be a chore, which is why I phrase the criticism by describing in detail how the critique is based on a prediction about those who won't comment about it themselves. Those who do believe life is zero-sum, and those who spend their day angry at the previous group, should, in my opinion, both be able to read this and get excited that this rational viewpoint has a shot at improving on their own; the conflict between these types of friend should be visibly "third door"ed here. Doing this needs a subtlety that I am failing to encode, which is why I write out this long meta paragraph instead of proposing a fix. So I just want to write out a more detailed overview of my meta take and let it sit here. Perhaps this is because the post is already at the pareto frontier of what my level of intelligence and rationality can achieve, and this feedback is therefore nearly useless!
In other words: nothing actually specifically endorses moloch. But there's a specific kind of vibe that is common around here, which I think a good intro should help onramp people into understanding, and which presently is an easier vibe to get started with for the type of friend who believes life is zero sum and would like to win against others.
Btw, I removed my vote from my own starting comment, based on a hunch about how I'd like comments to be ordered here.
The question of whether truth-seeking (epistemic) rationality is actually the same as winning (instrumental) rationality has never been settled. In the interests of epistemic rationality, it might have been better to phrase this as "we are interested in seeking both truth and usefulness".
Some of that changed from the last draft. I just made a change to clarify in the case of "winning" since that seemed easy.
it's a lot more productive for everyone involved if you're able to respond to or build upon the existing arguments, e.g. showing why you think they're wrong
Good opportunity to say "showing why they're wrong" instead (without "you think"), to avoid connotation of "it's just your opinion" rather than possibility of actually correct bug reports.
A more rational reasoning process tends to arrive at true beliefs and good decisions more often than a less rational process.
It's not clear from this or what immediately follows in this section whether you intend this statement as a tautological definition of a process (a process that "tends to arrive at true beliefs and good decisions more often" is what we call a "more rational reasoning process") or as an empirically verifiable prediction about a yet-to-be-defined process (if you use a TBD "more rational reasoning process" then you will "tend[] to arrive at true beliefs and good decisions more often"). I could see people drawing either conclusion from what's said in this section.
Since you've gone with the definition, are you sure that definition is solid? A reasoning process like "spend your waking moments deriving mathematical truths using rigorous methods; leave all practical matters to curated recipes and outside experts" may tend to arrive at true beliefs and good decisions more often than "attempt to wrestle as rationally as you can with all of the strange and uncertain reality you encounter, and learn to navigate toward worthy goals by pushing the limits of your competence in ways that seem most promising and prudent" but the latter seems to me a "more rational reasoning process."
The conflation of rationality with utility-accumulation/winning also strikes me as questionable. These seem to me to be different things that sometimes cooperate but that might also be expected to go their separate ways on occasion. (This, unless you define winning/utility in terms of alignment with what is true, but a phrase like "sitting atop a pile of utility" doesn't suggest that to me.)
If you thought you were a shoo-in to win the lottery, and in fact you do win, does that retrospectively convert your decision to buy a lottery ticket into a rational one in addition to being a fortunate one? (Your belief turned out to be true, your decision turned out to be good, you got a pile of utility and can call yourself a winner.)
A thing I should likely include is something like: the definition gets disputed, but what I present is the most standard one.
Typo feedback:
If you arrived here out of interested in AI
"out of interest"
LessWrong is online forum/community
"is an online forum and community"
a reasoning process that responds to evidence is more likely to believe true things than one that just goes with what's convenient to believe."
"more likely to lead to true beliefs" (a reasoning process doesn't believe anything)
a) The original article is capitalized as "Rationality is Systematized Winning"
b) After this line in the essay, there's an empty line inside the quote which can be removed.
For consistency, the dash here should be an em-dash: –
LessWrong is a good place for:
In the following list of bullet points, the grammar doesn't work.
a) Currently they read as "LessWrong is a good place for who wants to work collaboratively" etc., so obviously a word like "someone" or "people" is missing. And the entire structure might work better if it was instead phrased as "LessWrong is a good place for people who..." or "LessWrong is a good place for you if you", with each bullet point beginning with "... <verb>".
b) The sentences also currently mix up two ways of address, namely "someone who" and "you". E.g. look at this sentence: "who likes acknowledging... to your reasoning"
We, the site moderators, don't take for granted that what makes our community special won't stay that way without intentional effort.
I'm not entirely sure, but I think the "won't" here might be a wrong negation. How about something like the following:
"We, the site moderators, don't take for granted what makes our community special, and that preserving it will require intentional effort."
– paraphrased and translated chatlog (from german)
"German"
These give LessWrong a pretty distinct style from the rest of Internet.
"of the Internet"
Rather than say that is "extremely unlikely", we'd say "I think there's a 1% chance or lower of it happening".
"Rather than say that X is... that X happens."
that seem to make conversation worse
"conversations"
these are not official LessWrong site guidelines, but suggestive of the culture around here:
"These"
for the people who'd liked that writing and wanted to have discussion inspired by the ways of thinking he described and demonstrated
"wanted to have discussions"
"he'd described"
Ways to get started
"started:"
Also, some of the bullet points immediately after this are in past tense for some reason.
Rationality: A-Z was an edited and distilled version compiled in 2015 of ~400 posts.
"consisting of ~400 posts"
Highlights from the Sequences is 50 top posts from the Sequences. They're a good place to start.
"consists of 50 top posts"
this is just a heads up
heads-up
LessWrong is also integrated with the Alignment Forum
"Forum."
doing so ensures you'll write something well received
"well-received"
The full Sequences is pretty long
"are pretty long"
and see what the style is on LessWrong
"and see what the style is on LessWrong."
If you have questions about the site, here are few places you can get answers:
"here are a few places where"
many more people are flowing to LessWrong because we have discussion of it
I find the current phrasing a bit weird. Maybe "because we host discussions of it"?
It's possible to want to see more of something (e.g. interesting arguments) even if you disagree with them
", even if you disagree with it"
it's okay if your first submission or several don't meet the bar, we'll give you feedback on what to change if something's not good
All other bullet points here are phrased as full sentences with a period at the end.
Rules to be aware of
All bullet points following this are missing periods at the end.
[Aspiring] rationalists should win [at life, their goals, etc]. You know a rationalist because because they're sitting atop a pile of utility. – Rationality is systematized winning
"because because" should probably be "because"
We, the site moderators, don't take for granted that what makes our community special won't stay that way without intentional effort.
"won't stay that way" should probably be "would stay that way"
What LessWrong is about: "Rationality"
I don't know how to phrase the question but, basically, "what does that mean"?
If a new user comes to LW, reads the New User's Guide to LessWrong first, and then starts browsing the latest posts/recommendations, they will quickly notice that, in practice, LW is mostly about AI, or at least that most posts are about AI, and that this has been the case for a while already.
And that is despite the positive karma bias towards Rationality and World modeling by default, which I assume is an effort from you (the LW team) to make LW about rationality, and not about AI (I appreciate the effort).
So, the sentence "What LW is about: 'Rationality'": is it meant to describe the website? In that case it seems like a fairly inaccurate description. Or is it meant to be a promise made to new users, that is, "we know that, right now, discussions are focused on AI, but we, the LW team, know that they will come back to rationality / are committed to making them come back to rationality"?
I don't want to criticize the actions of the LW team. I understand that you are aware of this situation, and that there might not exist a better equilibrium between wanting LW to be about rationality, not wanting to shut down AI discussions because they have some value, and not wanting to prevent users from posting about anything (including AI) as long as some quality standards are met. Still, I am worried about the gap a new user would observe between the description of LW written here and what they will find on the site.
A few points.
Good points
Thank you
You may be right regarding what new users care about (usually one registers on a site to comment on a discussion, for example), but the problem is that from that perspective, LW is definitely about AI, no matter what the New User's Guide or the mods or the long-term users say. After all, AI-related news is the primary reason behind the increased influx of new users to LW, so those users are presumably here for AI content.
One way in which the guide and mod team try to counteract that impression is by showing new users curated stuff from the archives, but it might also be warranted to further deemphasize the feed.
I'm a new member here and curious about the site's view on responding to really old threads. My first comment was on a post that turned out to be four years old. It was a post by Wei Dai and appeared at the top of the page today, so I assumed it was new. I found the content to be relevant, but I'd like to know if there is a shared notion of "don't reply to posts that are more than X amount in the past."
I love getting comments on old posts! (There would be less reason to write if all writing were doomed to be ephemera; the reverse-chronological format of blogs shouldn't be a straitjacket or death sentence for ideas.)
Absolutely. I've just gotten a 30-day trial for Matt Yglesias' SlowBoring substack, and figured I'd look through the archives... But then I immediately realized that Substack, just like reddit etc., practically doesn't care about preserving, curating or resurfacing old content. Gwern has a point here on internet communities prioritizing content on different timescales by design, and in that context, LessWrong's attempts to preserve old content are extremely rare.
I'm very confident that there is no norm of pushing people away from posting on old threads. I'm generally confident that most people appreciate comments on old posts. However, I think it is also true that comments on old posts are unlikely to be seen, voted on, or responded to.
I agree; if there's any norm at all here, it's a counternorm to that. I also agree with the observation that such comments are often (sadly) ignored.
It's totally normal to comment on old posts. We deliberately design the forum to make doing so easier, and to let people see when you have.
(actually your comment here makes me realize we should probably somehow indicate when there are new comments on the top-of-the-page spotlight post, so people can more easily see and continue the convo)
So does LessWrong, but they quickly disappear (because there's a high volume of comments). GreaterWrong doesn't have Spotlight Items so the point is a bit moot, but the idea here is that everyone is nudged more to see new comments on the current Spotlight Item on LessWrong.
(i.e. the spotlight item shown at the top of the page)
Historically, LessWrong was seeded by the writings of Eliezer Yudkowsky, an artificial intelligence researcher.
He usually describes himself as a decision theorist if asked for a description of his job.
Some typos:
rationality lessons we've accumulated and made part of our to our thinking
Seems like some duplicated words here.
weird idea like AIs being power and dangerous in the nearish future.
Perhaps: "weird ideas like AIs being powerful and dangerous"
We, the site moderators, don't take for granted that what makes our community special won't stay that way without intentional effort.
The double negative here distorts the meaning of this sentence.
Thanks @David Gross for the many suggestions and fixes! Much appreciated. Clearly should have gotten this more carefully proofread before posting.
All the typo comments are great, but the resolved typos are mixed in with open feedback. Is it possible to hide those or bundle them together, somehow, so they don't clutter the comments here?
I also frequently make typo comments, and this problem is why I've begun neutral-voting my own typo comments, so they start on 0 karma. If others upvote them, the problem is that the upvote is meant to say "thanks for reporting this problem", but it also means "I think more people should see this". And once the typo is fixed, the comment is suddenly pointless, but still being promoted to others to see.
Alternatively, I think a site norm would be good where post authors are allowed and encouraged to just delete resolved typo comments and threads. I don't know, however, if that would also delete the karma points the user has gained via reporting the typos. And it might feel discouraging for the typo reporters, knowing that their contribution is suddenly "erased" as if it had never happened.
A technical alternative would be an archival feature, where you or a post author can mark a comment as archived to indicate that it's no longer relevant. Once archived, a comment is either moved to some separate comments tab, or auto-collapsed and sorted below all other comments, or something.
Although encouraged, you don't have to read this to get started on LessWrong!
This is grammatically ambiguous. The "encouraged" shows up out of nowhere without much indication of who is doing the encouraging or what they are encouraging. ("Although [something is] encouraged [to someone by someone], you don't have to read this...")
Maybe "I encourage you to read this before getting started on LessWrong, but you do not have to!" or "You don't have to read this before you get started on LessWrong, but I encourage you to do so!"
I realized something important about psychology that is not yet publicly available, or that is very little known compared to its importance (60%). I don't want to publish this as a regular post, because it may greatly help in the development of AGI (40% that it helps and 15% that it greatly helps), and I would like to help only those who are trying to create an aligned AGI. What should I do?
I'd ask in the Open Thread rather than here. I don't know of a canonical answer, but it would be good if someone wrote one.
Hey, I wonder what your policy is on linking blog posts. I have some texts that might be interesting to this community, but I don't really feel like copying everything from HTML here and duplicating the content. At the same time, I know that some communities don't like people promoting their own content. What are the best practices here?
Typo: "If you arrived here out of interested in AI" instead of "If you arrived here out of interest in AI".
The road to wisdom? Well, it's plain
and simple to express:
Err
and err
and err again
but less
and less
and less.
– Piet Hein
Why a new user guide?
You don't have to read this to get started on LessWrong, but we encourage you to!
LessWrong is a pretty particular place. We strive to maintain a culture that's uncommon for web forums[1] and to stay true to our values. Recently, many more people have been finding their way here, so I (lead admin and moderator) put together this intro to what we're about.
My hope is that if LessWrong resonates with your values and interests, this guide will help you become a valued member of the community. And if LessWrong isn't the place for you, this guide will help you have a good "visit" or simply seek other pastures.
Contents of this page/email
If you arrived here out of interest in AI, make sure to read the section on LessWrong and Artificial Intelligence.
What LessWrong is about: "Rationality"
LessWrong is an online forum and community that was founded with the purpose of perfecting the art of human[2] rationality.
While truthfulness is a property of beliefs, rationality is a property of reasoning processes. Our definition[3] of rationality is that a more rational reasoning process tends to arrive at true beliefs and good decisions more often than a less rational process. For example, a reasoning process that responds to evidence is more likely to lead to true beliefs than one that just goes with what's convenient to believe. An aspiring rationalist[4] is someone who aspires to improve their own reasoning process to arrive at truth more often.
On LessWrong we attempt (though don't always succeed) to apply the rationality lessons we've accumulated to any topic that interests us, and especially topics that seem important, like how to make the world a better place. We don't just care about truth in the abstract, but care about having true beliefs about things we care about so that we can make better and more successful decisions.
Right now, AI seems like one of the most (or the most) important topics for humanity. It involves many tricky questions, high stakes, and uncertainty in an unprecedented situation. On LessWrong, many users are attempting to apply their best thinking to ensure that the advent of increasingly powerful AI goes well for humanity.[5]
Is LessWrong for you?
LessWrong is a good place for someone who:
If many of these apply to you, then LessWrong might be the place for you.
LessWrong has been getting more attention (e.g. we get linked in major news articles somewhat regularly these days), and so many more people have been showing up on the site. We, the site moderators, don't take for granted that what makes our community special will stay that way without intentional effort, so we are putting more effort into tending to our well-kept garden.
If you're on board with our program and will help make our community more successful at its goals, then welcome!
Okay, what are some examples of what makes LessWrong different?
The LessWrong community shares a culture that encodes a bunch of built-up beliefs, opinions, concepts, and values about how to reason better. These give LessWrong a pretty distinct style from the rest of the Internet.
Some of the features that set LessWrong apart:
Philosophical Heritage: The Sequences
Over two years between 2006 and 2009, Eliezer Yudkowsky wrote a sequence of blog posts that shared his philosophy/beliefs/models about rationality[7]; collectively those blog posts are called The Sequences. In 2009, Eliezer founded LessWrong as a community forum for the people who'd liked that writing and wanted to have discussions inspired by the ways of thinking he'd described and demonstrated.
If you go to a math conference, people will assume familiarity with calculus; the literature club likely expects you've read a few Shakespeare plays; the baseball enthusiasts club assumes knowledge of the standard rules. On LessWrong people expect knowledge of concepts like Conservation of Expected Evidence and Making Beliefs Pay Rent and Adaptation-Executers, not Fitness-Maximizers.
Not all the most commonly referenced ideas come from The Sequences, but enough of them do that we strongly encourage people to read The Sequences.
Ways to get started:
Much of the spirit of LessWrong can also be gleaned from Harry Potter and the Methods of Rationality (a fanfic by the same author as The Sequences). Many people found their way to LessWrong via reading it.
Don't worry! You don't have to know every idea ever discussed on LessWrong to get started; this is just a heads-up on the kind of place this is.
Topics other than Rationality
We are interested in rationality not for the sake of rationality alone, but because we care about lots of other things too. LessWrong has rationality as a central focus, but site members are interested in discussing an extremely wide range of topics, albeit using our rationality toolbox/worldview.
Artificial Intelligence
If you found your way to LessWrong recently, it might be because of your interest in AI. For several reasons, the LessWrong community has a strong interest in AI, and specifically in causing powerful AI systems to be safe and beneficial.
Even if you found your way to LessWrong because of your interest in AI, it's important for you to be aware of the site's focus on rationality, as this shapes expectations we have of all users in their posting, commenting, etc.
How to get started
Because LessWrong is a pretty unusual place, it's usually a good idea for users to have spent some time on the site before writing their own posts or getting deep into comment discussions – doing so makes it much more likely you'll write something well-received.
Here's the reading we recommend:
Foundational reading
LessWrong grew from the people who read Eliezer Yudkowsky's writing on a shared blog, overcomingbias.com, and then migrated to a newly founded community blog in 2009. To better understand the culture and shared assumptions on LessWrong, read The Sequences.
The full Sequences are pretty long, so we also have The Sequences Highlights for an initial taste. The Codex, a collection of writing by Scott Alexander (author of Slate Star Codex/Astral Codex Ten), is also a good place to start, as is Harry Potter and the Methods of Rationality.
Exploring your interests
The Concepts Page shows a very long list of topics on which LessWrong has posts. You can use that page to find posts that cover topics interesting to you, and see what the style is on LessWrong.
Participate in welcome threads
The monthly general Open and Welcome thread is a good place to introduce yourself and ask questions, e.g. requesting reading recommendations or floating your post ideas. There are frequently new "all questions welcome" AI Open Threads if that's what you'd like to discuss.
Attend a local meetup
There are local LessWrong (and SSC/ACX) meetups in cities around the world. Find one (or register for notifications) on our event page.
Helpful Tips
If you have questions about the site, here are a few places where you can get answers:
How to ensure your first post or comment is well-received
This is a hard section to write. The new users who least need to read it are the most likely to spend time worrying about the below, and those who need it most are likely to ignore it. Don't stress too hard. If you submit something and we don't like it, we'll give you some feedback.
A lot of the below is written for the people who aren't putting in much effort at all, so we can at least say "hey, we did give you a heads-up in multiple places".
There are a number of dimensions upon which content submissions may be strong or weak. Strength in one place can compensate for weakness in another, but overall the moderators assess each first post/comment from new users for the following. If the first submission is lacking, it might be rejected and you'll get feedback on why.
Your first post or comment is more likely to be approved by moderators (and upvoted by general site users) if you:
Demonstrate understanding of LessWrong rationality fundamentals. Or at least don't do anything contravened by them. These are the kinds of things covered in The Sequences such as probabilistic reasoning, proper use of beliefs, being curious about where you might be wrong, avoiding arguing over definitions, etc. See the Foundational Reading section above.
Write a clear introduction. If your first submission is lengthy, i.e. a long post, it's more likely to get quickly approved if the site moderators can quickly understand what you're trying to say rather than having to delve deep into your post to figure it out. Once you're established on the site and people know that you have good things to say, you can pull off having a "literary" opening that doesn't start with the main point.
Address existing arguments on the topic (if applicable). Many topics have been discussed at length already on LessWrong, or have an answer strongly implied by core content on the site, e.g. from the Sequences (which has rather large relevance to AI questions). Your submission is more likely to be accepted if it's clear you're aware of prior relevant discussion and are building upon it. It's not a big deal if you weren't aware; there's just a chance the moderator team will reject your submission and point you to relevant material.
This doesn't mean that you can't question positions commonly held on LessWrong, just that it's a lot more productive for everyone involved if you're able to respond to or build upon the existing arguments, e.g. showing why they're wrong.
Address the LessWrong audience. A recent trend is more and more people crossposting from their personal blogs, e.g. their Substack or Medium, to LessWrong. There's nothing inherently wrong with that (we welcome good content!), but many of these posts neither strike us as particularly interesting or insightful, nor demonstrate an interest in LessWrong's culture/norms or audience (as revealed by a very different style and by not really responding to anyone on site).
It's good (though not absolutely necessary) when a post is written for the LessWrong audience and shows that by referencing other discussions on LessWrong (links to other posts are good).
Aim for a high standard if you're contributing on the topic of AI. As AI becomes higher and higher profile in the world, many more people are flowing to LessWrong because we host discussions of it. In order not to lose what makes our site uniquely capable of making good intellectual progress, we have particularly high standards for new users showing up to talk about AI. If we don't think your AI-related contribution is particularly valuable, and it's not clear you've tried to understand the site's culture or values, then it's possible we'll reject it.
Don't worry about it too hard.
It's okay if we don't like your first submission; we will give you feedback. In many ways, the bar isn't that high. As I wrote above, this document exists so that not being approved on your first submission doesn't come as a surprise. If you're writing a comment and not a 5,000-word post, don't stress about it.
If you do want to write something longer, there is a much lower bar for open threads, e.g. the general one or AI one. That's a good place to say "I have an idea about X, does LessWrong have anything on that already?"
In conclusion, welcome!
And that's it! Hopefully this intro sets you up for good reading and good engagement with LessWrong.
Appendices
The Voting System
The voting or "karma" system is pretty integral to how LessWrong promotes (or hides) content. The standard advice for how to vote is: upvote if you want to see more of something, downvote if you want to see less.
Strong Votes and Vote Strength
LessWrong has strong votes too, for when you feel particularly strongly about something. Different users have different vote strengths based on how many upvotes/downvotes they've received.
Two-Axis System
It's possible to want to see more of something (e.g. interesting arguments) even if you disagree with it, or to think an argument is weak even though it's for a conclusion you agree with. LessWrong makes it possible to express wanting to see more/less of something separately from whether you agree/disagree with it. (Currently this is only available on comments.) This means that upvotes and downvotes on the main axis can be used to express judgments of quality separate from agreement. But the same spirit applies to posts too.
LessWrong moderator's toolkit
The LessWrong mod team likes to be transparent about our moderation process. We take tending the garden seriously, and are continuously improving our tools for maintaining a well-kept site. Here are some of our tools and processes.
Initial user/content review
Moderator actions
When there's stuff that seems to make the site worse, in order of severity, we'll apply the following:
Rules to be aware of
I won't claim that we're entirely unique, but I don't think our site is typical of the internet.
Some people pointed out to me that other Internet communities also aim more in the direction of collaborative and truth-seeking discourse such as Reddit's ELI5 or Change My View; adjacent communities like Astral Codex Ten; and discourse in technical communities like engineers or academics; etc.
We say "human" rationality because we're most interested in how we humans can perform best given how our brains work (as opposed to the general rationality that'd apply to AIs and aliens too).
The definition of "rationality" on LessWrong isn't 100% universally agreed to, though this one is the most standard.
This is ideally what we'd call ourselves all the time, but since it's a bit of a mouthful, people tend to just say rationalist without qualification. Nonetheless, we do not claim that we've definitely attained that much rationality. But we're aiming to.
In fact, one of Eliezer Yudkowsky's (founder of LessWrong) ulterior motives for founding LessWrong in 2009 was that rationality would help people think about AI. Back in 2009, it took more perception and willingness to discern the truth of weird ideas like AIs being powerful and dangerous in the nearish future.
As opposed to beliefs being for signaling group affiliation and having pleasant feelings.
In a 2014 comment, Eliezer described the Sequences as containing 60% standard positions, 25% ideas you could find elsewhere with some hard looking, and 15% original ideas. He says that the non-boring tone might have fooled people into thinking more of it is original than actually is, but also that the curation of which ideas he included, and how they fit together into a single package, was itself a form of originality.