The road to wisdom? Well, it's plain
and simple to express:

Err
and err
and err again
but less
and less
and less.

– Piet Hein

LessWrong is an online forum and community dedicated to improving human reasoning and decision-making. We seek to hold true beliefs and to be effective at accomplishing our goals. Each day, we aim to be less wrong about the world than the day before.

See also our New User's Guide.

Training Rationality

Rationality has a number of definitions[1] on LessWrong, but perhaps the most canonical is that the more rational you are, the more likely your reasoning leads you to have accurate beliefs, and by extension, allows you to make decisions that most effectively advance your goals.

LessWrong contains a lot of content on this topic: how minds work (human, artificial, and theoretically ideal), how to reason better, and how to have productive discussions. We're very big fans of Bayes' Theorem and other theories of normatively correct reasoning[2].
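Bayes' Theorem itself fits in a few lines of code. The sketch below is purely illustrative and uses made-up numbers for a hypothetical diagnostic test (a rare condition, an imperfect test) — none of these figures come from this page — but it shows the core move: updating a prior belief on new evidence.

```python
# A toy illustration of Bayes' Theorem: updating a belief on evidence.
# All numbers are invented for illustration, not taken from the text.

def posterior(prior: float, likelihood: float, false_positive_rate: float) -> float:
    """P(H | E) via Bayes' rule, where E is a positive test result.

    P(H | E) = P(E | H) * P(H) / P(E), with
    P(E) = P(E | H) * P(H) + P(E | not H) * P(not H).
    """
    p_evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / p_evidence

# Prior belief: 1% of people have the condition.
# The test detects it 90% of the time, with a 5% false-positive rate.
p = posterior(prior=0.01, likelihood=0.90, false_positive_rate=0.05)
print(f"P(condition | positive test) = {p:.3f}")  # ≈ 0.154
```

Note the characteristically counterintuitive result: even after a positive test, the posterior is only about 15%, because the low prior dominates.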

To get started improving your Rationality, we recommend reading the background-knowledge text of LessWrong, Rationality: A-Z (aka "The Sequences") or at least selected highlights from it. After that, looking through the Rationality section of the Concepts Portal is a good thing to do.

Applying Rationality

You might value Rationality for its own sake; however, many people want to be better reasoners so they can have more accurate beliefs about topics they care about and make better decisions.

Using LessWrong-style reasoning, contributors have written essays on an immense variety of topics, each time approaching the topic with a desire to know what's actually true (not just what's convenient or pleasant to believe), processing the evidence deliberately, and avoiding common pitfalls of human reasoning.

Check out the Concepts Portal to find essays on topics such as artificial intelligence, history, philosophy of science, language, psychology, biology, morality, culture, self-care, economics, game theory, productivity, art, nutrition, relationships and hundreds of other topics broad and narrow.

LessWrong and Artificial Intelligence

For several reasons, LessWrong is a website and community with a strong interest in AI, and specifically in ensuring that powerful AI systems are safe and beneficial.

  • AI is a field concerned with how minds and intelligence work, overlapping a lot with rationality.
  • Historically, LessWrong was seeded by the writings of Eliezer Yudkowsky, an artificial intelligence researcher.
  • Many members of the LessWrong community are heavily motivated by trying to improve the world as much as possible, and these people were convinced many years ago that AI was a very big deal for the future of humanity. Since then LessWrong has hosted a lot of discussion of AI Alignment/AI Safety, and that's only accelerated recently with further AI capabilities developments.
    • LessWrong is also integrated with the Alignment Forum.
    • The LessWrong team who maintain and develop the site are predominantly motivated by trying to ensure that outcomes from powerful AI are good.

If you want to see more or less AI content, you can adjust your Frontpage Tag Filters according to taste[3].

Getting Started on LessWrong

The New User's Guide is a great place to start.

The core background text of LessWrong is the collection of essays, Rationality: A-Z (aka "The Sequences"). Reading these will help you understand the mindset and philosophy that define the site. Those looking for a quick introduction can start with The Sequences Highlights.

Other top writings include The Codex (writings by Scott Alexander) and Harry Potter and the Methods of Rationality. Also see the Library page for many curated collections of posts, and the Concepts Portal.

Also, feel free to introduce yourself in the monthly open and welcome thread!

Lastly, we do recommend that new contributors (posters or commenters) take time to familiarize themselves with the site's norms and culture to maximize the chances that their contributions are well-received.

Thanks for your interest!

- The LW Team


  1. ^

    Definitions of Rationality as used on LessWrong include:

    - Rationality is thinking in ways that systematically arrive at truth.

    - Rationality is thinking in ways that cause you to systematically achieve your goals.

    - Rationality is trying to do better on purpose.

    - Rationality is reasoning well even in the face of massive uncertainty.

    - Rationality is making good decisions even when it’s hard.

    - Rationality is being self-aware, understanding how your own mind works, and applying this knowledge to thinking better.

  2. ^

    There are in fact laws of thought no less ironclad than the laws of physics [source].

  3. ^

    Hover your mouse over the tags to be able to adjust their weighting in your Latest Posts feed.


I just stumbled upon lesswrong.com while searching for information on Zettelkasten and I must say this site is STUNNING! This is some of the most beautiful typography I've seen, anywhere! The attention to detail is exquisite! I haven't even gotten to your content yet! This will probably remain a permanently open tab in my browser... it's a work of art!

[-]gwern*160

If you're interested in LW2's typography, you should take a look at GreaterWrong, which offers a different and much more old-school non-JS take on LW2, with a number of features like customizable CSS themes. (Available builtin themes include a 'LW1' theme, a 'LW2' theme, and a 'RTS' theme.) There is a second project, Read The Sequences.com (RTS), which focuses on a pure non-interactive typography-heavy presentation of a set of highly-influential LW1 posts. Finally, there's been cross-pollination between LW2/GW/RTS and my own website (description of design).

6Said Achmiz
Thanks to gwern for the mention of GW/RTS! In the interests of giving equal screen time to the (friendly!) ‘competition’, here’s yet another viewer site for Less Wrong—one which takes an even more low-key and minimalist approach: https://lw2.issarice.com/
2Martin Vlach
Shows only blank white page RN. Mind to update/delete it?
2Said Achmiz
It’s not my website, so that question isn’t really for me, sorry.
1Martin Vlach
Oh, good, I've contacted the owner and they responded it was necessary to get their IP address whitelisted by LW operators. That should resolve soon.
1markkrieg
W-o-W!!! Thanks so much for these links!
3Conor
Could you expand on what makes the typography noteworthy? I'm completely unaware of this topic, but curious.
6Demeter
Good question. I will try to explain why the typography is noteworthy, rather than the mechanics of making it so. First, the small sans-serif font here is exceptionally readable. That isn't easy. Site-specific browser magnification is typically necessary on other websites. Next, there is the range of choice offered within the user interface for comments. Having a choice of LaTeX, markdown, rich text (as well as built in features such as footnotes) for posts would be unusual, yet LW offers it for comments as well! Finally, please see gwern's examples for LW2 linked above. I find GreaterWrong challenging to read, and confusing to navigate. Not for me, but maybe for thee! ReadTheSequences uses serif fonts but has traditional typographical elements that give it elegance, yet is still spaced and kerned such that it is easily readable. The more elegant typeface is used sparingly, for important LW1 posts which is part of good typography too. Hope that helps.

Thank you so much. This website is amazing.

Found this site when I was a kid (hi HPMOR) & realized it wasn't all a fever dream when I got onto X a decade later! Really excited to read through posts, learn new things, and hopefully build a thinking-deeply-through-writing habit myself.

3habryka
Welcome! Hope you have a good time here.
1halinaeth
Thank you! So much to explore :))

Hi all! I found my way here through hpmor, and am intrigued and a little overwhelmed by the amount of content. Where do I begin? The sequences? Latest featured posts? Is anything considered out of date at this point?

The Sequences are still the place I would start. If you bounce off of that for any reason, I would start reading the content in the Codex, and then maybe give all the historical curated posts a shot. You might also want to try reading the essays that were voted as the best of 2018.

1aemaeth
I will do just that. Thank you.

Hi there! My name is Abby. I am very new to the world of A.I. 

Thanks for creating a place for me to come and have conversations with people that know much more than me. Because I have been by myself geeking out over Llama 3.1 as someone who started using it very passively to create copy for managing social media. BUT that was not what made me start becoming nearly obsessed with A.I. right now.

I have been working on a non-fiction book. And thought, hmmm let me just see what responses I get from Llama 3.1. My mind was blown. In fact, it was Llama 3.1 wh... (read more)

2Mitchell_Porter
There is a philosophy of "cyborgism" which emphasizes symbiosis...
[-]pom80

Hi, I am new here, I found this website by questioning ChatGPT about places on the internet where it would be possible to discuss and share information in a more civilized way than seems to be customary on the internet. I have read (some of) the suggested material, and some other bits here and there, so I have a general idea of what to expect. My first attempt at writing here was rejected as spam somehow, so I'll try again without making a slightly drawn out joke. So this is the second attempt, first post. Maybe.  

2KvmanThinking
hi :) what was your first attempt at writing? i might be able to tell you why it was rejected

I came across this site by chance thanks to a friend of mine. I'm a bit confused as to where to start? Maybe I will ask my friend again.

6Ruby
Check out the starting guide in the FAQ!
1tslarm
Maybe here: https://www.lesswrong.com/rationality

Oh wow, im glad i found this site in 2022. I was googling about recording every thought i have lol

Can't believe I didnt find this page before. Awesome content and a killer UI/UX - simply love it! Can't wait to explore more.

[-][anonymous]30

Howdy. I notice there is an old welcome page where new members of the community would introduce themselves. But that page appears to have last been posted to a year ago, and the last one before that was three years ago. Also, the comments page appears to be dominated by a discussion over whether a particular member is a troll, or not.  Also, that page is not linked to here. So I gather that page is no longer the place for introductions -- is this right? Is there somewhere else that now serves that function? I'd like to get a sense of the other human beings out there.

4mingyuan
People now introduce themselves in the monthly Open and Welcome threads :)
2Ruby
What mingyuan said!

The last paragraph, small omission, says 'under' should be 'understand'. Sorry.

2Ruby
Fixed! Thank you!

First question is about the "Verification code" that was just sent to my already validated (6 years ago) email address. It might even be urgent? Is there some penalty if I ignore the code now that I'm apparently already logged in? (No mention of "verification" in the FAQ. I know that I did not manually enter the verification code anywhere, but the website somehow decided I was logged in anyway.)

I visited this website at least one time (6 years ago) and left a message. Then I forgot about LW until the book The AI Does Not Hate You reminded me.

My next questi... (read more)

2Ruby
Welcome back! I'm not sure what happened with the verification email, but if you're here, you're here. Regards to dimensions, we've though about this but it's tricky and competes with all the other things we do, but is an entirely fair question. If you find somewhere you think is better, please let us know!
1shanen
Thank you for your reply. I'm pretty sure you meant "thought" rather than something like "been through this [before]". [And later I got detoured into the Chat help and had some trouble recovering to this draft...] As regards your closing, I believe the trite reply is "No fair! I asked you first." ;-) [I recently read The Semiotics of Emoji and would insert a humorous one if it were available.[But in chat it appeared to convert the one I just used. Here?]]  I am considering submitting a new question, either for this question or for your other reply (which might relate to a long comment I wrote on karma (but I can't see the full context from here) or about LW's financial model (in the context of how it influences discussions on LW). With regards to this question, I can already say that LW seems to be solidly implemented and matches the features of any discussion website that I know of. Not the same, but at the high end of matches. I also confirmed the Unicode support. [A test here: 僕の二つの言語は日本語ですよ。] But I have already consumed my morning writing time, so I'll wrap for now and hopefully will be able to figure out the context of your other reply later today. Time allowing (as always).
1shanen
This is just a test reply mostly to see what replies look like. The time-critical question about the Verification code may already be moot?

I came to a dead stop on these words, "We seek to hold true beliefs".  Beliefs are beliefs. If they were true, they would be facts. 

Also, "and to be effective at accomplishing our goals". What rational person doesn't? 

5ryan_b
Facts are independent of beliefs, which is sort of their defining characteristic. But beliefs can be in alignment with the facts, or not; the goal is the former.

None. But there are no such people in the strong sense, yet. This is the ambition of the project.
2bites
After all, facts are just "true" beliefs.

Please start using non-serif fonts for your online articles. They are impossible to read.

4Raemon
note: TAG's solution works for https://www.greaterwrong.com/, an alternate viewing portal for LessWrong, but not for LessWrong.com. That said, I'm curious what devices you're reading it on. (some particular browsers have rendered the font particularly badly for reasons that are hard to anticipate in advance). In any case, sorry you've had a frustrating reading experience – different people prefer different fonts and it's a difficult balancing act.
2TAG
Try the "grey" or "zero" themes, in the top left corner.

I discovered this website recently when trying to do some research on model interpretability. This eventually led to research in AI alignment and the discovery of this site. Excited to be here!

3habryka
Welcome! I hope you have a good time!

I’ve long been searching for conversations not to defend my ideas, but to pursue truth.
Every time I see the community clarify its rules and principles, I’m amazed at how deeply they align with my own values.
And if you ever find any of my ideas to be unfounded, any ideology I seem to cherish - tear it down. It’s worth nothing if it can’t stand.

I have few ideas about improving lesswrong.

  1. Creation of a list of articles read. Sometimes I remember some post and realize that I've forgotten an idea from it, even though I need it. I want to reread it, but usually I can't find it.
  2. You know how there’s a “Watch Later” list on YouTube? It would be great to have something similar on LessWrong for articles.
  3. Also, what about some kind of paid subscription? The absence of one is the reason why Scott Alexander doesn’t write on LessWrong, and why I was considering posting on Substack, even though I appreciate LessW

... (read more)
4dirk
Not sure if you meant being able to save posts for later with #2, but if so you'll likely be pleased to learn that you can bookmark posts using the three-dot menu in the top right corner, after which they'll be available at https://www.lesswrong.com/bookmarks (also linked in the dropdown menu when you hover over your username).
3RobertM
For the first, we have the Read History page.  For the second, there are some recommendations underneath the comments section of each post, but they're not fully general.  For the third - do you mean allowing authors on LessWrong to have paid subscribers?
1Crazy philosopher
For the third- yes, I mean exactly it.
4[anonymous]
Allowing paid subscriptions changes the incentive structure for authors on LW and, as a result, has a high chance of pushing the culture of the site in a wrong direction. I'm also not particularly sure what issue your proposal is meant to solve. The fact that Scott Alexander doesn't post on LW is good, actually. As he has acknowledged, "the rationalist community was really great" and he could meaningfully contribute "new and exciting ideas," break it down into "easily digestible bits," and communicate them. Nowadays, because the lowest-hanging fruits have all been picked long ago, Scott rarely has additional fundamental insights about rationality to expand upon that haven't already become part of the LW zeitgeist. And, additionally, he can focus on politically charged topics in a way his ACX subscribers benefit from, but which (rightfully) doesn't fit too well with LW culture and guidelines. We don't need Scott Alexander posting on LW. Substack is good enough for that.
2TAG
Not in the sense of actually solving epistemology, ethics, consciousness, etc. There's a whole lot of work to be done, but it involves backtracking: admitting you were wrong and trying a different approach, and few are interested in that.

I'm very new here but LessWrong's deep connection to AI is one of its most fascinating aspects. It's incredible to see a community so dedicated to ensuring that powerful AI systems are safe and beneficial. The intersection of rationality, ethics, and cutting-edge technology here is truly unique...

accurate beliefs

I love the word 'accurate' here. My experience and lessons in recent years taught me that general belief like 'love' leads me to nowhere.

4Raemon
In the case of "love" I'm not sure what you mean by "belief", since love is a noun and beliefs are usually about some kind of anticipated experiences. Unless you mean more like you Believing In love? (which I don't think is that helpful to think about through the "accuracy" lens)

New to less wrong. Happy I was led to this by ChatGpt.

I encountered this website when I first heard about Roko's basilisk, and at first I didn't understand what a website named "LessWrong!" had to do with that. As I went through the website, it felt good, as if I were on the search for answers I have been looking for for many years. I hope I become LessWrong! day by day. (This GUI is so relaxing; even for a guy with eye problems, it is so soothing and relaxing...)

Lastly, we do recommend that new contributors (posters or commenters) take time to familiarize themselves with the sites norms and culture to maximize the chances that your contributions are well-received.

While we should be polite, we should not have to submit to a culture in order to produce submissions.  In other words, aligning with "norms and culture" will normally produce bias.  We should not care about how "well-received" something is, rather, we should just be concerned with how right it is : )

4Ruby
I think there are two possibilities:

1. The community norms are orthogonal or opposed to figuring out what's right. In which case it's unclear why you'd want to engage with this community. Perhaps you altruistically want to improve people's beliefs, but if so, disregarding the norms and culture is a good way to be ignored (or banned), since the people bought into the culture think they're important for getting things right, and ignoring them makes your submission less likely to be worth engaging with.

2. The culture and norms in fact successfully get at things which are important for getting things right, and in disregarding them, you're actually much less likely to figure out what's true. People are justified in ignoring and downvoting you if you don't stick to them.

It's also possible that there's more than one set of truth-seeking norms, but that doesn't mean it's easy to communicate across them. So better to say "over here, we operate in X; if you want to participate, please follow X norms." And I think that's legit.

Of course, this is very abstract and it's possible you have examples I'd agree with.

Thank you so much. This website is fabulous!

there is two spaces here when there should be one :-) 

I love it, thanks.

Hi, not sure where to write this but something happened to this post. Curious to read it but it looks like this right now for me:

2Ruby
Sorry about that! Fixed now.

The 'latest welcome thread' link should be updated to target the tag, since somehow that bit of automation didn't get pushed back here.

3Ruby
Good suggestion! Done.

Was looking for some websites similar to academyofideas, turns out there are websites that are pure gems.

I actually prefer audio/video content to listen to while doing other physical things, but this is great. Keep up the good work! There is a lot of content here; it will probably take a lifetime to finish it all.

[-]Mbp0-2

Is there a LessWrong for dummies? How do humans with this level of intelligence engage in typical human relationships. So many less intelligent humans have superior insight based on simplistic common sense often overlooked by over analyzing. I’m a MoreRight mindset over a LessWrong. Another site named WrongPlanet had snippets aligned to earlier theoretical AI and most contributors labeled themselves AS. I love an AS higher intelligence mindset but so much is lacking in the design of AI when significant ‘typical’ contributions are necessary for sustainable... (read more)

All right! I thought I'd give this a whirl. I've had a few words for M. Eliezer S. Yudkowsky on Twitter, or on "X as envisioned by the deathless genius of Elon Musk" I should say. Of course I never got any response to the words but I was never expecting one so that's all right! I believe that my friend Monophylos (or Mono the Unicorn) can say much the same.

Is this place actually active? It looks like it might be, at a trickle; I can't imagine the popularity of this "dark intellectual" stuff has been doing so well lately, especially now that everyone gets t... (read more)

I think we need an actual style guide, and it needs to be prominent, properly maintained, and right here.

If it's not obvious why, and I weakly presume it isn't, it's because linguistic standardization seems like the obvious group-context form of linguistic precision, which seems like an obvious rationality virtue.

Thoughts?

8Ruby
There's something of a style guide for wiki-tagging (see the FAQ). For the site more broadly, I fear that any explicit style guide it would be possible to write would be too prescriptive and narrow. There's a wide variety of styles that are suitable for the site, albeit an even wider variety that isn't. In practice, the best style guide is the great posts already on LessWrong. That's why we encourage new users to read quite a bit before posting. By reading, you get a sense of the LW discourse style.
5ryan_b
Welcome to LessWrong! We find ourselves in a perpetual tug-of-war between a desire for more reliable, higher-quality posts and the ability of people to engage and contribute at all. The trade-off is this:

* The higher the standard, whether of style or rigor, the fewer people will write posts. To our dismay, this includes people who would actually meet the standards but fear beforehand that they would not. Naturally the potential contributions from people below the requirements are lost.
* While a higher standard makes each post more productive to read, it also means that each post is higher-effort to read, which to our dismay often means posts stop being engaged with; we run the risk of churning out a small number of posts which are very high quality but very poorly read.

So striking that balance prevents us from setting much in the way of style standards; we usually prefer to let the community speak, which rewards multiple styles. I myself am on the "write early, write often" side of the fence. The mods may have a more nuanced and up-to-date opinion with respect to meta information like writing guides.

Knowledge with certainty is possible. Knowledge with certainty is justified knowledge. There is a method to arrive at justified knowledge.

It is impossible that truth is impossible. It is impossible that existence is impossible. True + exist is my definition of real. It is impossible that real is impossible. It is impossible that reality is impossible.

Justified knowledge certainty is not only possible, it is necessary, it could not, not exist. It is necessary that we can know the truth about existence, precisely because true and exist are real, and real mea... (read more)
