The road to wisdom? Well, it's plain
and simple to express:

Err
and err
and err again
but less
and less
and less.

– Piet Hein

LessWrong is an online forum and community dedicated to improving human reasoning and decision-making. We seek to hold true beliefs and to be effective at accomplishing our goals. Each day, we aim to be less wrong about the world than the day before.

See also our New User's Guide.

Training Rationality

Rationality has a number of definitions[1] on LessWrong, but perhaps the most canonical is this: the more rational you are, the more likely your reasoning leads you to accurate beliefs and, by extension, to decisions that effectively advance your goals.

LessWrong contains a lot of content on this topic: how minds work (human, artificial, and theoretically ideal), how to reason better, and how to have productive discussions. We're very big fans of Bayes' Theorem and other theories of normatively correct reasoning[2].
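The kind of update Bayes' Theorem describes can be illustrated in a few lines of Python. This is a minimal sketch with made-up numbers chosen purely for illustration; the function name and scenario are not anything from LessWrong itself:

```python
# A single Bayesian update: revise a prior belief in light of evidence.

def bayes_update(prior, likelihood, false_positive_rate):
    """Return P(hypothesis | evidence) via Bayes' Theorem."""
    # P(E) = P(E|H)·P(H) + P(E|¬H)·P(¬H)
    p_evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / p_evidence

# Illustrative example: a test that detects a condition 90% of the time,
# with a 5% false-positive rate, applied where the prior is only 1%.
posterior = bayes_update(prior=0.01, likelihood=0.9, false_positive_rate=0.05)
print(round(posterior, 3))  # → 0.154
```

Note the characteristic result: even a fairly accurate test leaves the posterior well below 50% when the prior is low, which is exactly the sort of counterintuitive conclusion that careful probabilistic reasoning surfaces.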

To get started improving your Rationality, we recommend reading the background-knowledge text of LessWrong, Rationality: A-Z (aka "The Sequences") or at least selected highlights from it. After that, looking through the Rationality section of the Concepts Portal is a good thing to do.

Applying Rationality

You might value Rationality for its own sake; however, many people want to be better reasoners so they can hold more accurate beliefs about topics they care about and make better decisions.

Using LessWrong-style reasoning, contributors have written essays on an immense variety of topics, each time approaching the topic with a desire to know what's actually true (not just what's convenient or pleasant to believe), being deliberate about processing the evidence, and avoiding common pitfalls of human reasoning.

Check out the Concepts Portal to find essays on topics such as artificial intelligence, history, philosophy of science, language, psychology, biology, morality, culture, self-care, economics, game theory, productivity, art, nutrition, relationships and hundreds of other topics broad and narrow.

LessWrong and Artificial Intelligence

For several reasons, LessWrong is a website and community with a strong interest in AI, and specifically in ensuring that powerful AI systems are safe and beneficial.

  • AI is a field concerned with how minds and intelligence work, overlapping a lot with rationality.
  • Historically, LessWrong was seeded by the writings of Eliezer Yudkowsky, an artificial intelligence researcher.
  • Many members of the LessWrong community are heavily motivated by trying to improve the world as much as possible, and these people were convinced many years ago that AI was a very big deal for the future of humanity. Since then LessWrong has hosted a lot of discussion of AI Alignment/AI Safety, and that's only accelerated recently with further AI capabilities developments.
    • LessWrong is also integrated with the Alignment Forum.
    • The LessWrong team who maintain and develop the site are predominantly motivated by trying to ensure that outcomes from powerful AI are good.

If you want to see more or less AI content, you can adjust your Frontpage Tag Filters according to taste[3].

Getting Started on LessWrong

The New User's Guide is a great place to start.

The core background text of LessWrong is the collection of essays Rationality: A-Z (aka "The Sequences"). Reading these will help you understand the mindset and philosophy that define the site. Those looking for a quick introduction can start with The Sequences Highlights.

Other top writings include The Codex (writings by Scott Alexander) and Harry Potter & The Methods of Rationality. Also see the Library Page for many curated collections of posts and the Concepts Portal.

Also, feel free to introduce yourself in the monthly open and welcome thread!

Lastly, we recommend that new contributors (posters or commenters) take time to familiarize themselves with the site's norms and culture to maximize the chances that their contributions are well-received.

Thanks for your interest!

- The LW Team

  1. ^

    Definitions of Rationality as used on LessWrong include:

    - Rationality is thinking in ways that systematically arrive at truth.

    - Rationality is thinking in ways that cause you to systematically achieve your goals.

    - Rationality is trying to do better on purpose.

    - Rationality is reasoning well even in the face of massive uncertainty.

    - Rationality is making good decisions even when it’s hard.

    - Rationality is being self-aware, understanding how your own mind works, and applying this knowledge to thinking better.

  2. ^

    There are in fact laws of thought no less ironclad than the laws of physics [source].

  3. ^

    Hover your mouse over the tags to be able to adjust their weighting in your Latest Posts feed.

51 comments

I just stumbled upon lesswrong.com while searching for information on Zettelkasten and I must say this site is STUNNING! This is some of the most beautiful typography I've seen, anywhere! The attention to detail is exquisite! I haven't even gotten to your content yet! This will probably remain a permanently open tab in my browser... it's a work of art!

If you're interested in LW2's typography, you should take a look at GreaterWrong, which offers a different and much more old-school non-JS take on LW2, with a number of features like customizable CSS themes. (Available built-in themes include a 'LW1' theme, a 'LW2' theme, and an 'RTS' theme.) There is a second project, ReadTheSequences.com (RTS), which focuses on a pure non-interactive typography-heavy presentation of a set of highly influential LW1 posts. Finally, there's been cross-pollination between LW2/GW/RTS and my own website (description of design).

Thanks to gwern for the mention of GW/RTS!

In the interests of giving equal screen time to the (friendly!) ‘competition’, here’s yet another viewer site for Less Wrong—one which takes an even more low-key and minimalist approach:

https://lw2.issarice.com/

It shows only a blank white page right now. Mind updating or deleting it?

It’s not my website, so that question isn’t really for me, sorry.

Oh, good. I've contacted the owner, and they responded that it was necessary to get their IP address whitelisted by the LW operators. That should resolve soon.

W-o-W!!! Thanks so much for these links!

Could you expand on what makes the typography noteworthy? I'm completely unaware of this topic, but curious.

Good question. I will try to explain why the typography is noteworthy, rather than the mechanics of making it so. First, the small sans-serif font here is exceptionally readable. That isn't easy. Site-specific browser magnification is typically necessary on other websites.

Next, there is the range of choice offered within the user interface for comments. Having a choice of LaTeX, Markdown, or rich text (as well as built-in features such as footnotes) for posts would be unusual, yet LW offers it for comments as well!

Finally, please see gwern's examples for LW2 linked above. I find GreaterWrong challenging to read and confusing to navigate. Not for me, but maybe for thee! ReadTheSequences uses serif fonts but has traditional typographical elements that give it elegance, yet it is still spaced and kerned such that it is easily readable. The more elegant typeface is used sparingly, for important LW1 posts, which is part of good typography too. Hope that helps.

Thank you so much. This website is amazing.

Hi all! I found my way here through hpmor, and am intrigued and a little overwhelmed by the amount of content. Where do I begin? The sequences? Latest featured posts? Is anything considered out of date at this point?

The sequences are still the place I would start. If you bounce off of that for any reason, I would start reading the content in the Codex, and then maybe give all the historical curated posts a shot. You might also want to try reading the essays that were voted as the best of 2018.

I will do just that. Thank you.

I came across this site by chance thanks to a friend of mine. I'm a bit confused as to where to start. Maybe I will ask my friend again.

Check out the starting guide in the FAQ!

Oh wow, I'm glad I found this site in 2022. I was googling about recording every thought I have, lol.

I came to a dead stop on these words, "We seek to hold true beliefs".  Beliefs are beliefs. If they were true, they would be facts. 

Also, "and to be effective at accomplishing our goals". What rational person doesn't? 

Facts are independent of beliefs, which is sort of their defining characteristic. But beliefs can be in alignment with the facts, or not; the goal is the former.

What rational person doesn't? 

None. But there are no such people in the strong sense, yet. This is the ambition of the project.

After all, facts are just "true" beliefs.


Howdy. I notice there is an old welcome page where new members of the community would introduce themselves. But that page appears to have last been posted to a year ago, and the last one before that was three years ago. Also, the comments page appears to be dominated by a discussion over whether a particular member is a troll, or not.  Also, that page is not linked to here. So I gather that page is no longer the place for introductions -- is this right? Is there somewhere else that now serves that function? I'd like to get a sense of the other human beings out there.

People now introduce themselves in the monthly Open and Welcome threads :)

What mingyuan said!

The last paragraph, small omission, says 'under' should be 'understand'. Sorry.

Fixed! Thank you!

First question is about the "Verification code" that was just sent to my already validated (6 years ago) email address. It might even be urgent? Is there some penalty if I ignore the code now that I'm apparently already logged in? (No mention of "verification" in the FAQ. I know that I did not manually enter the verification code anywhere, but the website somehow decided I was logged in anyway.)

I visited this website at least one time (6 years ago) and left a message. Then I forgot about LW until the book The AI Does Not Hate You reminded me.

My next question is about a better website, but perhaps the premises of my question are false. If so, then I hope someone will enlighten me. I think I know what I am looking for, and this does not seem to be it (even though I do like "the feel" of the website). I think this website has a one-dimensional rating system for karma (along the lines of Slashdot?), but I think reality is more complicated, and I am looking for a thoughtful discussion website with a deeper representation of reality and more dimensions.

I could describe what I am seeking in much more detail, but for my first comment in a long time, and basically a practice post, I think I should just Submit now (and look around some more). This welcome-to-lesswrong seems to be a "Hello, World" starting place. So "Hello, world". See ya around?

Welcome back! I'm not sure what happened with the verification email, but if you're here, you're here.

Regards to dimensions, we've though about this but it's tricky and competes with all the other things we do, but is an entirely fair question. If you find somewhere you think is better, please let us know!

Thank you for your reply. I'm pretty sure you meant "thought" rather than something like "been through this [before]". [And later I got detoured into the Chat help and had some trouble recovering to this draft...]

As regards your closing, I believe the trite reply is "No fair! I asked you first." ;-) [I recently read The Semiotics of Emoji and would insert a humorous one if it were available.[But in chat it appeared to convert the one I just used. Here?]] 

I am considering submitting a new question, either for this question or for your other reply (which might relate to a long comment I wrote on karma, though I can't see the full context from here, or to LW's financial model in the context of how it influences discussions on LW).

With regards to this question, I can already say that LW seems to be solidly implemented and matches the features of any discussion website that I know of. Not the same, but at the high end of matches. I also confirmed the Unicode support. [A test here: 僕の二つの言語は日本語ですよ。]

But I have already consumed my morning writing time, so I'll wrap for now and hopefully will be able to figure out the context of your other reply later today. Time allowing (as always).

This is just a test reply mostly to see what replies look like. The time-critical question about the Verification code may already be moot?

Please start using non-serif fonts for your online articles. They are impossible to read.

note: TAG's solution works for https://www.greaterwrong.com/, an alternate viewing portal for LessWrong, but not for LessWrong.com.

That said, I'm curious what devices you're reading it on. (some particular browsers have rendered the font particularly badly for reasons that are hard to anticipate in advance). In any case, sorry you've had a frustrating reading experience – different people prefer different fonts and it's a difficult balancing act.

Try the "grey" or "zero" themes, in the top left corner.

there is two spaces here when there should be one :-) 

Is there a LessWrong for dummies? How do humans with this level of intelligence engage in typical human relationships. So many less intelligent humans have superior insight based on simplistic common sense often overlooked by over analyzing. I’m a MoreRight mindset over a LessWrong. Another site named WrongPlanet had snippets aligned to earlier theoretical AI and most contributors labeled themselves AS. I love an AS higher intelligence mindset but so much is lacking in the design of AI when significant ‘typical’ contributions are necessary for sustainable design to integrate in typical human life. AI, if taken to a next level of basic old brain underlying the high functioning new brain 🧠 and designed to replicate personality and physical traits would be a goal.

I love it, thanks.

Hi, not sure where to write this but something happened to this post. Curious to read it but it looks like this right now for me:

Sorry about that! Fixed now.

The 'latest welcome thread' link should be updated to target the tag, since somehow that bit of automation didn't get pushed back here.

Good suggestion! Done.

Was looking for some websites similar to academyofideas, turns out there are websites that are pure gems.

I actually prefer audio/video content to listen to while doing other physical things, but this is great, guys, keep up the good work. There is a lot of content here; it will probably take a lifetime to finish it all.

I think we need an actual style guide, and it needs to be prominent, properly maintained, and right here.

If it's not obvious why, and I weakly presume it isn't, it's because linguistic standardization seems like the obvious group-context form of linguistic precision, which seems like an obvious rationality virtue.

Thoughts?

There's something of a style guide for wiki-tagging (see the FAQ).

For the site more broadly, I fear that any explicit style guide it would be possible to write would be too prescriptive and narrow. There's a wide variety of styles that are suitable for the site, albeit an even wider variety that isn't.

In practice, the best style guide is the set of great posts already on LessWrong. That's why we encourage new users to read quite a bit before posting. By reading, you get a sense of the LW discourse style.

Welcome to LessWrong!

We find ourselves in a perpetual tug-of-war between a desire for more reliable, higher quality posts and the ability of people to engage and contribute at all. The trade-off is this:

  • The higher the standard, whether style or rigor, the fewer people will write posts. To our dismay, this includes people who would actually meet the standards but fear that they would not beforehand. Naturally the potential contributions from people below the requirements are lost.
  • While this makes each post more productive to read, it also means that each post is higher-effort to read, which to our dismay often means posts stop being engaged with; we run the risk of churning out a small amount of posts which are very high quality but very poorly read.

So striking that balance prevents us from setting much in the way of style standards; we usually prefer to let the community speak, which rewards multiple styles. I myself am on the write-early, write-often side of the fence.

The mods may have a more nuanced and up-to-date opinion with respect to meta information like writing guides.

Knowledge with certainty is possible. Knowledge with certainty is justified knowledge. There is a method to arrive at justified knowledge.

It is impossible that truth is impossible. It is impossible that existence is impossible. True + exist is my definition of real. It is impossible that real is impossible. It is impossible that reality is impossible.

Justified knowledge certainty is not only possible, it is necessary, it could not, not exist. It is necessary that we can know the truth about existence, precisely because true and exist are real, and real means true and exist are ultimately simultaneous = everywhere all-at-once, even if only some of us know that with certainty. If someone does not know this, that is what ignorance is.

LessWrong is an excellent platform for writers who think deeply about truth.

I have great respect for what the LessWrong platform is all about, but I believe it would be instructive to deconstruct the choice of name for the platform.

The reveal for my decision to deconstruct the name LessWrong is that the name is itself necessarily a fatal logical contradiction, i.e., infinite regress. Fatal logical infinite regress is certainly not a ground for truth, nor a ground for certainty, nor a ground for justified knowledge.

Less is a degree of the category wrong. Less-wrong-to-infinity is still wrong, therefore, infinite regress.

Incremental gains of knowledge are normal and necessary, but always wrong is certainly not.

In fact, we do not go from wrong to less wrong, we go from knowledge to more knowledge, and as necessary, change our minds about what we know, based upon new information.

Justified knowledge can only be grounded in a set of natural a-priori axioms that are not the result of any empirical observation; you either see them or you do not. Nor are they subject to any kind of proof, e.g., some imagined empirical test, or mathematical proof, nor are they disprovable. All further discourse about existence and truth depends upon a set of natural a-priori axioms.

See my Substack posts for an expanded discussion: 

Less Wrong platform and author Yudkowsky, on Rationality and Justified Knowledge Certainty

https://allink.substack.com/p/justified-knowledge-certainty-and-f4b

Natural a-priori Axioms

https://allink.substack.com/p/justified-knowledge-certainty-and
