I just stumbled upon lesswrong.com while searching for information on Zettelkasten and I must say this site is STUNNING! This is some of the most beautiful typography I've seen, anywhere! The attention to detail is exquisite! I haven't even gotten to your content yet! This will probably remain a permanently open tab in my browser... it's a work of art!
If you're interested in LW2's typography, you should take a look at GreaterWrong, which offers a different and much more old-school non-JS take on LW2, with a number of features like customizable CSS themes. (Available built-in themes include a 'LW1' theme, a 'LW2' theme, and a 'RTS' theme.) There is a second project, ReadTheSequences.com (RTS), which focuses on a pure, non-interactive, typography-heavy presentation of a set of highly influential LW1 posts. Finally, there's been cross-pollination between LW2/GW/RTS and my own website (description of design).
Found this site when I was a kid (hi HPMOR) & realized it wasn't all a fever dream when I got onto X a decade later! Really excited to read through posts, learn new things, and hopefully build a thinking-deeply-through-writing habit myself.
Hi all! I found my way here through hpmor, and am intrigued and a little overwhelmed by the amount of content. Where do I begin? The sequences? Latest featured posts? Is anything considered out of date at this point?
The sequences are still the place I would start. If you bounce off of them for any reason, I would start reading the content in the Codex, and then maybe give all the historical curated posts a shot. You might also want to try reading the essays that were voted as the best of 2018.
Hi there! My name is Abby. I am very new to the world of A.I.
Thanks for creating a place for me to come and have conversations with people who know much more than me. I have been geeking out by myself over Llama 3.1, which I started using very passively to create copy for managing social media. BUT that was not what made me become nearly obsessed with A.I. right now.
I have been working on a non-fiction book. And thought, hmmm let me just see what responses I get from Llama 3.1. My mind was blown. In fact, it was Llama 3.1 wh...
Hi, I am new here, I found this website by questioning ChatGPT about places on the internet where it would be possible to discuss and share information in a more civilized way than seems to be customary on the internet. I have read (some of) the suggested material, and some other bits here and there, so I have a general idea of what to expect. My first attempt at writing here was rejected as spam somehow, so I'll try again without making a slightly drawn out joke. So this is the second attempt, first post. Maybe.
I came across this site by chance thanks to a friend of mine. I'm a bit confused as to where to start? Maybe I will ask my friend again.
Oh wow, I'm glad I found this site in 2022. I was googling about recording every thought I have, lol
Can't believe I didn't find this page before. Awesome content and a killer UI/UX - simply love it! Can't wait to explore more.
Howdy. I notice there is an old welcome page where new members of the community would introduce themselves. But that page appears to have last been posted to a year ago, and the last one before that was three years ago. Also, the comments page appears to be dominated by a discussion over whether a particular member is a troll, or not. Also, that page is not linked to here. So I gather that page is no longer the place for introductions -- is this right? Is there somewhere else that now serves that function? I'd like to get a sense of the other human beings out there.
First question is about the "Verification code" that was just sent to my already validated (6 years ago) email address. It might even be urgent? Is there some penalty if I ignore the code now that I'm apparently already logged in? (No mention of "verification" in the FAQ. I know that I did not manually enter the verification code anywhere, but the website somehow decided I was logged in anyway.)
I visited this website at least one time (6 years ago) and left a message. Then I forgot about LW until the book The AI Does Not Hate You reminded me.
My next questi...
I came to a dead stop on these words, "We seek to hold true beliefs". Beliefs are beliefs. If they were true, they would be facts.
Also, "and to be effective at accomplishing our goals". What rational person doesn't?
I discovered this website recently when trying to do some research on model interpretability. This eventually led to research in AI alignment and the discovery of this site. Excited to be here!
I’ve long been searching for conversations not to defend my ideas, but to pursue truth.
Every time I see the community clarify its rules and principles, I’m amazed at how deeply they align with my own values.
And if you ever find any of my ideas to be unfounded, any ideology I seem to cherish - tear it down. It’s worth nothing if it can’t stand.
I have a few ideas about improving LessWrong.
Also, what about some kind of paid subscription? The absence of one is the reason why Scott Alexander doesn’t write on LessWrong, and why I was considering posting on Substack, even though I appreciate LessW
I'm very new here but LessWrong's deep connection to AI is one of its most fascinating aspects. It's incredible to see a community so dedicated to ensuring that powerful AI systems are safe and beneficial. The intersection of rationality, ethics, and cutting-edge technology here is truly unique...
accurate beliefs
I love the word 'accurate' here. My experience and lessons in recent years taught me that a general belief like 'love' leads me nowhere.
I encountered this website when I first heard about Roko's basilisk, and at first I didn't understand what a website named "LessWrong" had to do with anything of that kind. As I went through the website, it felt good, as if I were on the search for answers I have been looking for for many years. Hope I become less wrong day by day. (This GUI is so relaxing; even for a guy with eye problems, this is so soothing and relaxing...)
Lastly, we do recommend that new contributors (posters or commenters) take time to familiarize themselves with the site's norms and culture to maximize the chances that your contributions are well-received.
While we should be polite, we should not have to submit to a culture in order to produce submissions. In other words, aligning with "norms and culture" will normally produce bias. We should not care about how "well-received" something is; rather, we should just be concerned with how right it is : )
Hi, not sure where to write this but something happened to this post. Curious to read it but it looks like this right now for me:
The 'latest welcome thread' link should be updated to target the tag, since somehow that bit of automation didn't get pushed back here.
Was looking for some websites similar to academyofideas, turns out there are websites that are pure gems.
I actually prefer audio/video content to listen to while doing other physical things, but this is great, guys, keep up the good work. There is a lot of content here; it will probably take a lifetime to finish it all.
Is there a LessWrong for dummies? How do humans with this level of intelligence engage in typical human relationships. So many less intelligent humans have superior insight based on simplistic common sense often overlooked by over analyzing. I’m a MoreRight mindset over a LessWrong. Another site named WrongPlanet had snippets aligned to earlier theoretical AI and most contributors labeled themselves AS. I love an AS higher intelligence mindset but so much is lacking in the design of AI when significant ‘typical’ contributions are necessary for sustainable...
All right! I thought I'd give this a whirl. I've had a few words for M. Eliezer S. Yudkowsky on Twitter, or on "X as envisioned by the deathless genius of Elon Musk" I should say. Of course I never got any response to the words but I was never expecting one so that's all right! I believe that my friend Monophylos (or Mono the Unicorn) can say much the same.
Is this place actually active? It looks like it might be, at a trickle; I can't imagine the popularity of this "dark intellectual" stuff has been doing so well lately, especially now that everyone gets t...
I think we need an actual style guide, and it needs to be prominent, properly maintained, and right here.
If it's not obvious why, and I weakly presume it isn't, it's because linguistic standardization seems like the obvious group-context form of linguistic precision, which seems like an obvious rationality virtue.
Thoughts?
Knowledge with certainty is possible. Knowledge with certainty is justified knowledge. There is a method to arrive at justified knowledge.
It is impossible that truth is impossible. It is impossible that existence is impossible. True + exist is my definition of real. It is impossible that real is impossible. It is impossible that reality is impossible.
Justified knowledge certainty is not only possible, it is necessary, it could not, not exist. It is necessary that we can know the truth about existence, precisely because true and exist are real, and real mea...
The road to wisdom? Well, it's plain
and simple to express:
Err
and err
and err again
but less
and less
and less.
– Piet Hein
LessWrong is an online forum and community dedicated to improving human reasoning and decision-making. We seek to hold true beliefs and to be effective at accomplishing our goals. Each day, we aim to be less wrong about the world than the day before.
See also our New User's Guide.
Training Rationality
Rationality has a number of definitions[1] on LessWrong, but perhaps the most canonical is that the more rational you are, the more likely your reasoning leads you to have accurate beliefs, and by extension, allows you to make decisions that most effectively advance your goals.
LessWrong contains a lot of content on these topics: how minds work (human, artificial, and theoretically ideal), how to reason better, and how to have productive discussions. We're very big fans of Bayes' Theorem and other theories of normatively correct reasoning[2].
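As a quick illustration of the kind of updating Bayes' Theorem prescribes, here is a minimal sketch. The numbers are hypothetical, chosen only to show how a strong test result combined with a low prior still yields a modest posterior:

```python
def bayes_update(prior, likelihood, likelihood_alt):
    """Return P(H | E) given P(H), P(E | H), and P(E | not-H)."""
    evidence = likelihood * prior + likelihood_alt * (1 - prior)
    return likelihood * prior / evidence

# Hypothetical numbers: a condition with 1% base rate, and a test that is
# 90% sensitive with a 5% false-positive rate.
posterior = bayes_update(prior=0.01, likelihood=0.90, likelihood_alt=0.05)
print(round(posterior, 3))  # → 0.154
```

Despite the "90% accurate" test, the posterior is only about 15%, because positives from the large unaffected population outnumber true positives; this is exactly the base-rate neglect that Bayesian reasoning corrects.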
To get started improving your Rationality, we recommend reading the background-knowledge text of LessWrong, Rationality: A-Z (aka "The Sequences") or at least selected highlights from it. After that, looking through the Rationality section of the Concepts Portal is a good thing to do.
Applying Rationality
You might value rationality for its own sake; however, many people want to be better reasoners so they can have more accurate beliefs about topics they care about, and make better decisions.
Using LessWrong-style reasoning, contributors have written essays on an immense variety of topics, each time approaching the topic with a desire to know what's actually true (not just what's convenient or pleasant to believe), being deliberate about processing the evidence, and avoiding common pitfalls of human reasoning.
Check out the Concepts Portal to find essays on topics such as artificial intelligence, history, philosophy of science, language, psychology, biology, morality, culture, self-care, economics, game theory, productivity, art, nutrition, relationships and hundreds of other topics broad and narrow.
LessWrong and Artificial Intelligence
For several reasons, LessWrong is a website and community with a strong interest in AI, and specifically in making powerful AI systems safe and beneficial.
If you want to see more or less AI content, you can adjust your Frontpage Tag Filters according to taste[3].
Getting Started on LessWrong
The New User's Guide is a great place to start.
The core background text of LessWrong is the collection of essays, Rationality: A-Z (aka "The Sequences"). Reading these will help you understand the mindset and philosophy that defines the site. Those looking for a quick introduction can start with The Sequences Highlights.
Other top writings include The Codex (writings by Scott Alexander) and Harry Potter & The Methods of Rationality. Also see the Library Page for many curated collections of posts and the Concepts Portal.
Also, feel free to introduce yourself in the monthly open and welcome thread!
Lastly, we do recommend that new contributors (posters or commenters) take time to familiarize themselves with the site's norms and culture to maximize the chances that your contributions are well-received.
Thanks for your interest!
- The LW Team
Related Pages
Definitions of Rationality as used on LessWrong include:
- Rationality is thinking in ways that systematically arrive at truth.
- Rationality is thinking in ways that cause you to systematically achieve your goals.
- Rationality is trying to do better on purpose.
- Rationality is reasoning well even in the face of massive uncertainty.
- Rationality is making good decisions even when it’s hard.
- Rationality is being self-aware, understanding how your own mind works, and applying this knowledge to thinking better.
There are in fact laws of thought no less ironclad than the laws of physics [source].
Hover your mouse over the tags to be able to adjust their weighting in your Latest Posts feed.