Hello everyone!
I am new here and thought I should introduce myself. I am currently reading the highlights of the Sequences, and it has been giving me a sharper worldview; I do feel myself becoming more rational. I think a lot of people who call themselves rational are motivated by biases and emotions more than they realize, but it is good to be aware of that and try to work to be better, so I am doing that.
I am 17 years old and from Iraq. I found the forum through Daniel Schmachtenberger; I am not sure how well known he is here.
I am from a very Muslim country, and I was brainwashed by it growing up like most people. At 11 I started questioning and reading books as well, which was very hard, since the fear of "hell" is imprinted in anyone growing up in this environment, but by 14 I had broken free. As a result I had a three-month existential crisis in which I felt like I didn't exist and was in anxiety 24/7.
At that point I got interested in the New Age movement, Eastern religions, and spirituality, especially Buddhism and certain strands of Hinduism. I wasn't interested in taking them as dogmas or as absolute views. I also got into Western philosophy later, especially the Idealism vs. Realism...
Hi! I joined LW in order to post a research paper that I wrote over the summer, but I figured I'd post here first to describe a bit of the journey that led to this paper.
I got into rationality around 14 years ago when I read a blog called "You Are Not So Smart", which pushed me to audit potential biases in myself and others, and to try to understand ideas/systems end-to-end without handwaving.
I studied computer science at university, partially because I liked the idea that with enough time I could understand any code (unlike essays, where investigating bibliographies for the sources of claims might lead to dead ends), and also because software pays well. I specialized in machine learning because I thought that algorithms that could make accurate predictions based on patterns in the world that were too complex for people to hardcode were cool. I had this sense that somewhere, someone must understand the "first principles" behind how to choose a neural network architecture, or that there was some way of reverse-engineering what deep learning models learned. Later I realized that there weren't really first principles regarding optimizing training, and that spending time trying to har...
Let me introduce myself. I come from Japan. As far as I know, there is no community like this in Japan, so I feel very grateful to be blessed enough to be a part of one. I would like to introduce my research work, which has been published in the Japanese AI community, in an effort to contribute to the rationality of the world. My official research history can be seen in the research map (in English and in Japanese) on my profile. Since I am not a native speaker of English, I would be glad if you could make allowances for any points that come across as unclear.
This community contributes to the rationality of humanity, which is beneficial for the entire human race.
I would like to contribute to this rationality through my research work, as international cooperation beyond race, gender, language, etc., for a better world. However, Japanese is a language very distant from English.
I will publish a research paper for the AAAI-25 workshop W18, Post-Singularity Symbiosis, on March 3rd in Pennsylvania, USA, but the vast majority of my research papers are written in Japanese. So I will introduce my Japanese research, with AI translating it into multiple languages.
Hello everyone!
My name is José. I am 23 years old, Brazilian, and finishing (in July) a weird interdisciplinary undergraduate degree at the University of Sao Paulo (2 years of math, physics, computer science, chem, and bio + 2 years of doing whatever you want - I did things like optimization, measure theory, decision theory, advanced probability, Bayesian inference, algorithms, etc.).
I've been reading AIS material on LW for a while now, and have taken some steps to change my career toward AIS. I met EA/AIS in 2022 via Condor Camp, a camp focused on AIS for Brazilian students, and since then I have participated in a bunch of those camps, created a uni group, done ML4Good, and attended a bunch of EAGs/EAGxs.
I recently started an Agent Foundations fellowship run by Alex Altair and am writing a post about the Internal Model Principle. I expect to release it soon!
Hope you all enjoy it!
I'm planning to run the unofficial LessWrong Community Census again this year. There's a post with a link to the draft and a quick overview of what I'm aiming for here, and I'd appreciate comments and feedback. In particular, if you
then I want to hear from you. I care a lot about rationality skills but don't know how to evaluate them in this format, though I have some clever ideas if only I had a signal I could sift out of the survey. I don't care about politics, but lots of people do, and I don't want to spoil their fun.
You can also propose other questions! I like playing with survey data :)
I found the site a few months ago via a link from an AI-themed forum. I read the Sequences and developed the belief that this was a place for people who think in ways similar to me. I work as a nuclear engineer. When I entered the workforce, I was surprised to find that there weren't people as disposed toward logic as I was. I thought perhaps there wasn't really a community of similar people, and I had largely stopped looking.
This seems like a good place for me to learn, for the time being. Whether or not this is a place for me to develop community remains to be seen. The format seems to promote people presenting well-formed ideas. This seems valuable, but I am also interested in finding a space to explore ideas which are not well-formed. It isn’t clear to me that this is intended to be such a space. This may simply be due to my ignorance of the mechanics around here. That said, this thread seems to be inviting poorly formed ideas and I aim to oblige.
There seem to be some writings around here which speak of instrumental rationality, or “Rationality Is Systematized Winning”. However, this seems to beg...
My current estimate of P(doom) in the next 15 years is 5%. That is, high enough to be concerned, but not high enough to cash out my retirement. I am curious about anyone harboring a P(doom) > 50%. This would seem to be high enough to support drastic actions. What work has been done to develop rational approaches to such a high P(doom)?
I mean, what do you think we've been doing all along?
I'm at like 90% in 20 years, but I'm not claiming even one significant digit on that figure. My drastic actions have been to get depressed enough to be unwilling to work in a job as stressful as my last one. I don't want to be that miserable if we've only got a few years left. I don't think I'm being sufficiently rational about it, no. It would be more dignified to make lots of money and donate it to the organization with the best chance of stopping or at least delaying our impending doom. I couldn't tell you which one that is at the moment though.
Some are starting to take more drastic actions. Whether those actions will be effective remains to be seen.
In my view, technical alignment is not keeping up with capabilities advancement. We have no alignment tech robust enough to even possibly s...
I don't know exactly when this was implemented, but I like how footnotes appear to the side of posts.
I think there is a 10-20 percent chance we get digital agents in 2025 that produce a holy-shit moment as big as the launch of ChatGPT.
If that happens, I think it will produce another round of questions that sound approximately like "how were we so unprepared for this moment?"
Fool me once, shame on you…
Hello! I've just found out about LessWrong and I immediately feel at home. I feel this is what I was looking for on medium.com and never found there: a website for learning about things, about improving oneself, and about thinking better. Medium proved to be very useful for reading about how people made five figures using AI to write articles for them, but not so useful at providing genuinely valuable information.
One thing I usually say about myself is that I have "learning" as a hobby. I have only very recently given a name to things and now I know that it's ADHD I can thank for my endless consumption of information about seemingly unrelated topics. I try (good thing PKMs exist!) to give shape to my thoughts and form them into something cohesive, but this tends to be a struggle.
If anyone has ideas on how to "review" what already sits in your mind to create new connections between ideas and strengthen thoughts, they'd be more than welcome.
Site update: the menu bar is shorter!
Previously I found it overwhelming when I opened it, and many of the buttons were getting extremely little use. It now looks like this.
If you're one of the few people who used the other buttons, here's where you can find them:
Hello Everyone!
I am a Brazilian AI/ML engineer and data scientist. I have been following the rationalist community for around 10 years now, originally as a fan of Scott Alexander's Slate Star Codex, where I came to know of Eliezer and LessWrong as a community, along with the rationalist enterprise.
I only recently created my account and started posting here. Currently, I'm experiencing a profound sense of urgency regarding the technical potential of AI and its impact on the world. With seven years of experience in machine learning, I've witnessed how the stable and scalable use of data can be crucial in building trustworthy governance systems. I'm passionate about contributing to initiatives that ensure these advancements yield positive social outcomes, particularly for disadvantaged communities. I believe that rationality can open paths to peace, as war often stems from irrationality.
I feel privileged to participate in the deep and consequential discussions on this platform, and I look forward to exchanging ideas and insights with all the brilliant writers and thinkers who regularly contribute here.
Thank you all!
Does someone have a guesstimate of the ratio of lurkers to posters on LessWrong? With 'lurker' defined as someone who has a habit of reading content but never posts anything (or posts only clarification questions).
In other words, what is the size of the LessWrong community relative to the number of active contributors?
You could check out the LessWrong analytics dashboard: https://app.hex.tech/dac32525-33e6-44f9-bbcf-65a0ba40152a/app/9742e086-54ca-4dd9-86c9-25fc53f90f80/latest
In any given week there are around 40k unique logged-out users, ~4k unique logged-in users, and ~400 unique commenters (with ~1-2k comments). So the ratio of lurkers to commenters is about 100:1, though it is more like 20:1 if you compare people who visit regularly against people who comment.
I am a university dropout who wants to make an impact in the AI safety field. I am a complete amateur, just starting out, but I want to learn as much as possible in order to make an impact. I studied software engineering for a semester and a half before realizing that there was a need for more people in the AI safety field, and that's where I want to give all my attention. If you are interested in connecting, DM me; if you have any advice for a newcomer, post a comment below. I am located in Hønefoss, Norway.
If spaced repetition is the most efficient way of remembering information, why do people who learn a musical instrument practice every day instead of adhering to a spaced repetition schedule?
Spaced repetition is the most efficient way in terms of time spent per item. That doesn't make it the most efficient way to achieve a competitive goal. For this reason, SRS systems often include a 'cramming mode', where review efficiency is ignored in favor of maximizing memorization probability within X hours. And as far as musicians go - orchestras don't select musicians based on who spent the fewest total hours practicing but still manage to sound mostly-kinda-OK, they select based on who sounds the best; and if you sold your soul to the Devil or spent 16 hours a day practicing for the last 30 years to sound the best, then so be it. If you don't want to do it, someone else will.
That said, the spaced repetition research literature on things like sports does suggest you still want to do a limited form of spacing in the form of blocking or rotating regularly between each kind of practice/activity.
Declarative and procedural knowledge are two different memory systems. Spaced repetition is good for declarative knowledge, but for procedural (like playing music) you need lots of practice. Other examples include math and programming - you can learn lots of declarative knowledge about the concepts involved, but you still need to practice solving problems or writing code.
Edit: as for why practice every day - the procedural system requires a lot more practice than the declarative system does.
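Not an answer to the music question, but for anyone curious what "efficient in time spent per item" looks like mechanically on the declarative side, here is a toy sketch of an expanding-interval schedule. All the numbers are made up; real SRS algorithms such as SM-2 adjust the growth factor per item based on how well you recall it.

```c
#include <stdio.h>

/* Toy illustration only: one declarative item whose review interval roughly
 * doubles after each successful recall and resets after a lapse. */
int main(void) {
    double interval_days = 1.0;                 /* first review one day after learning */
    const double ease = 2.0;                    /* hypothetical growth factor */
    const int recalled[] = {1, 1, 1, 0, 1, 1};  /* 1 = remembered, 0 = lapsed */
    double day = 0.0;

    for (int i = 0; i < 6; i++) {
        day += interval_days;
        if (recalled[i])
            interval_days *= ease;   /* success: push the next review further out */
        else
            interval_days = 1.0;     /* lapse: start the item over */
        printf("review %d on day %.0f, next due in %.0f days\n",
               i + 1, day, interval_days);
    }
    return 0;
}
```

The point is just that a remembered fact costs a handful of reviews spread over weeks or months, which is nothing like the daily volume of repetitions a procedural skill such as playing an instrument demands.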
Re: the new style (archive for comparison)
Not a fan of
1. the font weight: everything seems semi-bolded now and a little bit more blurred than before. I do not see myself getting used to this.
2. the unboxed karma/agreement vote. It is fine per se, but the old one is also perfectly fine.
Edit: I have to say that the font on Windows is actively slightly painful and I need to reduce the time spent reading comments or quick takes.
Once upon a time, there were Rationality Quotes threads, but they haven't been done for years. I'm curious if there's enough new, quotable things that have been written since the last one to bring back the quote posts. If you've got any good lines, please come share them :) If there's a lot of uptake, maybe they could be a regular thing again.
Hi! My name is Clovis. I'm a PhD student studying distributed AI. In my spare time, I work on social science projects.
One of my big interests is mathematically modelling dating and relationship dynamics. I study how well people's stated and revealed preferences align. I'd love to chat about experimental design and behavioral modeling! There are a couple of ideas around empirically differentiating models of people's preferences that I'd love to vet in particular. I've only really read the Sequences though, and I know that there's a lot of prior discussion ...
Hi everyone,
I have been a lurker for a considerable amount of time but have finally gotten around to making an account.
By trade I am a software engineer, primarily interested in PL, type systems, and formal verification.
I am currently attempting to strengthen my historical knowledge of pre-fascist regimes, with a focus on 1920s/30s Germany & Italy. I would greatly appreciate either specific book recommendations or reading lists for this topic - while I approach this topic from a distinctly "not a fascist" viewpoint, I am interested in books from both side...
I've been lurking for years. I'm a lifelong rationalist who was hesitant to join because I didn't like HPMOR. (Didn't have a problem with the methods of rationality; I just didn't like how the characters' personalities changed, and I didn't find them relatable anymore.) I finally signed up due to an irrepressible urge to upvote a particular comment I really liked.
I struggle with LW content, tbh. It takes so long to translate it into something readable, something that isn't too littered with jargon and self-reference to be understandable for a generalist wi...
Possible bug report: today I've been seeing errors of the form
Error: Cannot query field "givingSeason2024VotedFlair" on type "User". Did you mean "givingSeason2024DonatedFlair"?
that tend to go away when the page is refreshed. I don't remember if all errors said this same thing.
Hello.
I have been adjacent to but not participating in rationality related websites and topics since at least Middle School age (homeschooled and with internet) and had a strong interest in science and science fiction long before that. Relevant pre-Less Wrong readings probably include old StarDestroyer.Net essays and rounds of New Atheism that I think were age and time appropriate. I am a very long term reader of Scott Alexander and have read at least extensive chunks of the Sequences in the past.
A number of factors are encouraging me to become more active...
Hey, everyone! Pretty new here and first time posting.
I have some questions regarding two odd scenarios. Let's assume there is no AI takeover to the Yudkowsky-nth degree and that AGI and ASI go just fine. (Yes, that's already a very big ask.)
Scenario 1: Hyper-Realistic Humanoid Robots
Let's say AGI helps us get technology that allows for the creation of humanoid robots that are visually indistinguishable from real humans. While the human form is suboptimal for a lot of tasks, I'd imagine that people still want them for a number of reasons. If there's ...
Is there an explanation somewhere how the recommendations algorithm on the homepage works, i.e. how recency and karma or whatever are combined?
I've noticed that the karma system makes me gravitate towards posts of very high karma. Are there low-karma posts that impacted you? Maybe you think they are underrated or that they fail in interesting ways.
I'm really interested in AI and want to build something amazing, so I'm always looking to expand my imagination! Sure, research papers are full of ideas, but I feel like insights into more universal knowledge spark a different kind of creativity. I found LessWrong through LLM-related topics, but the posts here give me the joy of exploring a much broader world!
I’m deeply interested in the good and bad of AI. While aligning AI with human values is important, alignment can be defined in many ways. I have a bit of a goal to build up my thoughts on what’s right or wrong, what’s possible or impossible, and write about them.
Are there any mainstream programming languages that make it ergonomic to write high level numerical code that doesn't allocate once the serious calculation starts? So far for this task C is by far the best option but it's very manual, and Julia tries and does pretty well but you have to constantly make sure that the compiler successfully optimized away the allocations that you think it optimized away. (Obviously Fortran is also very good for this, but ugh)
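For concreteness, here is a minimal sketch of that pattern in plain C; every name, size, and the smoothing step itself is made up purely for illustration. The idea is just: allocate all buffers up front, then keep the hot loop allocation-free.

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    const size_t n = 1000;
    /* allocate everything before the serious calculation starts */
    double *state   = malloc(n * sizeof *state);
    double *scratch = malloc(n * sizeof *scratch);
    if (!state || !scratch) return 1;

    for (size_t i = 0; i < n; i++)
        state[i] = (double)i;

    /* hot loop: no allocation from here on, only writes into preallocated memory */
    for (int step = 0; step < 100; step++) {
        for (size_t i = 0; i + 1 < n; i++)
            scratch[i] = 0.5 * (state[i] + state[i + 1]);
        scratch[n - 1] = state[n - 1];
        for (size_t i = 0; i < n; i++)
            state[i] = scratch[i];
    }

    printf("%f\n", state[0]);
    free(state);
    free(scratch);
    return 0;
}
```

As I understand it, the Julia analogue is writing in-place functions over preallocated arrays and then checking with `@time` or `@allocated` that the inner loop really reports zero bytes allocated - which is exactly the "constantly make sure the compiler optimized it away" step the question complains about.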
What happens if and when a slightly unaligned AGI crowds the forum with its own posts? I mean, how strong is our "are you human?" protection?
Hi! New to the forums and excited to keep reading.
Bit of a meta-question: given proliferation of LLM-powered bots in social media like twitter etc, do the LW mods/team have any concerns about AI-generated content becoming an issue here in a more targeted way?
...For a more benign example, say one wanted to create multiple "personas" here to test how others react. They could create three accounts, and respond to posts always with all three accounts - one with a "disagreeable" persona, one neutral, and one "agreeable".
A malicious example would be if someone
Is anyone from LW going to the Worldcon (World Science Fiction Convention) in Seattle next year?
ETA: I will be, I forgot to say. I also notice that Burning Man 2025 begins about a week after the Worldcon ends. I have never been to BM, I don't personally know anyone who has been, and it seems totally impractical for me, but the idea has been in the back of my mind ever since I discovered its existence, which was a very long time ago.
Should AI safety people/funds focus more on boring old human problems like (especially cyber- and bio-) security instead of flashy ideas like alignment and decision theory? The possible impact of vulnerabilities will only increase in the future with all kinds of technological progress, with or without a sudden AI takeoff, but they are much of what makes AGI dangerous in the first place. Security has clear benefits regardless, and people already have a good idea how to do it, unlike with AGI or alignment.
If any actor with or without AGI can quickly gain lots of ...
Hello,
Longtime lurker, more recent commenter. I see a lot of rationality-type posters on Twitter, and in the past couple of years I became aware of "post-rationalists." The term is somewhat ill-defined, but essentially they are former rationalists who are more accepting of "woo," to be vague about it. My questions are: 1) What level of engagement is there (if any) between rationalists and post-rationalists? 2) Is there anyone who dabbled in or full-on claimed post-rationalist positions and then reverted back to rationalist positions? What was that journey like, and what made you switch between these beliefs?
React suggestion/request: "not joint-carving"/"not the best way to think about this topic".
This is kind of "(local) taboo those words" but it's more specific.
I think there might be a lesswrong editor feature that allows you to edit a post in such a way that the previous version is still accessible. Here’s an example—there’s a little icon next to the author name that says “This post has major past revisions…”. Does anyone know where that option is? I can’t find it in the editor UI. (Or maybe it was removed? Or it’s only available to mods?) Thanks in advance!
I am very interested in mind uploading.
I want to do a PhD in a related field and comprehensively go through "Whole Brain Emulation: A Roadmap", taking notes on what has changed since it was published.
If anyone knows relevant papers/researchers that would be useful to read for that, or that would help me make an informed decision on where to apply to grad school next year, please let me know.
Maybe someone has already done a comprehensive update on brain emulation - if so, I would like to know, and I would still like to read more papers before I apply to grad school.
Are there good and comprehensive evaluations of COVID policies? Are there countries that really tried to learn from them, including for the next pandemic?
When rereading [0 and 1 Are Not Probabilities], I thought: can we ever specify our amount of information in infinite domains, perhaps with something resembling hyperreals?
I've noticed that when writing text on LessWrong, there is a tendency for the cursor to glitch out and jump to the beginning of the text. I don't have the same problem on other websites. This most often happens after I've clicked to try to insert the cursor in some specific spot. The cursor briefly shows where I clicked, but then the page lags slightly, as if loading something, and the cursor jumps to the beginning.
The way around this I've found is to click once. Wait to see if the cursor jumps away. If so, click again and hope. Only start typing once you've seen multiple blinks at the desired location. Annoying!
In Fertility Rate Roundup #1, Zvi wrote
"This post assumes the perspective that more people having more children is good, actually. I will not be engaging with any of the arguments against this, of any quality, whether they be ‘AI or climate change is going to kill everyone’ or ‘people are bad actually,’ other than to state here that I strongly disagree."
Does any of you have an idea where I can find arguments related to, or a more detailed discussion of, this disagreement (with respect to AI or maybe other global catastrophic risks; t...
Hi everyone,
My name is Aldus and I am a 20-something from the West Coast of the United States. In school I studied computational biology, and now I work in artificial intelligence. I enjoy learning, being in nature, and having spirited debates. I have been lurking on this forum heavily for a few years but figured it was finally time to join in and start contributing. I originally found LessWrong through some post about alignment/AI safety (I believe it was Neel Nanda's) but have greatly enjoyed reading more about rationality and familiarizing myself with the canon. I particularly enjoyed HPMOR, especially as a conventional Harry Potter fan. I am looking forward to learning from everyone here and contributing.
Couple of UI notes:
on mobile, there's a problem with bullet-points squishing text too much. I'd rather go without the indentation at all than allow the indentation to smoosh the text into unreadability.
What is the price of the past? It's kind of a leading question, but I've found myself wondering at times about the old saying that those who don't know the past are doomed to repeat it.
It's not that I don't think there is a good point to that view. However, when I look at the world around me I often see something vastly different from it. I've come to summarize that as: those who cannot let go of the past will never escape it. The implication is that not only those "clingy" people but also those around them will continue living whatever past it ...
How can I get an empathic handle on my region/country/world's society (average/median/some sample of its people, to be clearer)?
I seem to have got into a very specific social circle, being a constant LW reader and so on. That often makes me think "well, there is question X, a good answer is A1 and it is also shared within the circle, but many people overall will no doubt think A2 and I don't even know who or why". I can read survey/poll results but not understand why people would even want to ban something like surrogate motherhood, or things like that.
I've heard one gets to know people when one works with them. If so, I'd like to hear suggestions for some [temporary] professions which could aid me here.
Meow Meow,
I'd like to introduce myself. My name is David and I am an AGI enthusiast. My goal is to reverse engineer the brain in order to create AGI and to this end I've spent years studying neuroscience. I look forward to talking with you all about neuroscience and AGI.
Now I must admit: I disagree with this community's prevailing opinions on the topic of AI-Doom. New technology is almost always "a good thing". I think we all daydream about AGI, but whereas your fantasies may be dark and grim, mine are bright and utopian.
I'm also optimistic about my abilit...
Quick note: there's a bug I'm sorting out for some new LessWrong Review features for this year, hopefully will be fixed soon and we'll have the proper launch post that explains new changes.
Happened on this song on Tiny Desk: Paperclip Maximizer (by Rosie Tucker, from an album titled "Utopia Now!").
...Paperclip maximizer
Single minded if you mind at all
A paragon of puritanical panoptical persistence
Everybody envies your resolve
Paperclip maximizer
Mining for a better way
No ontological contention
Tends your content generation
Every sorrow makes a link in the chain[...]
And the shareholders meet gruesome ends
But the cosmos expands
So the market survives
All the better to bear all your office supplies
And the space they require was once occupied
By the sun
On
Possible bug: Whenever I click the vertical ellipsis (kebab) menu option in a comment, my page view jumps to the top of the page.
This is annoying, since if I've chosen to edit a comment I then need to scroll back down to the comment section and search for my now-editable comment.
[Bug report]: The Popular Comments section's comment preview ignores spoiler tags
As seen on Windows/Chrome
Having read something about self-driving cars actually being a thing now, I wonder how the trolley-problem thing (and whatever other ethics problems come up) was solved in the relevant regulation?
I was just scrolling through Metaculus and its predictions for the US elections. I noticed that pretty much every case was conditional: if Trump wins / if he doesn't win. I had two thoughts about the estimates for these. All seem to suggest the outcomes are worse under Trump. But that assessment of the outcome being worse is certainly subject to my own biases, values, and preferences. (For example, for US voters is it really a bad outcome if the probability of China attacking Taiwan increases under Trump? I think so, but others may well see the costs necessary to re...
I'm still bothering you with inquiries on user information. I would like to check this in order to write a potential LW post. Do we have data on the prevalence of "mental illnesses", and do we have a rough idea of the average IQ among LWers (or SSCers, since the community is adjacent)? I'm particularly interested in the prevalence of people with autism and/or schizoid disorders. Thank you very much. Sorry if I used offensive terms; I'm not a native speaker.
Hello :)
I'm here fundamentally to get some constructive criticism on how to improve internet discourse. This came about when I was writing a journalistic piece on the recent congressional subcommittee, and trying to get to the bottom of the lab-leak evidence as part of the research.
In short, I'm floating the idea of a crowd-sourced and written, peer-reviewed medium on subjects like conspiracy theories (AKA: revisionist history that's still political). With a solid framework, gatekeeping could be avoided and real people (not just professional in...
Is there a way to for me to prove that I'm a human on this website before technology makes this task even more difficult?
How do we best model an irrational world rationally? I would assume we would need to understand at least how irrationality works?
Bayes for arguments: how do you quantify P(E|H) when E is an argument? E.g. I present you a strong argument supporting Hypothesis H, how can you put a number on that?
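One standard sketch, with entirely made-up numbers: treat the argument as an observation E = "I encountered an argument this compelling for H", and express its strength as a likelihood ratio - how much more likely such an argument is to exist (or to strike you as compelling) in a world where H is true than in one where it is false. Then the usual update applies:

```latex
P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}
```

For example, with a prior P(H) = 0.5 and a judgment that an argument this strong is about three times as likely under H as under not-H, the posterior is (3)(0.5) / ((3)(0.5) + (1)(0.5)) = 0.75. Estimating that factor of three is unavoidably a judgment call, but only the ratio P(E|H)/P(E|¬H) matters, so that single number is all you have to put on the argument.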
Hello all,
I am new here. I am uncertain of what exactly is expected to be posted here, but I was directed here by a friend after sharing something I had written down with them. They encouraged me to post it here, and after reading a few threads I came to find interest in this site. So now I would like to share here what I shared with them, which is a new way to view understanding from an objective standpoint, one that regards emergent phenomena such as intuition, innovation, and inspiration:
The concept of Tsaphos
Definition:
...
I'd like to know: what are the main questions a rational person would ask? (Also what are some better ways to phrase what I have?)
I've been thinking something like
What we'd ask depends on the context. In general, not all rationalist teachings are in the form of a question, but many could probably be phrased that way.
"Do I desire to believe X if X is the case and not-X if X is not the case?" (For whatever X in question.) This is the fundamental lesson of epistemic rationality. If you don't want to lie to yourself, the rest will help you get better at that. But if you do, you'll lie to yourself anyway and all your acquired cleverness will be used to defeat itself.
"Am I winning?" This is the fundamental lesson of instrumental rationality. It's not enough to act with Propriety or "virtue" or obey the Great Teacher. Sometimes the rules you learned aren't applicable. If you're not winning and it's not due to pure chance, you did it wrong, propriety be damned. You failed to grasp the Art. Reflect, and actually cut the enemy.
Those two are the big ones. But there are more.
Key lessons from Bayes:
...Levi da.
I'm here to see if I can help.
I heard a few things about Eliezer Yudkowsky. I saw a few LW articles while looking for previous research on my work with AI psychological influence. There isn't any, so I signed up to contribute.
If you recognize my username, you probably know why that's a good idea. If you don't, I don't know how to explain succinctly yet. You'd have to see for yourself, and a web search can do that better than an intro comment.
It's a whole ass rabbit hole so either follow to see what I end up posting or downvote to repress curiosity. I get it. It's not comfortable for me either.
Update: explanation in bio.
If it’s worth saying, but not worth its own post, here's a place to put it.
If you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited. This is also the place to discuss feature requests and other ideas you have for the site, if you don't want to write a full top-level post.
If you're new to the community, you can start reading the Highlights from the Sequences, a collection of posts about the core ideas of LessWrong.
If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ. If you want to orient to the content on the site, you can also check out the Concepts section.
The Open Thread tag is here. The Open Thread sequence is here.