If you've recently joined the Less Wrong community, please leave a comment here and introduce yourself. We'd love to know who you are, what you're doing, or how you found us. Tell us how you came to identify as a rationalist, or describe what it is you value and work to achieve.
If you'd like to meet other LWers in real life, there's a meetup thread and a Facebook group. If you have your own blog or other online presence, please feel free to link it. If you're confused about any of the terms used on this site, you might want to pay a visit to the LW Wiki, or simply ask a question in this thread. Some of us have been having this conversation for a few years now, and we've developed a fairly specialized way of talking about some things. Don't worry -- you'll pick it up pretty quickly.
You may have noticed that all the posts and all the comments on this site have buttons to vote them up or down, and all the users have "karma" scores which come from the sum of the votes on their comments and posts. Try not to take this too personally. Voting is used mainly to get the most useful comments up to the top of the page where people can see them. It may be difficult to contribute substantially to ongoing conversations when you've just gotten here, and you may even see some of your comments get voted down. Don't be discouraged by this; it happened to many of us. If you have any questions about karma or voting, please feel free to ask here.
If you've come to Less Wrong to teach us about a particular topic, this thread would be a great place to start the conversation, especially until you've worked up enough karma for a top-level post. By posting here, and checking the responses, you'll probably get a good read on what, if anything, has already been said here on that topic, what's widely understood, and what you might still need to take some time explaining.
A note for theists: you will find LW overtly atheist. We are happy to have you participating but please be aware that other commenters are likely to treat religion as an open-and-shut case. This isn't groupthink; we really, truly have given full consideration to theistic claims and found them to be false. If you'd like to know how we came to this conclusion you may find these related posts a good starting point.
A couple of technical notes: when leaving comments, you may notice a 'help' link below and to the right of the text box. This will explain how to italicize, linkify, or quote bits of text. You'll also want to check your inbox, where you can always see whether people have left responses to your comments.
Welcome to Less Wrong; we look forward to hearing from you around the site.
(Note from MBlume: though my name is at the top of this page, the wording in various parts of the welcome message owes a debt to other LWers who've helped me considerably in working the kinks out)
Hi, I'm Lincoln. I am 25; I live and work in Cambridge, MA. I currently build video games, but in the fall I'm going to start a Ph.D. program in Computer Science at the local university.
I have identified rationality as a thing to be achieved ever since I knew there was a term for it. One of the minor goals I've had since I was about 15 was devising a system of morality which fit my own intuitions but was consistent under reflection (though I wouldn't have put it in so many words back then). The two thought experiments I focused on were abortion and voting. I didn't come up with an answer, but I knew that such a morality was something I wanted -- consistency was important to me.
I ran across Eliezer's work 907 days ago through a Hacker News post about the AI-box experiment, and through various other Overcoming Bias posts submitted over the years. I didn't immediately follow through on any of it.
But I became aware of SIAI about 10 months ago, when rms on Hacker News linked an interesting post about the Visiting Fellows program at SIAI.
I think I had a "click" moment: I immediately saw that AI was both an existential risk and a major opportunity, and I wanted to work on these things to save the world. I followed links and ended up at LW; I didn't immediately understand the connection between AI and rationality, but they both looked interesting and useful, so I bookmarked LW.
I immediately sent in an application to the Visiting Fellows program, thinking "hey, I should figure out how to do this" -- I think it was Jasen who responded and asked me by email to summarize the purpose of SIAI and how I thought I could contribute. I wrote the purpose summary, but got stuck on how to contribute. I had barely read any of the Sequences at that time and had no idea how I could be useful. For those reasons (as well as a healthy dose of akrasia), I gave up on my application at that time.
Somewhere in there I found HP:MoR (perhaps via TVTropes?), saw the author was "Less Wrong" and made the connection.
Since then, I have been inhaling the Sequences; in the last month I've been checking the front page almost daily. I applied to the Rationality Boot Camp.
I'm very far from being a rationalist -- I can see that my rationality skills are really quite poor, but I at least identify as a student of rationality.
Hey, I'm in kind of a similar situation to you. I've worked on making games (as a programmer) for several years, and currently I'm working on a game of my own, into which I'm incorporating certain ideas from LessWrong. I've been wondering lately whether I could contribute more if I did FAI-related research. What convinced you to switch to it? How much do you think you'll contribute? How talented are you, and how much of a deciding factor was that?