A few notes about the site mechanics
A few notes about the community
If English is not your first language, don't let that make you afraid to post or comment. You can get English help on Discussion- or Main-level posts by sending a PM to one of the following users (use the "send message" link on the upper right of their user page). Either put the text of the post in the PM, or just say that you'd like English help and you'll get a response with an email address.
* Normal_Anomaly
* Randaly
* shokwave
* Barry Cotter
A note for theists: you will find the Less Wrong community to be predominantly atheist, though not completely so, and most of us are genuinely respectful of religious people who keep the usual community norms. It's worth saying that we might think religion is off-topic in some places where you think it's on-topic, so be thoughtful about where and how you start explicitly talking about it; some of us are happy to talk about religion, some of us aren't interested. Bear in mind that many of us really, truly have given full consideration to theistic claims and found them to be false, so starting with the most common arguments is pretty likely just to annoy people. Anyhow, it's absolutely OK to mention that you're religious in your welcome post and to invite a discussion there.
A list of some posts that are pretty awesome
I recommend the major sequences to everybody, but I realize how daunting they look at first. So for purposes of immediate gratification, the following posts are particularly interesting/illuminating/provocative and don't require any previous reading:
- The Worst Argument in the World
- That Alien Message
- How to Convince Me That 2 + 2 = 3
- Lawful Uncertainty
- Your Intuitions are Not Magic
- The Planning Fallacy
- The Apologist and the Revolutionary
- Scope Insensitivity
- The Allais Paradox (with two followups)
- We Change Our Minds Less Often Than We Think
- The Least Convenient Possible World
- The Third Alternative
- The Domain of Your Utility Function
- Newcomb's Problem and Regret of Rationality
- The True Prisoner's Dilemma
- The Tragedy of Group Selectionism
- Policy Debates Should Not Appear One-Sided
More suggestions are welcome! Or just check out the top-rated posts from the history of Less Wrong. Most posts at +50 or more are well worth your time.
Welcome to Less Wrong, and we look forward to hearing from you throughout the site!
Once a post gets over 500 comments, the site stops showing them all by default. If this post has 500 comments and you have 20 karma, please do start the next welcome post; a new post is a good perennial way to encourage newcomers and lurkers to introduce themselves. (Step-by-step, foolproof instructions here; takes <180 seconds.)
If there's anything I should add or update on this post (especially broken links), please send me a private message—I may not notice a comment on the post.
Finally, a big thank you to everyone who helped write this post via its predecessors!
Well, since I'm procrastinating on important things, I might as well use this time to introduce myself. Structured procrastination for the win!
Hello everyone! I have been poking around on Less Wrong, Slate Star Codex, and related places for three or four years now, though mostly lurking. I have gradually become more and more taken with the risks of artificial intelligence orders of magnitude smarter than us Homo sapiens. In that respect, I'm glad that the topic of superintelligent AI has taken off in the mainstream media and academia. EY isn't the lonely crank with no real academic affiliation anymore, a nerdy Cassandra of his time spewing nonsense on the internet. From what I gather, status games are so cliché here that they're not cool, but with endorsements by people like Hawking and Gates, these ideas can't easily be dismissed anymore. I think this is a massively good thing, because with these ideas in the air, so to speak, even intelligent AI researchers who disagree on these topics will probably not accidentally build an AI that turns us all into paper clips to maximize happiness. That is not to say that numerous other failure pathways don't exist. Maybe someday notions such as I. J. Good's idea of a self-improving intelligence feedback loop will make their way into standard AI textbooks. You don't have to join the LW sub-community to understand the risks, nor do you have to read through the Sequences and all that. IMO, the greatest good Less Wrong has done for the world so far is to propagate and legitimize these concerns. I'm aware of the other key ideas in the Less Wrong memespace (rationality and all that), but it's hard enough to get the general public and other academics and researchers to take superintelligent AI seriously as an existential risk without adding all sorts of other ideas outside their inference bubble.
Intellectually, my background is in physics (currently studying, along with the requisite math you pick up from physics). I have been reading philosophy for a ridiculously long time (around seven years now), although as a part-time hobby. Probably like most people here, I have an incurable addiction to the internet. I also read a lot, in varied intellectual fields. I read a lot of fiction, anything from Milton to YA books. Science fiction and fantasy are probably responsible for why I find transhumanist notions so easy to swallow: read enough Peter F. Hamilton and Greg Egan, and things like living forever and superintelligent machines are downright tame in comparison. I like every academic subject (gender studies doesn't count). Neuroscience, economics, computer science... you name it. Even "fluffy" stuff like sociology, psychology, and literature. I am doomed to be caught between the two cultures (C. P. Snow).
As to the stuff about rationality and cognitive biases: while the scientific evidence wasn't in until fairly recently, Hume anticipated much of it centuries ago. Now, I know Less Wrong isn't very impressed with a priori armchair philosophizing without a scrap of evidence, but I have to disagree: it is much easier to build correct theories off empirical data, and deducing the correct theory of a natural phenomenon without any experimental evidence is much, much harder. Hume faced a huge possibility space, while modern psychologists and cognitive scientists face a much smaller one. And let's not forget Hume's most famous quote: "If we take in our hand any volume; of divinity or school metaphysics, for instance; let us ask, Does it contain any abstract reasoning concerning quantity or number? No. Does it contain any experimental reasoning concerning matter of fact and existence? No. Commit it then to the flames: for it can contain nothing but sophistry and illusion." I honestly can't say I was surprised by the framework presented in the Sequences the way most people were, but it sure is nice to find a community thinking along the same lines I do! The tactics for applying these ideas so I can overcome these heuristics were very welcome. My favorite aspect of LW has to be that people have an agreed framework for discussing things, so in theory we can come to agreement. Debating is one of my favorite things to do, and frankly, arguing with most people is a waste of time.
I'm interested in contributing to the study of friendly AI and have some ideas about it, so I might post things I'm thinking about here in the future. Please feel free to criticize such posts to your heart's content; I care far more about feedback than about slights or insults, so feel free to be rude. My ideas are probably old or wrong anyway; I haven't had time to look through all the literature presented here or elsewhere.
Lastly, I should mention I have been active in the Less Wrong IRC room. If you want to find me, I'm there. Also, if lukeprog sees this: I really liked the literature summaries you post sometimes. They've been a huge help and saved me a ton of time in my own exploration of the scientific literature.