A few notes about the site mechanics
A few notes about the community
If English is not your first language, don't let that make you afraid to post or comment. You can get English help on Discussion- or Main-level posts by sending a PM to one of the following users (use the "send message" link on the upper right of their user page). Either put the text of the post in the PM, or just say that you'd like English help and you'll get a response with an email address.
* Normal_Anomaly
* Randaly
* shokwave
* Barry Cotter
A note for theists: you will find the Less Wrong community to be predominantly atheist, though not completely so, and most of us are genuinely respectful of religious people who keep the usual community norms. It's worth saying that we might think religion is off-topic in some places where you think it's on-topic, so be thoughtful about where and how you start explicitly talking about it; some of us are happy to talk about religion, and some of us aren't interested. Bear in mind that many of us really, truly have given full consideration to theistic claims and found them to be false, so starting with the most common arguments is pretty likely just to annoy people. Anyhow, it's absolutely OK to mention that you're religious in your welcome post and to invite a discussion there.
A list of some posts that are pretty awesome
I recommend the major sequences to everybody, but I realize how daunting they look at first. So for purposes of immediate gratification, the following posts are particularly interesting/illuminating/provocative and don't require any previous reading:
- The Worst Argument in the World
- That Alien Message
- How to Convince Me That 2 + 2 = 3
- Lawful Uncertainty
- Your Intuitions are Not Magic
- The Planning Fallacy
- The Apologist and the Revolutionary
- Scope Insensitivity
- The Allais Paradox (with two follow-ups)
- We Change Our Minds Less Often Than We Think
- The Least Convenient Possible World
- The Third Alternative
- The Domain of Your Utility Function
- Newcomb's Problem and Regret of Rationality
- The True Prisoner's Dilemma
- The Tragedy of Group Selectionism
- Policy Debates Should Not Appear One-Sided
More suggestions are welcome! Or just check out the top-rated posts from the history of Less Wrong. Most posts at +50 or more are well worth your time.
Welcome to Less Wrong, and we look forward to hearing from you throughout the site!
Once a post gets over 500 comments, the site stops showing them all by default. If this post has 500 comments and you have 20 karma, please do start the next welcome post; a new post is a good perennial way to encourage newcomers and lurkers to introduce themselves. (Step-by-step, foolproof instructions here; takes less than 180 seconds.)
If there's anything I should add or update on this post (especially broken links), please send me a private message—I may not notice a comment on the post.
Finally, a big thank you to everyone who helped write this post via its predecessors!
Hi everyone!
I'm John Ku. I've been lurking on lesswrong since its beginning. I've also been following MIRI since around 2006 and attended the first CFAR mini-camp.
I became very interested in traditional rationality when I used analytic philosophy to think my way out of a very religious upbringing in what many would consider to be a cult. After I became an atheist, I set about rebuilding my worldview and focusing especially on metaethics to figure out what remains of ethics without God.
This process landed me in University of Michigan's Philosophy PhD program, during which time I read Kurzweil's The Singularity is Near. This struck me as very important and I quickly followed a chain of references and searches to discover what was to become MIRI and the lesswrong community. Partly due to lesswrong's influence, I dropped out of my PhD program to become a programmer and entrepreneur and I now live in Berkeley and work as CTO of an organic growth startup.
I have, however, continued my philosophical research in my spare time, focusing largely on metaethics, psychosemantics and metaphilosophy. I believe I have worked out a decent initial overview of how to formalize a friendly utility function. The major pieces include:
Since I think much of philosophy boils down to conceptual analysis, and I've also largely worked out how to assign an intensional semantics to a decision algorithm, I think my research also has the resources to meta-philosophically validate that the various philosophical propositions involved are correct. I hope to fill in many remaining details in my research and find a way to communicate them better in the not too distant future.
Compared to others, I think of myself as having been focused more on object-level concerns than on the more meta-level instrumental rationality improvements. But I would like to thank everyone for their help, which I'm sure I've absorbed over time through lesswrong and the community. And if any attempts to help have backfired, I would assume it was due to my own mistakes.
I would also like to ask for any anonymous feedback, which you can submit here. Of course, I would greatly appreciate any non-anonymous feedback as well; an email to ku@johnsku.com would be the preferred method.
You are welcome! And Don't Be Afraid of Asking Personally Important Questions of Less Wrong.
I understand that you might not want to give details, but I'm unclear on what information I could provide. Maybe you could drop a few hints. You might also look at the Baseline of my opinion on LW topics.