If you've recently joined the Less Wrong community, please leave a comment here and introduce yourself. We'd love to know who you are, what you're doing, what you value, and how you came to identify as a rationalist or how you found us. You can skip right to that if you like; the rest of this post consists of a few things you might find helpful. More can be found in the FAQ.
A few notes about the site mechanics
A few notes about the community
If English is not your first language, don't let that make you afraid to post or comment. You can get English help on Discussion- or Main-level posts by sending a PM to one of the following users (use the "send message" link on the upper right of their user page). Either put the text of the post in the PM, or just say that you'd like English help and you'll get a response with an email address.
* Normal_Anomaly
* Randaly
* shokwave
* Barry Cotter
A note for theists: you will find the Less Wrong community to be predominantly atheist, though not completely so, and most of us are genuinely respectful of religious people who keep the usual community norms. It's worth saying that we might think religion is off-topic in some places where you think it's on-topic, so be thoughtful about where and how you start explicitly talking about it; some of us are happy to talk about religion, some of us aren't interested. Bear in mind that many of us really, truly have given full consideration to theistic claims and found them to be false, so starting with the most common arguments is pretty likely just to annoy people. Anyhow, it's absolutely OK to mention that you're religious in your welcome post and to invite a discussion there.
A list of some posts that are pretty awesome
I recommend the major sequences to everybody, but I realize how daunting they look at first. So for purposes of immediate gratification, the following posts are particularly interesting/illuminating/provocative and don't require any previous reading:
- Your Intuitions are Not Magic
- The Apologist and the Revolutionary
- How to Convince Me that 2 + 2 = 3
- Lawful Uncertainty
- The Planning Fallacy
- Scope Insensitivity
- The Allais Paradox (with two followups)
- We Change Our Minds Less Often Than We Think
- The Least Convenient Possible World
- The Third Alternative
- The Domain of Your Utility Function
- Newcomb's Problem and Regret of Rationality
- The True Prisoner's Dilemma
- The Tragedy of Group Selectionism
- Policy Debates Should Not Appear One-Sided
- That Alien Message
More suggestions are welcome! Or just check out the top-rated posts from the history of Less Wrong. Most posts at +50 or more are well worth your time.
Welcome to Less Wrong; we look forward to hearing from you here and throughout the site.
As an aside, I'll note that a lot of the solutions bandied around here to decision theory problems remind me of something from Magic: The Gathering which I took notice of back when I still followed it.
When I watched my friends play, one would frequently respond to another's play with "Before you do that, I-" and use some card or ability to counter their opponent's move. The rules of MTG let you do that sort of thing, but I always thought it was pretty silly, because the player did not, in fact, have any idea that the counter-play made sense until after seeing their opponent's move. Once they see their opponent's play, they get to retroactively decide what to do "before" their opponent can do it.
In real life, we don't have that sort of privilege. If you're in a Counterfactual Mugging scenario, for instance, you might be inclined to say "I ought to be the sort of person who would pay Omega, because if the coin had come up the other way, I would be making a lot of money now, so being that sort of person would have positive expected utility for this scenario." But this is "Before you do that-" type reasoning. You could just as easily have ended up in a situation where Omega comes and tells you "I decided that if you were the sort of person who would not pay up in a Counterfactual Mugging scenario, I would give you a million dollars, but I've predicted that you would pay, so you get nothing."
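The "positive expected utility" claim in the quoted reasoning can be made concrete with a quick calculation. This is a minimal sketch, assuming the stakes commonly used when the problem is presented (pay $100 on tails, receive $10,000 on heads); those numbers are illustrative assumptions, not something stated in the comment above:

```python
def expected_value(pays_up: bool, cost: float = 100, reward: float = 10_000) -> float:
    """Ex-ante expected value of a disposition, evaluated before the coin flip.

    Omega flips a fair coin:
      heads -> Omega pays `reward`, but only if you are the sort of agent
               who would pay on tails
      tails -> Omega asks you for `cost`; paying is a pure loss in that branch
    """
    heads_payoff = reward if pays_up else 0
    tails_payoff = -cost if pays_up else 0
    return 0.5 * heads_payoff + 0.5 * tails_payoff

# Evaluated before the flip, the paying disposition comes out ahead:
print(expected_value(True))   # 4950.0
print(expected_value(False))  # 0.0
```

Note that this calculation only favors the paying disposition if it is made from a position where Omega's rule is already known before the coin flip, which is exactly the point at issue: done after the fact, it is "Before you do that-" reasoning.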
When you come up with a solution to an Omega-type problem involving some type of precommitment, it's worth asking "would this precommitment have made sense when I was in a position of not knowing Omega existed, or having any idea what it would do even if it did exist?"
In real life, we sometimes have to make decisions dealing with agents who have some degree of predictive power with respect to our thought processes, but their motivations are generally not as arbitrary as those attributed to Omega in most hypotheticals.
Can you give a specific example of a bandied-around solution to a decision-theory problem where predictive power is necessary in order to implement that solution?
I suspect I disagree with you here -- or, rather, I agree with the general principle you've articulated, but I suspect I disagree that it's especially relevant to anything local -- but it's difficult to be sure without specifics.
With respect to the Counterfactual Mugging you reference in passing, for example, it seems enough to say "I ought to be the sort of person who would do whatever gets...