Hi! Do you read LessWrong but haven't commented yet (or not very much)? Are you a bit intimidated by the sometimes harsh community, or worried that questions which are new and interesting to you might be old and boring to the older members?
This is the place for new members to gather their courage and ask what they've been wanting to ask. Or just to say hi.
The older members are strongly encouraged to be gentle and patient (or just skip the entire discussion if they can't).
Newbies, welcome!
The long version:
A few notes about the site mechanics
A few notes about the community
If English is not your first language, don't let that make you afraid to post or comment. You can get English help on Discussion- or Main-level posts by sending a PM to one of the following users (use the "send message" link on the upper right of their user page). Either put the text of the post in the PM, or just say that you'd like English help and you'll get a response with an email address.
* Normal_Anomaly
* Randaly
* shokwave
* Barry Cotter
A note for theists: you will find the Less Wrong community to be predominantly atheist, though not completely so, and most of us are genuinely respectful of religious people who keep the usual community norms. It's worth saying that we might think religion is off-topic in some places where you think it's on-topic, so be thoughtful about where and how you start explicitly talking about it; some of us are happy to talk about religion, some of us aren't interested. Bear in mind that many of us really, truly have given full consideration to theistic claims and found them to be false, so starting with the most common arguments is pretty likely just to annoy people. Anyhow, it's absolutely OK to mention that you're religious in your welcome post and to invite a discussion there.
A list of some posts that are pretty awesome
I recommend the major sequences to everybody, but I realize how daunting they look at first. So for purposes of immediate gratification, the following posts are particularly interesting/illuminating/provocative and don't require any previous reading:
- The Worst Argument in the World
- That Alien Message
- How to Convince Me That 2 + 2 = 3
- Lawful Uncertainty
- Your Intuitions are Not Magic
- The Planning Fallacy
- The Apologist and the Revolutionary
- Scope Insensitivity
- The Allais Paradox (with two followups)
- We Change Our Minds Less Often Than We Think
- The Least Convenient Possible World
- The Third Alternative
- The Domain of Your Utility Function
- Newcomb's Problem and Regret of Rationality
- The True Prisoner's Dilemma
- The Tragedy of Group Selectionism
- Policy Debates Should Not Appear One-Sided
More suggestions are welcome! Or just check out the top-rated posts from the history of Less Wrong. Most posts at +50 or more are well worth your time.
Welcome to Less Wrong, and we look forward to hearing from you throughout the site!
I don't think the point of the special thread is so much to teach people LW-specific things to enable them to participate as to help them overcome shyness, intimidation, and the like. That's a problem people have everywhere, and it doesn't call for anything LW-specific (except in so far as the people here are unusual, which they might be). In some cases a newcomer's shyness and intimidation might come from feeling they don't know or understand something, and they could ask about that -- but, again, similar things could happen anywhere, and any LW-specific-ness would come out of the specific questions people ask.
So there's a theorem (the von Neumann-Morgenstern theorem) that says that under certain assumptions an agent either (behaves exactly as if it) has a utility function and tries to maximize its expected value, or is vulnerable to certain kinds of undesirable outcome -- e.g., accepting a sequence of trades that leaves it strictly worse off. So, e.g., if you're trying to build an AI that you trust with superhuman power, then you might want it to have a utility function.
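To make "maximize expected utility" concrete, here is a minimal sketch of what such an agent does when choosing between lotteries (probability distributions over outcomes). All the outcomes, utilities, and probabilities are made-up numbers for illustration, not anything from the theorem itself:

```python
# Sketch: expected-utility maximization over lotteries.
# Utilities and probabilities below are invented for illustration.

def expected_utility(lottery, utility):
    """Sum of probability * utility over a lottery's outcomes."""
    return sum(p * utility[outcome] for outcome, p in lottery.items())

# Hypothetical utilities the agent assigns to outcomes.
utility = {"win_big": 100, "win_small": 20, "nothing": 0}

# Two hypothetical lotteries the agent can choose between.
lotteries = {
    "safe":  {"win_small": 1.0},
    "risky": {"win_big": 0.25, "nothing": 0.75},
}

# A vNM-rational agent simply picks the lottery with highest expected utility.
best = max(lotteries, key=lambda name: expected_utility(lotteries[name], utility))
# "risky": 0.25 * 100 = 25, versus 20 for "safe".
print(best)
```

The point of the theorem is the converse direction: an agent whose choices *can't* be described this way, for any assignment of utilities, is exploitable in the ways described above.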
But humans certainly don't behave exactly as if we have utility functions, at least not sensible ones. It's often easy to get someone to switch between preferring A over B and preferring B over A just by changing the words you use to describe A and B, for instance; and when trying to make difficult decisions, most people don't do anything much like an expected-utility calculation.
And the vNM theorem, unsurprisingly, makes a bunch of technical assumptions that don't necessarily apply to real people in the real world. Further, to get from "if you don't do X you will run into trouble Y" to "you should do X", you need to know that the adverse consequences of doing X aren't worse than Y -- and for resource-limited agents like us, they might be. (Indeed, doing X might be simply impossible for us, or for whatever other agents we're considering; e.g., if you care about the welfare of people 100 years from now, evaluating your "utility function"'s expectation would require making detailed probabilistic predictions about all the ways the world could be 100 years from now; good luck with that!)
It's fairly common in these parts to talk as if people have utility functions -- to say "my utility function has a large term for such-and-such", etc. I take that to be shorthand for something more like "in some circumstances, understood from context, my behaviour crudely resembles that of an agent whose utility function has such-and-such properties". Anyone talking about humans' utility functions and expecting much more precision than that is probably fooling themselves.
Does that help at all, or am I just telling you things you already understand well?
Thanks for that explanation of utility functions, gjm, and thanks to protostar for asking the question. I've been struggling with the same issue, and nothing I've read seems to hold up when I try to apply it to a concrete use case.
What do you think about trying to build a utility TABLE for major, point-in-time life decisions, though, like buying a home or choosing a spouse?
P.S. I'd upvote your response to protostar, but I can't seem to make that happen.
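The "utility table" idea above is essentially what decision-analysis people call a weighted decision matrix: score each option on a few criteria, weight the criteria, and sum. A toy sketch, where the home-buying options, criteria, weights, and 0-10 scores are all invented for illustration:

```python
# Sketch of a "utility table" (weighted decision matrix) for a one-off
# decision such as buying a home. All numbers below are hypothetical.

# How much each criterion matters (weights sum to 1).
weights = {"price": 0.4, "location": 0.35, "size": 0.25}

# Each option's score on each criterion, on a 0-10 scale.
options = {
    "house_a": {"price": 7, "location": 5, "size": 8},
    "house_b": {"price": 4, "location": 9, "size": 6},
}

def score(option):
    """Weighted sum of an option's criterion scores."""
    return sum(weights[c] * options[option][c] for c in weights)

for name in options:
    # house_a: 0.4*7 + 0.35*5 + 0.25*8 = 6.55
    # house_b: 0.4*4 + 0.35*9 + 0.25*6 = 6.25
    print(name, round(score(name), 2))
```

Note this inherits the same caveats gjm raises: the weights and scores are subjective point estimates, and it ignores uncertainty entirely. It's a tool for clarifying your own trade-offs, not a derivation of a true utility function.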