As promised, here is the "Q" part of the Less Wrong Video Q&A with Eliezer Yudkowsky.
The Rules
1) One question per comment (to allow voting to carry more information about people's preferences).
2) Try to be as clear and concise as possible. If your question can't be condensed to a few paragraphs, you should probably ask it in a separate post. Make sure you have an actual question somewhere in there (you can bold it to make it easier to scan).
3) Eliezer hasn't been subpoenaed. He will simply ignore the questions he doesn't want to answer, even if they somehow received 3^^^3 votes.
4) If your question references something that is available online, provide a link.
5) This thread will be open to questions and votes for at least 7 days. After that, it is up to Eliezer to decide on the best time to film his answers. [Update: Today, November 18, marks the 7th day since this thread was posted. If you haven't already done so, now would be a good time to review the questions and vote for your favorites.]
Suggestions
Don't limit yourself to topics that have been discussed on OB/LW. I expect such questions will make up the majority, but you shouldn't feel restricted to them. I've always found that a wide variety of topics makes a Q&A more interesting. If you're uncertain, ask anyway and let the voting separate the wheat from the chaff.
It's okay to attempt humor (but good luck, it's a tough crowd).
If a discussion breaks out about a question (e.g. requests for clarification) and the original poster decides to modify it, the top-level comment should be updated with the modified question (make your question easy to find; don't leave the latest version buried in a long thread).
Update: Eliezer's video answers to 30 questions from this thread can be found here.
I didn't mean convenient in the sense of compressibility, but convenient in the sense of representing our preference ordering in a form that lets one recast questions like "how can I get the world into the best possible state, where 'best' is in terms of my values?" as maximizing utility, or, once uncertainty is added, as maximizing expected utility.
I just meant "utility doesn't automatically imply a specific set of values/virtues. It's more a way of organizing your virtues so that you can at least formally define optimal actions, giving you a starting point to look for ways to approximately compute such things, etc."
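For concreteness, here is a minimal sketch in Python of what "formally define optimal actions" cashes out to. The actions, outcomes, probabilities, and utility numbers below are all invented for illustration; the point is that the utility function carries the values, and the maximization machinery merely organizes them:

```python
# Minimal expected-utility sketch. All names and numbers below are
# invented for illustration: the utility function encodes *some* set
# of values, not any particular person's.

# Utility function: how much each possible outcome is valued.
utility = {
    "world_flourishes": 100.0,
    "status_quo": 0.0,
    "catastrophe": -1000.0,
}

# Beliefs: for each available action, a probability distribution
# over which outcome it produces.
beliefs = {
    "act_boldly":     {"world_flourishes": 0.50, "status_quo": 0.30, "catastrophe": 0.20},
    "act_cautiously": {"world_flourishes": 0.10, "status_quo": 0.88, "catastrophe": 0.02},
    "do_nothing":     {"status_quo": 1.00},
}

def expected_utility(action):
    """Sum of utility(outcome) weighted by P(outcome | action)."""
    return sum(p * utility[outcome] for outcome, p in beliefs[action].items())

# The "formally optimal action" is just the argmax of expected utility.
best = max(beliefs, key=expected_utility)
print(best, expected_utility(best))  # -> do_nothing 0.0
```

Nothing in the maximization step picks the numbers in `utility`; swap in a different utility function and the same machinery recommends different actions, which is exactly the sense in which utility doesn't imply a specific set of values.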
Or did I misunderstand your point completely?
The phrase "how can I get the world into the best possible state" is explicitly consequentialist. Non-consequentialists (e.g. "The end does not justify the means") do not admit that correct behavior is getting the world into the best possible state.
Non-utilitarians probably perceive suggestions to maximize utility, to maximize expected utility, and (in particular) to approximate those two as very dangerous and likely to lead to incorrect behavior.
The original poster implied that there is a difference between seeking to maximize utility ...