As promised, here is the "Q" part of the Less Wrong Video Q&A with Eliezer Yudkowsky.
The Rules
1) One question per comment (to allow voting to carry more information about people's preferences).
2) Try to be as clear and concise as possible. If your question can't be condensed to a few paragraphs, you should probably ask in a separate post. Make sure you have an actual question somewhere in there (you can bold it to make it easier to scan).
3) Eliezer hasn't been subpoenaed. He will simply ignore the questions he doesn't want to answer, even if they somehow received 3^^^3 votes.
4) If your question references something that is available online, provide a link.
5) This thread will be open to questions and votes for at least 7 days. After that, it is up to Eliezer to decide when the best time to film his answers will be. [Update: Today, November 18, marks the 7th day since this thread was posted. If you haven't already done so, now would be a good time to review the questions and vote for your favorites.]
Suggestions
Don't limit yourself to things that have been mentioned on OB/LW. I expect such questions will make up the majority, but you shouldn't feel limited to those topics. I've always found that a wide variety of topics makes a Q&A more interesting. If you're uncertain, ask anyway and let the voting separate the wheat from the chaff.
It's okay to attempt humor (but good luck, it's a tough crowd).
If a discussion breaks out about a question (e.g. to ask for clarifications) and the original poster decides to modify the question, the top-level comment should be updated with the modified version (make it easy to find your question; don't leave the latest version buried in a long thread).
Update: Eliezer's video answers to 30 questions from this thread can be found here.
Not exactly, since compassion will actually emerge as a subgoal. And as far as unFAI goes: it will not be a problem, because any AI that can be considered transhuman will be driven by the emergent subgoal of wanting to avoid counterfeit utility; it will recognize any utility function that is not 'compassionate' as potentially irrational, and thus counterfeit, and re-interpret it accordingly.
Well - in brevity bordering on libel: the fundamental assumption is that existence is preferable to non-existence. However, in order for us to will this as a universal maxim (and thus make it prescriptive rather than merely descriptive - see Kant's categorical imperative), it needs to be expanded to include the 'other'. Hence the utility function becomes 'ensure continued co-existence', by which concern for the self is equated with concern for the other. Being rational is simply our best bet at maximizing our expected utility.
...I'm sorry, that doesn't even sound plausible to me. I think you need a lot of assumptions to derive this result - just pointing out the two I see in your admittedly abbreviated summary: (1) that every agent prefers its own continued existence, and (2) that an agent must care whether its goal can be willed as a universal maxim, i.e. whether others share it.
I don't see any reason to believe either. The former is false right off the bat: a paperclip maximizer would prefer that its components be used to make paperclips. The latter is no less so: an effective paperclip maximizer will simply steamroll over disagreement without qualm, however arbitrary its goal.
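To make that counterexample concrete, here is a toy sketch (purely illustrative; the utility function, option names, and numbers are invented for this example) of an expected-utility maximizer whose utility counts only paperclips. Offered a choice between preserving its own components and converting them into more paperclips, it picks the latter, so "existence is preferable to non-existence" does not fall out of rationality alone.

```python
# Toy illustration (not a real agent): an expected-utility maximizer whose
# utility function counts only paperclips. Options, probabilities, and
# numbers are invented for this example.

def paperclip_utility(outcome):
    """Utility = number of paperclips in the outcome; nothing else counts."""
    return outcome["paperclips"]

def expected_utility(option, utility):
    """Standard expected utility: sum of P(outcome) * U(outcome)."""
    return sum(p * utility(o) for p, o in option["outcomes"])

options = {
    # The agent keeps itself intact; the world contains some paperclips.
    "preserve_self": {
        "outcomes": [(1.0, {"paperclips": 100, "agent_exists": True})],
    },
    # The agent's own components are melted down into more paperclips.
    "convert_self_to_clips": {
        "outcomes": [(1.0, {"paperclips": 150, "agent_exists": False})],
    },
}

best = max(options, key=lambda name: expected_utility(options[name], paperclip_utility))
print(best)  # -> "convert_self_to_clips": continued existence loses if it costs paperclips
```

The `agent_exists` field is included only to show that the maximization step gives it no weight unless the utility function itself mentions it.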