As promised, here is the "Q" part of the Less Wrong Video Q&A with Eliezer Yudkowsky.
The Rules
1) One question per comment (to allow voting to carry more information about people's preferences).
2) Try to be as clear and concise as possible. If your question can't be condensed to a few paragraphs, you should probably ask in a separate post. Make sure you have an actual question somewhere in there (you can bold it to make it easier to scan).
3) Eliezer hasn't been subpoenaed. He will simply ignore the questions he doesn't want to answer, even if they somehow received 3^^^3 votes.
4) If your question references something available online, provide a link.
5) This thread will be open to questions and votes for at least 7 days. After that, it is up to Eliezer to decide when the best time to film his answers will be. [Update: Today, November 18, marks the 7th day since this thread was posted. If you haven't already done so, now would be a good time to review the questions and vote for your favorites.]
Suggestions
Don't limit yourself to things that have been mentioned on OB/LW. I expect that this will be the majority of questions, but you shouldn't feel limited to these topics. I've always found that a wide variety of topics makes a Q&A more interesting. If you're uncertain, ask anyway and let the voting separate the wheat from the chaff.
It's okay to attempt humor (but good luck, it's a tough crowd).
If a discussion breaks out about a question (e.g. requests for clarification) and the original poster decides to modify the question, the top-level comment should be updated with the revised version (make your question easy to find; don't leave the latest version buried in a long thread).
Update: Eliezer's video answers to 30 questions from this thread can be found here.
There are many reasons, but here are a few that should be sufficient. First, it's much, much easier for a computer program to change its own program than it is for a human being to change theirs: a program, having been artificially designed, would be far more modular and self-comprehensible than the human brain and genome (independently of how much easier it is to change bits in memory than synapses in a brain), whereas a human's program is embedded in a brain that takes decades to mature and is a horrible mess of poorly understood, interdependent spaghetti code. Second, a computer program can safely and easily make perfect copies of itself for experimentation, and can try out different ideas on those copies. Third, a computer program can trivially scale up by adding more hardware (assuming it was designed to be parallelizable, which it would be).
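The copy-and-experiment point can be illustrated with a toy sketch. This is not meant to model real self-improvement; it just shows the mechanical advantage being claimed. Here a "program" is stood in for by a mutable parameter dictionary and a hypothetical `score` function (both invented for illustration): the program makes a perfect copy of itself, perturbs the copy, and keeps the copy only if it scores better, something a brain cannot do with its own synapses.

```python
import copy
import random

def score(params):
    # Hypothetical fitness function: closer to the target value is better.
    return -abs(params["x"] - 42)

def improve(params, steps=200, seed=0):
    rng = random.Random(seed)
    best = copy.deepcopy(params)
    for _ in range(steps):
        candidate = copy.deepcopy(best)       # perfect copy for experimentation
        candidate["x"] += rng.uniform(-1, 1)  # try a modification on the copy only
        if score(candidate) > score(best):    # keep the copy only if it improved
            best = candidate
    return best

result = improve({"x": 0.0})
```

The original "program" is never put at risk: every experiment runs on a disposable copy, and failed modifications are simply discarded.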
First, it's purely conjecture that a programmed system of near-human intelligence would be any simpler than a human brain. Even a highly complicated program such as a modern OS is practically incomprehensible to any single individual.
Second, there is no direct correlation between speed and intelligence. Just because a computer can scale up for more processing power doesn't mean it is any smarter, so it couldn't suddenly use this technique to recursively self-improve and "foom".
Third, making copies of itself is a non-trivial activity which amounts ...