As promised, here is the "Q" part of the Less Wrong Video Q&A with Eliezer Yudkowsky.
The Rules
1) One question per comment (to allow voting to carry more information about people's preferences).
2) Try to be as clear and concise as possible. If your question can't be condensed to a few paragraphs, you should probably ask in a separate post. Make sure you have an actual question somewhere in there (you can bold it to make it easier to scan).
3) Eliezer hasn't been subpoenaed. He will simply ignore the questions he doesn't want to answer, even if they somehow received 3^^^3 votes.
4) If your question references something that's online, provide a link.
5) This thread will be open to questions and votes for at least 7 days. After that, it will be up to Eliezer to decide on the best time to film his answers. [Update: Today, November 18, marks the 7th day since this thread was posted. If you haven't already done so, now would be a good time to review the questions and vote for your favorites.]
Suggestions
Don't limit yourself to things that have been mentioned on OB/LW. I expect these will account for the majority of questions, but you shouldn't feel limited to those topics. I've always found that a wide variety of topics makes a Q&A more interesting. If you're uncertain, ask anyway and let the voting separate the wheat from the chaff.
It's okay to attempt humor (but good luck, it's a tough crowd).
If a discussion breaks out about a question (e.g. a request for clarification) and the original poster decides to modify the question, the top-level comment should be updated with the modified version (make your question easy to find; don't leave the latest version buried in a long thread).
Update: Eliezer's video answers to 30 questions from this thread can be found here.
I disagree. Humans don't recursively self-improve? What about a master pianist? Do they start out as masters, or do they gradually improve their technique until they reach mastery? Humans show extreme capability in the act of learning, and this is precisely analogous to what you call "recursive self-improvement".
Of course the source code isn't known now, and I left that open in my statement. But even if the source code were completely understood, as I said before, that wouldn't imply we would understand (or could even figure out) how to make it fundamentally better. For example: given the Windows OS code, could you fundamentally improve it? (I'm sure this is a terrible example, so if you hate Windows, substitute your favorite OS instead.) It's not clear that you could, even given the code.
To be more precise, recursive self-improvement in humans (like learning how to learn more effectively) is limited to small improvements and few recursions. It is fundamentally different in nature from recursive self-improvement by an agent that had access to its source code, understood it, and could improve itself at the source-code level over a large number of recursions in relatively quick succession. The analogous kind of recursive self-improvement in humans would be if it were possible to improve your intelligence to a significant…
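To make that distinction concrete, here's a toy sketch (my own illustration; the growth rules, rates, and cap are arbitrary assumptions, not anything proposed in the thread). It contrasts practice whose per-round gains are small and bounded with a process where current ability feeds back into the size of the next improvement step:

```python
# Toy model contrasting two kinds of "self-improvement".
# Purely illustrative: the rates and cap are made-up numbers,
# not claims about real minds or real AI.

def human_style_learning(ability=1.0, rounds=5, gain=0.10, cap=2.0):
    """Each round of practice yields a small, bounded gain, and gains
    hit a ceiling: improvement does not feed back into the rate of
    improvement itself."""
    for _ in range(rounds):
        ability = min(cap, ability * (1 + gain))
    return ability

def source_level_rsi(ability=1.0, rounds=5, gain=0.10):
    """Each round, current ability is reinvested in making the *next*
    improvement step larger: improvement compounds through the
    improver itself."""
    for _ in range(rounds):
        ability = ability * (1 + gain * ability)
    return ability

if __name__ == "__main__":
    for rounds in (5, 10, 20):
        print(rounds,
              round(human_style_learning(rounds=rounds), 2),
              round(source_level_rsi(rounds=rounds), 2))
```

Running it for 5, 10, and 20 rounds shows the bounded learner plateauing near its cap while the compounding process diverges rapidly as the number of recursions grows — which is the shape of the difference the comment above is pointing at.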