As promised, here is the "Q" part of the Less Wrong Video Q&A with Eliezer Yudkowsky.
The Rules
1) One question per comment (to allow voting to carry more information about people's preferences).
2) Try to be as clear and concise as possible. If your question can't be condensed into a few paragraphs, you should probably ask it in a separate post. Make sure there is an actual question somewhere in there (you can bold it to make it easier to spot when scanning).
3) Eliezer hasn't been subpoenaed. He will simply ignore the questions he doesn't want to answer, even if they somehow received 3^^^3 votes.
4) If your question references something that is available online, provide a link.
5) This thread will be open to questions and votes for at least 7 days. After that, it is up to Eliezer to decide when best to film his answers. [Update: Today, November 18, marks the 7th day since this thread was posted. If you haven't already done so, now would be a good time to review the questions and vote for your favorites.]
Suggestions
Don't limit yourself to topics that have been discussed on OB/LW. I expect these will make up the majority of questions, but you shouldn't feel restricted to them. I've always found that a wide variety of topics makes a Q&A more interesting. If you're uncertain, ask anyway and let the voting separate the wheat from the chaff.
It's okay to attempt humor (but good luck, it's a tough crowd).
If a discussion breaks out about a question (e.g. to ask for clarifications) and the original poster decides to modify it, the top-level comment should be updated with the modified question (make your question easy to find; don't let the latest version get buried in a long thread).
Update: Eliezer's video answers to 30 questions from this thread can be found here.
I would ask the same question of other AGI organizations if I could, but this is a Q&A with Eliezer alone (though I'm also curious to know whether he knows anything about what other groups are doing in this regard).
Regardless of who gets to AGI first, that group could run into the kind of problems I mentioned. I never said this was the most probable failure mode, but it should be looked into seriously, since if it does happen, it could be catastrophic.
The way I see it, either AGI is developed in secret (Eliezer could be putting the finishing touches on the code right now without telling anyone), or it's developed fairly openly, with mathematical and algorithmic breakthroughs discussed at conferences, on the net, and in papers. In the latter case, big breakthroughs could attract the attention of powerful organizations. They could even attract AGI researchers who understand the breakthroughs well enough, but who know they're too far behind to catch up, and who conclude that their best route to getting there first is to convince an intelligence agency to steal the code. The specifics aren't important here; the general principle is the question of what to do about security as we get closer to full AGI.
Yes, industrial espionage is a well-known phenomenon. Look at all the security Apple uses to keep its prototypes hidden. People have died for Apple's secrets:
http://www.taranfx.com/blog/iphone-4g-secret-prototype-leads-to-a-death