On October 29th, I asked Eliezer and the LW community if they were interested in doing a video Q&A. Eliezer agreed and a majority of commenters were in favor of the idea, so on November 11th, I created a thread where LWers could submit questions. Dozens of questions were asked, generating a total of over 650 comments. The questions were then ranked using the LW voting system.
On December 11th, Eliezer filmed his replies to the top questions (skipping some), and sent me the videos on December 22nd. Because voting continued after that date, the order of the top questions in the original thread has changed a bit, but you can find the original question for each video (and the discussion it generated, if any) by following the links below.
Thanks to Eliezer and everybody who participated.
Update: If you prefer to download the videos, they are available here (800 MB, .wmv format; sort the files by 'date created').
Eliezer Yudkowsky - Less Wrong Q&A (5/30) from MikeGR on Vimeo.
(Video #5 is on Vimeo because YouTube doesn't accept videos longer than 10 minutes, which I only found out after uploading about a dozen. I would gladly have put them all on Vimeo, but there's a 500 MB/week upload limit and these videos add up to over 800 MB.)
If anything is wrong with the videos or links, let me know in the comments or via private message.
I wonder why Eliezer doesn't want to say anything concrete about his work with Marcello? ("Most of the real progress that has been made when I sit down and actually work on the problem is things I'd rather not talk about")
There seem to be only two plausible reasons: (1) revealing the work could be harmful, or (2) the work isn't in a state he wants to make public.
Regarding 1: wouldn't it be a good thing if someone else stole his work and finished a provably friendly AI first? To an altruist like him, losing the chance to do it himself should matter far less than the fate of the future intergalactic civilization. Maybe his work on provable friendliness would reveal ideas about AI design that could be used to produce an unfriendly AI? But even then, those ideas would probably only help AI researchers who work on transparent designs, are aware of the friendliness problem, and take friendliness seriously enough to mine the work of its main proponent for useful ideas. Wouldn't giving these people a relative advantage over, e.g., connectionists be a good thing? Unless he thinks that AGI would then suddenly be very close while FAI is still far away... Or maybe he thinks a partial solution to the friendliness problem would make people overconfident and less cautious than they would otherwise be?
Regarding 2: the work so far might be very unimpressive, reveal embarrassing facts about a previous state of knowledge, or be subject to change, with a publicly apparent change of opinion deemed disadvantageous. Or maybe Eliezer fears that publicly revealing some things would psychologically commit him to them in ways that would be counterproductive?
All FAIs are AGIs; most of the FAI problem consists of solving the AGI problem in particular ways.