Carl, Robin's response to this post was a critical comment about the proposed content of Eliezer's AI's motivational system. I assumed he had a reason for making the comment; my bad.
Oh, and Friendliness theory (to the extent it can be separated from specific AI architecture details) is like the doomsday device in Dr. Strangelove: it doesn't do any good if you keep it secret! [in this case, unless Eliezer is supremely confident he can program the AI himself first]
Regarding the 2004 comment, AGI Researcher was probably referring to the Coherent Extrapolated Volition document, which Eliezer marked as slightly obsolete in 2004; not a word has been said since about any progress in the theory of Friendliness.
Robin, if you grant that a "hard takeoff" is possible, that leads to the conclusion that it will eventually be likely (humans being curious and inventive creatures). This AI would "rule the world" in the sense of having the power to do what it wants. Now, suppose you get to pick what it wants (and program that in). What would you pick? I can see arguing with the feasibility of hard takeoff (I don't buy it myself), but if you accept that step, Eliezer's intentions seem correct.
When Robin wrote, "It is easy, way too easy, to generate new mechanisms, accounts, theories, and abstractions," he got it exactly right (granted, it is not necessarily so easy to make good ones, but that isn't really the point).
This should have been clear from the sequence on the "timeless universe" -- just as that interesting abstraction is not going to convince more than a few credulous fans of its truth, the magical super-FOOM is not going to convince anybody without more substantial support than an appeal to a very specific way of looking at "things in general", which few are going to share.
On a historical time frame, we can grant pretty much everything you suppose and still be left with a FOOM that "takes" a century (a mere eyeblink in comparison to everything else in history). If you want to frighten us sufficiently about a FOOM of shorter duration, you're going to have to get your hands dirtier and move from abstractions to specifics.
The issue, of course, is not whether AI is a game-changer. The issue is whether it will be a game-changer soon and suddenly. I have been looking forward to somebody explaining why this is likely, so I've got my popcorn popped and my box of wine in the fridge.
Perhaps Eliezer goes to too many cocktail parties:
X: "Do you build neural networks or expert systems?" E: "I don't build anything. Mostly I whine about people who do." X: "Hmm. Does that pay well?"
Perhaps Bayesian Networks are the hot new delicious lemon glazing. Of course they have been around for 23 years.
Silas: you might find this paper of some interest:
burger flipper, making one decision that increases your average statistical lifespan (signing up for cryonics) does not compel you to trade off every other joy of living in favor of further increases. And if the hospital or government or whoever can't be bothered to wait for my organs until I am done with them, that's their problem, not mine.