Basically this: "Eliezer Yudkowsky writes and pretends he's an AI researcher but probably hasn't written so much as an Eliza bot."
While the Eliezer S. Yudkowsky site has plenty of popular-science articles, and his work on rationality is of indisputable value, I find myself at a loss when I want to respond to this, which frustrates me very much.
So, to avoid this sort of situation in the future, I have to ask: what has the man, Eliezer S. Yudkowsky, actually accomplished in his own field?
Please don't downvote the hell out of me, I'm just trying to create a future reference for this sort of annoyance.
My understanding is that Tim thinks de novo AI is very probably very near, leaving little time for brain emulation; that far more resources will go into de novo AI; or that incremental insights into the brain will enable AI before emulation becomes feasible.
On the other hand, FHI folk are less confident that AI theory will cover all the necessary bases in the next couple of decades, while neuroimaging continues to advance apace. If neuroimaging at the relevant cost and resolution arrives quickly while AI theory moves slowly, then translating insights from brain imaging into computer science may take longer than simply running an emulation.