I blew through all of MoR in about 48 hours, and in an attempt to learn more about the science and philosophy that Harry espouses, I've been reading the sequences and Eliezer's posts on Less Wrong. Eliezer has written extensively about AI, rationality, quantum physics, singularity research, etc. I have a question: how correct has he been? Has his interpretation of quantum physics predicted any subsequently-observed phenomena? Has his understanding of cognitive science and technology allowed him to successfully anticipate the progress of AI research, or has he made any significant advances himself? Is he on the record predicting anything, either right or wrong?
Why is this important? When I read something written by Paul Krugman, I know that he has a Nobel Prize in economics, and I know that he has the best track record of any top pundit in the US in terms of making accurate predictions. Meanwhile, I know that Thomas Friedman is an idiot. Based on these track records, I believe things written by Krugman much more than I believe things written by Friedman. But if I hadn't read Friedman's writing from 2002-2006, then I wouldn't know how terribly wrong he has been, and I would be too credulous about his claims.
Similarly, reading Mike Darwin's predictions about the future of medicine was very enlightening. He was wrong about nearly everything. So now I know to distrust claims that he makes about the pace or extent of subsequent medical research.
Has Eliezer offered anything falsifiable, or put his reputation on the line in any way? "If X and Y don't happen by Z, then I have vastly overestimated the pace of AI research, or I don't understand quantum physics as well as I think I do," and so on.
My understanding is that, for the most part, SI prefers not to publish the results of their AI research, for reasons akin to those discussed here. However, they have published on decision theory, presumably because it seems safer than publishing on other topics and because they're interested in attracting people with technical chops to work on FAI:
http://singinst.org/blog/2010/11/12/timeless-decision-theory-paper-released/
I would guess EY sees himself as more of a researcher than a forecaster, so you shouldn't be surprised if he doesn't make as many predictions as Paul Krugman does.
Also, here's a quote from his paper on cognitive biases affecting judgment of global risks:
So he wasn't born a rationalist. (I've been critical of him in the past, but I give him a lot of credit for realizing the importance of cognitive biases to what he was doing and for popularizing them to such a wide audience.) My understanding is that one of the primary purposes of the sequences was to get people to realize the importance of cognitive biases at a younger age than he did.
Obviously I don't speak for SI or Eliezer, so take this with a grain of salt.
OK. If that is the case, then I think a fair question to ask is: what have his major research achievements been?
Secondly, a lot of the discussion on LW and most of EY's research presupposes certain things happening in the future. If AI is actually impossible, then trying to design a friendly AI is a waste of time (alternatively, if AI won't be developed for 10,000 years, then developing a friendly AI is not an urgent matter). What evidence can EY offer that he's not wasting his time, to put it bluntly?