What I wrote was "it's not that hard a problem, really, for one with (list of qualifications most people don't have)," which is importantly different from what you quote.
I extend my denial to the full list. I do not believe Eliezer has made the claim you allege, even with the list of qualifications. It would be a plainly wrong claim, and I believe your recollection is mistaken.
The flip side is that if Eliezer has actually claimed that it isn't a hard problem (with the list of qualifications) then I assert that said claim significantly undermines Eliezer's credibility in my eyes.
OK, cool.
Do you also still maintain that, if he thought it wasn't a hard problem for people with the right qualifications, he wouldn't be comfortable dismissing particular AI researchers as mostly harmless?
I intended Leveling Up in Rationality to communicate this:
But some people seem to have read it and heard this instead:
This failure (on my part) fits into a larger pattern of the Singularity Institute seeming too arrogant and (perhaps) being too arrogant. As one friend recently told me:
So, I have a few questions: