I have a sense you misunderstood me. I picture a parallel world where legitimate, rational inferences about AI risk exist, where that risk is worth working on in 2013 and stands out among the other risks, and where every other prerequisite for making MIRI worthwhile holds. And in that imaginary world, I would expect massively larger support than "Stephen Hawking hooked up with FHI" or whatever you are outlining here.
Right, this just goes back to the same disagreement between our models that I was trying to address earlier by making predictions. Let me try something else, then. Here are some relevant parts of my model:
Luke, why are you arguing with Dmytry?
Another month has passed and here is a new rationality quotes thread. The usual rules are: