What would you tell the first climate scientist to examine global warming, or the first to predict asteroid strikes, other than "do more research, and get others to do research as well"?
I have no problem with a billion dollars spent on friendly AI research. But that doesn't mean that I agree that the SIAI needs a billion dollars right now, or that I agree that the current evidence is enough to tell people to stop researching cancer therapies or creating educational videos about basic algebra. I don't think we know enough about risks from AI to justify such advice. I also don't think that we should all become expected utility maximizers, because we don't know enough about economics, game theory, and decision theory, and especially about human nature and the nature of discovery.
Why do I feel like there's massively more evidence than "a few blog posts"?
Maybe because there is massively more evidence and I don't know about it, don't understand it, or haven't taken it into account, or because I am simply biased. I am not saying that you are wrong and I am right.
...those on human history, and lumping it all under "what intelligent agents can accomplish".
Shortly after human flight was invented we reached the moon. Yet human flight is not as sophisticated as bird or insect flight: it is much less efficient, and we never reached other stars. Therefore, what I get out of this is that shortly after we invent artificial general intelligence we might reach human-level intelligence, and in some areas superhuman intelligence. But that doesn't mean that it will be particularly fast or efficient, or that it will be able to take over the world shortly afterwards. Artificial general intelligence is already an inference made from what we currently believe to be true; going a step further and drawing further inferences from previous speculations, e.g. explosive recursive self-improvement, is in my opinion a very shaky business. We have no idea about the nature of discovery, or whether intelligence (whatever that is) is even instrumentally useful or quickly hits diminishing returns.
In principle we could build antimatter weapons capable of destroying worlds, but in practice that is much harder to accomplish. The same seems to be the case for intelligence. It is not intelligence in and of itself that allows humans to accomplish great feats. Someone like Einstein was lucky to be born into the right circumstances; the time was ripe for great discoveries.
Another large part of being convinced falls under a lack of counterarguments - rather, there are plenty out there, just none that seem to have put thought into the matter.
Prediction: The world is going to end.
Got any counterarguments I couldn't easily dismiss?
Most of the superficially disjunctive lines of reasoning about risks from AI derive their appeal from their inherent vagueness. You need quite a few assumptions to be true to get "artificial general intelligence that can undergo explosive recursive self-improvement to turn all matter in the universe into paperclips". That is actually a pretty complex prediction.
There are various different scenarios regarding the possibility and consequences of artificial general intelligence. I just don't see why the one put forth by the SIAI is more likely to be true than others. Why, for example, would intelligence be a single principle that, once discovered, allows us to grow superhuman intelligence overnight? Why are we going to invent artificial general intelligence quickly, rather than having to painstakingly optimize our expert systems over many centuries? Why would intelligence be effectively applicable to intelligence itself, rather than demanding the discovery of unknown unknowns due to sheer luck or the pursuit of treatments for rare diseases in cute kittens? Why would general intelligence be at all efficient compared to expert systems? Maybe general intelligence demands a tradeoff between plasticity and goal-stability. I can think of dozens of possibilities within minutes, none of them leading to existential risk scenarios.
Shortly after human flight was invented we reached the moon. Yet human flight is not as sophisticated as bird or insect flight, it is much more inefficient, and we never reached other stars.
How do you mean? Human planes are faster and can transport freight better. They can even self-pilot with modern AI software. The biggest weaknesses would seem to be a lack of self-reproduction and self-repair, but those aren't really part of flight.
Link: overcomingbias.com/2011/07/debating-yudkowsky.html