Here's why I'm not going to give money to the SIAI any time soon.
Let's suppose that Friendly AI is possible. In other words, suppose a small group of humans can build a superhuman AI that uses something like Coherent Extrapolated Volition (CEV) to increase the happiness of humans in general (without resorting to skeevy hacks like releasing an orgasm virus).
Now, the extrapolated volition of all humans is probably a tricky thing to determine. I don't want to get sidetracked into writing about my relationship history, but sometimes I feel like it's hard to extrapolate the volition of one human.
If it's possible to make a Friendly superhuman AI that optimises CEV, then it's surely way easier to make an unFriendly superhuman AI that optimises a much simpler variable, like the share price of IBM.
Long before a Friendly AI is developed, some research team is going to be in a position to deploy an unFriendly AI that maximises the researchers' personal wealth, or the share price of the corporation that employs them, or some other goal that the rest of humanity might not like.
And who's going to stop that happening? If the executives of Corporation X are in a position to unleash an AI with a monomaniacal dedication to maximising the Corp's shareholder value, their fiduciary duty to shareholders makes it arguably illegal for them not to do just that.
If you genuinely believe that superhuman AI is possible, it seems to me that, as well as sponsoring efforts to design Friendly AI, you need to (a) lobby against AI research by any group that isn't 100% committed to Friendly AI (pay off reactionary politicians so that AI regulation becomes a campaign issue, etc.), and (b) assassinate any researchers who look like they're on track to deploying an unFriendly AI, then destroy their labs and backups.
But SIAI seems to be fixated on design at the expense of the other, equally important priorities. I'm not saying I expect SIAI to pursue illegal goals openly, but there is such a thing as a false-flag operation.
As long as Michele Bachmann isn't talking about how AI research is a threat to the US constitution, and Ben Goertzel remains free and alive, I can't take the SIAI seriously.
For purely pragmatic reasons, though, peaceful methods would still be preferable to violent ones...
Why Terrorism Does Not Work (Max Abrahms, International Security, 2006)
This is the first article to analyze a large sample of terrorist groups in terms of their policy effectiveness. It includes every foreign terrorist organization (FTO) designated by the U.S. Department of State since 2001. The key variable for FTO success is a tactical one: target selection. Terrorist groups whose attacks on civilian targets outnumber attacks on military targets do not tend to achieve their policy objectives, regardless of their nature. Contrary to the prevailing view that terrorism is an effective means of political coercion, the universe of cases suggests that, first, contemporary terrorist groups rarely achieve their policy objectives and, second, the poor success rate is inherent to the tactic of terrorism itself. The bulk of the article develops a theory for why countries are reluctant to make policy concessions when their civilian populations are the primary target.