Here's why I'm not going to give money to the SIAI any time soon.
Let's suppose that Friendly AI is possible. In other words, it's possible that a small subset of humans can make a superhuman AI which uses something like Coherent Extrapolated Volition (CEV) to increase the happiness of humans in general (without resorting to skeevy hacks like releasing an orgasm virus).
Now, the extrapolated volition of all humans is probably a tricky thing to determine. I don't want to get sidetracked into writing about my relationship history, but sometimes I feel like it's hard to extrapolate the volition of one human.
If it's possible to make a Friendly superhuman AI that optimises CEV, then it's surely way easier to make an unFriendly superhuman AI that optimises a much simpler variable, like the share price of IBM.
Long before a Friendly AI is developed, some research team is going to be in a position to deploy an unFriendly AI that tries to maximise the personal wealth of the researchers, or the share price of the corporation that employs them, or pursues some other goal that the rest of humanity might not like.
And who's going to stop that happening? If the executives of Corporation X are in a position to unleash an AI with a monomaniacal dedication to maximising the Corp's shareholder value, their fiduciary duty to shareholders arguably makes it illegal for them not to do just that.
If you genuinely believe that superhuman AI is possible, it seems to me that, as well as sponsoring efforts to design Friendly AI, you need to (a) lobby against AI research by any group that isn't 100% committed to Friendly AI (pay off reactionary politicians so that AI regulation becomes a campaign issue, etc.), and (b) assassinate any researchers who look like they're on track to deploying an unFriendly AI, then destroy their labs and backups.
But SIAI seems to be fixated on design at the expense of the other, equally important priorities. I'm not saying I expect SIAI to pursue illegal goals openly, but there is such a thing as a false-flag operation.
So long as Michele Bachmann isn't talking about how AI research is a threat to the US Constitution, and Ben Goertzel remains free and alive, I can't take the SIAI seriously.
Not really. "Maximize the utility of this one guy" isn't much easier than "Maximize the utility of all humanity" when the real problem is defining "maximize utility" in a stable way. If it were, you could create a decent (though probably not recommended) approximation to the Friendly AI problem just by saying "Maximize the utility of this one guy here who's clearly very nice and wants what's best for humanity."
There are some serious problems with building something that takes interpersonal conflicts into account in a reasonable way, but that's not where the majority of the difficulty lies.
I'd even go so far as to say that if someone built a successful IBM-CEO-utility-maximizer it'd be a net win for humanity, compared to our current prospects. With absolute power there's not a lot of incentive to be an especially malevolent dictator (see Moldbug's Fnargl thought experiment for something similar), and in a post-scarcity world there'd be more than enough for everyone, including IBM executives. It'd be sub-optimal, but compared to Unfriendly AI? Piece of cake.
If somebody were going to build an IBM-profit AI (of the sort of godlike AI that people here talk about), it would almost certainly end up doubling as the IBM CEO Charity Foundation AI.