The ambition to create an AI seems to be becoming more common.
To the extent that creating an AI is made easier by having more resources rather than by having more carefully thought-out philosophy, the first AI will be created by a government or a business, not SIAI. I think resources are the way to bet, but I'm open to argument.
If this is correct, the best strategy for Friendliness may be to keep working on the philosophy without expecting to write the code, while publicizing the risks of Unfriendliness both seriously and humorously.
The latter is based on something Scott Adams said (for what that's worth): no one ever realizes they're the pointy-haired boss, but if anyone says "that plan sounds like something out of Dilbert", the plan is immediately taken out of consideration.
The good news, such as it is, is that the mistakes likely to be made by corporations and governments can be presented as funnier (or at least more entertaining to people who already dislike those institutions) than the mistakes likely to be made by people unthinkingly trying to create utopia.
ETA: It's conceivable that a large organization could have SIAI folks heading its AI project, but this doesn't seem likely.
It is amazing how much difference words like 'just' and 'a few' can make! This is an extremely hard problem. All sorts of other skills are required, but those skills are commodities: they already exist, people have them, and you can buy them.
What is required to solve something like making AIs that remain stable when upgrading themselves is extremely intelligent individuals studying the best work in several related fields full-time for 15 years... and then having 'a few insights'.
I think XiXiDu's point is that the theory and implementation cannot be cleanly divorced. You may need to be constantly programming and trying out the ideas your theory spits out in order to guide and shape the theory into its final, correct form. We can't necessarily wait until the theory is fully developed and then buy the skills needed to implement it.