I think the reason organizations haven't gone 'FOOM' is the lack of a successful "goal-focused self-improvement method." There is no known way to build an organization that does not suffer from goal drift and progressive degradation of performance. Humans have not even figured out how to build "goals" into an organization's structure except in the crudest manner, which is nowhere near flexible enough to survive the assaults of modern environmental change. Nor do I think the sparse inter-linkages of real organizations can store or process such information without outsourcing a significant part of it to human-scale processing, so an organization couldn't even have stumbled upon such a method by chance.
In theory there is no reason why a computational device built out of humans can't go FOOM. In practice, building a system that runs on humans is simply harder: the substrate is extremely noisy, it is slow to change ('education' is slow), and countless experimental constraints exist with no robust engineering solutions. Management isn't even a full science at this point. The selection power of existing theory still leaves open a vast space of unfocused exploration, and only a tiny, unknown subset of that space can go FOOM. Imagine the space of all valid training manuals, organizational structures, physical aid assets, recruitment policies, and so on, and consider how little we know about finding the combination that FOOMs.
AGI running on electronic computers is a bigger threat than other recursive intelligence-improvement problems because the engineering barriers are lower and the rate of progress is higher. Most other recursive self-improvement strategies take place at "human" time scales and do not leave humans completely helpless.
If I understand the Singularitarian argument espoused by many members of this community (e.g. Muehlhauser and Salamon), it goes something like this:
I'm in danger of getting into politics. Since I understand that political arguments are not welcome here, I will refer to these potentially unfriendly human intelligences broadly as organizations.
Smart organizations
By "organization" I mean something commonplace, with a twist. It's commonplace because I'm talking about a bunch of people coordinated somehow. The twist is that I want to include the information technology infrastructure used by that bunch of people within the extension of "organization".
Do organizations have intelligence? I think so. Here are some of the reasons why:
I talked with Mr. Muehlhauser about this specifically. I gather that, at least at the time, he thought human organizations should not be counted as intelligences (or at least not as intelligences with the potential to become superintelligences) because they are not as versatile as human beings.
...and then...
I think that Muehlhauser is slightly mistaken on a few subtle but important points. I'm going to assert my position on them without much argument because I think it is fairly sensible, but if any reader disagrees I will try to defend it in the comments.
Mean organizations
* My preferred standard of rationality is communicative rationality, a Habermasian ideal of a rationality aimed at consensus through principled communication. As a consequence, when I believe a position to be rational, I believe that it is possible and desirable to convince other rational agents of it.