Organizations recursively self-improve all the time, but there has so far been an upper bound on how much organizations have managed to improve, and that upper bound is catastrophic.
At any given time, there will be some finite upper bound on how much existing agents have managed to improve so far.
Google has managed to improve quite a bit since the chimpanzee-like era, and it hasn't stopped yet. Evidently the "upper bound" is a long, long way above the starting point - and not very "catastrophic".
True. My point was that if it were easy for an organization to become much more powerful than it is now, and the organization were motivated to do so, then it would already be much more powerful than it is now. So we should not expect a sudden increase in organizations' self-improvement abilities unless we can identify a good reason why one is particularly likely. The increased ease of self-modification that comes with being completely digital is such a reason, but since organizations are not completely digital, it does not give organizations a way to suddenly increase their rate of self-improvement unless we can upload an organization.
If I understand the Singularitarian argument espoused by many members of this community (e.g. Muehlhauser and Salamon), it goes something like this:
I'm in danger of getting into politics. Since I understand that political arguments are not welcome here, I will refer to these potentially unfriendly human intelligences broadly as organizations.
Smart organizations
By "organization" I mean something commonplace, with a twist. It's commonplace because I'm talking about a bunch of people coordinated somehow. The twist is that I want to include the information technology infrastructure used by that bunch of people within the extension of "organization".
Do organizations have intelligence? I think so. Here are some of the reasons why:
I talked with Mr. Muehlhauser about this specifically. I gather that, at least at the time, he thought human organizations should not be counted as intelligences (or at least as intelligences with the potential to become superintelligences) because they are not as versatile as human beings.
...and then...
I think that Muehlhauser is mistaken on a few subtle but important points. I'm going to assert my position on them without much argument, because I think they are fairly sensible, but if any reader disagrees I will try to defend them in the comments.
Mean organizations
* My preferred standard of rationality is communicative rationality, a Habermasian ideal of a rationality aimed at consensus through principled communication. As a consequence, when I believe a position to be rational, I believe that it is possible and desirable to convince other rational agents of it.