Yes, but Git has a bottleneck: there are humans in the loop, and there are no plans to remove or significantly modify those humans. By "in the loop", I mean humans are modifying Git, while Git is not modifying humans or itself.
I think I see what you mean, but I disagree.
First, I think timtyler makes a great point.
Second, the level of abstraction I'm talking about is that of the total organization. So, does the organization modify its human components, as it modifies its software component?
I'd say: yes. Suppose Git adds a new feature. Then the human components need to communicate with each other about that new feature and train themselves on it. Somebody in the community needs to self-modify to maintain mastery of that piece of the code base.
More generally, humans within organizations self-modify using communication and training.
At this very moment, by participating in the LessWrong organization centered on this bulletin board, I am participating in an organizational self-modification of LessWrong's human components.
The bottlenecks that have been pointed out to me so far are those related to wetware as a computing platform. But since AGI, as far as I can tell, can't directly change its hardware through recursive self-modification either, I don't see how that bottleneck puts AGI at an immediate, FOOMy advantage.
This seems to be quite similar to Robin Hanson's Ubertool argument.
> More generally, humans within organizations self-modify using communication and training.
>
> The bottlenecks that have been pointed out to me so far are those related to wetware as a computing platform. But since AGI, as far as I can tell, can't directly change its hardware through recursive self-modification either, I don't see how that bottleneck puts AGI at an immediate, FOOMy advantage.
The problems with wetware are not that it's hard to change the hardware -- it's that there is very litt...
If I understand the Singularitarian argument espoused by many members of this community (e.g. Muehlhauser and Salamon), it goes something like this:
I'm in danger of getting into politics. Since I understand that political arguments are not welcome here, I will refer to these potentially unfriendly human intelligences broadly as organizations.
Smart organizations
By "organization" I mean something commonplace, with a twist. It's commonplace because I'm talking about a bunch of people coordinated somehow. The twist is that I want to include the information technology infrastructure used by that bunch of people within the extension of "organization".
Do organizations have intelligence? I think so. Here are some of the reasons why:
I talked with Mr. Muehlhauser about this specifically. I gather that, at least at the time, he thought human organizations should not be counted as intelligences (or at least as intelligences with the potential to become superintelligences) because they are not as versatile as human beings.
...and then...
I think that Muehlhauser is slightly mistaken on a few subtle but important points. I'm going to assert my position on them without much argument because I think they are fairly sensible, but if any reader disagrees I will try to defend them in the comments.
Mean organizations
* My preferred standard of rationality is communicative rationality, a Habermasian ideal of a rationality aimed at consensus through principled communication. As a consequence, when I believe a position to be rational, I believe that it is possible and desirable to convince other rational agents of it.