> They can't use one improvement to fuel another; they would have to come up with the next one independently.
I disagree.
Suppose an organization has developers who work in-house on its own issue tracking system (several organizations do this, mostly software companies).
An issue tracking system is essentially a way for an organization to manage information flow about bugs, features, and patches to its own software. The issue tracker, as a running application, coordinates information flow between developers and the source code itself (sometimes its own source code).
Taken as a whole, the developers, issue tracker implementation, and issue tracker source code are part of the distributed cognition of the organization.
I think that in this case, an organization's self-improvement to the issue tracker source code recursively 'fuels' other improvements to the organization's cognition.
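To make the self-reference concrete, here is a minimal sketch in Python of a tracker whose issues can point back at the tracker's own source file. All names here (`Issue`, `IssueTracker`, `tracker.py`) are hypothetical illustrations, not any real system's API:

```python
# Minimal sketch of a self-referential issue tracker.
# Hypothetical and illustrative only -- not modeled on any real tracker.

from dataclasses import dataclass, field


@dataclass
class Issue:
    title: str
    # Path to the source file the issue concerns; this may be a file
    # belonging to the tracker itself.
    source_path: str
    resolved: bool = False


@dataclass
class IssueTracker:
    # The tracker's own implementation file, so issues can point back
    # at the tracker's own source code.
    own_source: str = "tracker.py"
    issues: list[Issue] = field(default_factory=list)

    def file(self, title: str, source_path: str) -> Issue:
        issue = Issue(title, source_path)
        self.issues.append(issue)
        return issue

    def self_referential(self) -> list[Issue]:
        """Issues filed against the tracker's own source code."""
        return [i for i in self.issues if i.source_path == self.own_source]


tracker = IssueTracker()
tracker.file("Login page 500s on empty password", "auth/views.py")
tracker.file("Search in the tracker is slow", "tracker.py")

# The second issue is the organization coordinating an improvement
# to part of its own cognitive machinery.
print([i.title for i in tracker.self_referential()])
```

The second issue is the interesting case: the organization is using its tracker to improve the tracker, which is the kind of loop I mean by 'fuels'.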
The point isn't that an AGI has or does not have certain skills. It's that it has the ability to learn those skills. Deep Blue doesn't have the capacity to learn anything other than playing chess, while humans, despite never running into a flute in the ancestral environment, can learn to play the flute.
Fair enough. But then we should hold organizations to the same standard. Suppose, for whatever reason, an organization needs better-than-median-human flute-playing for some purpose. What then?
Then they hire a skilled flute-player, right?
I think we may be arguing over an issue of semantics. I agree with you substantively that general intelligence is about adaptability, gaining and losing skills as needed.
My point in the OP was that organizations and the hypothetical AGI have comparable kinds of intelligence, so we can think about them as comparable superintelligences.
> I think that in this case, an organization's self-improvement to the issue tracker source code recursively 'fuels' other improvements to the organization's cognition.
Yes, it can fuel improvement. But not to the same level that an AGI that is foom-ing would. See this thread for details: http://lesswrong.com/lw/g3m/intelligence_explosion_in_organizations_or_why_im/85zw
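For what it's worth, the difference the linked thread points at can be phrased as a difference in growth curves. Here is a toy Python model (my own illustration; the function names and parameters are made up) contrasting improvements that arrive independently with improvements that fuel one another:

```python
# Toy growth model contrasting the two positions in this thread.
# Illustrative only; the parameters are arbitrary.

def independent_improvements(steps: int, gain: float = 1.0) -> float:
    """Each improvement adds a fixed amount: linear growth."""
    capability = 1.0
    for _ in range(steps):
        capability += gain
    return capability


def compounding_improvements(steps: int, rate: float = 0.5) -> float:
    """Each improvement scales with current capability: exponential growth."""
    capability = 1.0
    for _ in range(steps):
        capability += rate * capability  # each improvement fuels the next
    return capability


for n in (5, 10, 20):
    print(n, independent_improvements(n), compounding_improvements(n))
```

Both curves go up, but only the compounding one explodes; the disagreement is over which regime organizational self-improvement actually sits in.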
> ...I think we may be arguing over an issue of semantics. I agree with you substantively that general intelligence is about adaptability, gaining and losing skills as needed. My point in the OP was that organizations and the hypothetical AGI have comparable kinds of intelligence, so we can think about them as comparable superintelligences.
If I understand the Singularitarian argument espoused by many members of this community (e.g. Muehlhauser and Salamon), it goes something like this:
I'm in danger of getting into politics. Since I understand that political arguments are not welcome here, I will refer to these potentially unfriendly human intelligences broadly as organizations.
Smart organizations
By "organization" I mean something commonplace, with a twist. It's commonplace because I'm talking about a bunch of people coordinated somehow. The twist is that I want to include the information technology infrastructure used by that bunch of people within the extension of "organization".
Do organizations have intelligence? I think so. Here are some of the reasons why:
I talked with Mr. Muehlhauser about this specifically. I gather that at least at the time he thought human organizations should not be counted as intelligences (or at least as intelligences with the potential to become superintelligences) because they are not as versatile as human beings.
...and then...
I think that Muehlhauser is slightly mistaken on a few subtle but important points. I'm going to assert my position on them without much argument because I think they are fairly sensible, but if any reader disagrees I will try to defend them in the comments.
Mean organizations
* My preferred standard of rationality is communicative rationality, a Habermasian ideal of rationality aimed at consensus through principled communication. As a consequence, when I believe a position to be rational, I believe that it is possible and desirable to convince other rational agents of it.