If I understand the Singularitarian argument espoused by many members of this community (e.g., Muehlhauser and Salamon), it goes something like this:
- Machine intelligence is getting smarter.
- Once an intelligence becomes sufficiently supra-human, its instrumental rationality will drive it toward cognitive self-enhancement (Bostrom), making it a super-powerful, resource-hungry superintelligence.
- If a superintelligence isn't sufficiently human-like or 'friendly', that could be disastrous for humanity.
- Machine intelligence is unlikely to be human-like or friendly unless we take precautions.
I'm in danger of getting into politics. Since I understand that political arguments are not welcome here, I will refer to these potentially unfriendly human intelligences broadly as organizations.
Smart organizations
By "organization" I mean something commonplace, with a twist. It's commonplace because I'm talking about a bunch of people coordinated somehow. The twist is that I want to include the information technology infrastructure used by that bunch of people within the extension of "organization".
Do organizations have intelligence? I think so. Here are some of the reasons why:
- We can model human organizations as having preference functions. (Economists do this all the time.)
- Human organizations have a lot of optimization power.
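The first bullet can be made concrete with a minimal sketch. This is not how any economist actually models a specific firm; the action names, payoffs, and probabilities below are invented purely for illustration. The idea is just that an organization can be treated as an agent with a preference (utility) function over outcomes, choosing whichever action maximizes expected utility:

```python
def org_utility(profit, reputation):
    """Toy preference function: the organization trades off
    profit against reputation at a fixed rate."""
    return profit + 0.5 * reputation

# Each candidate action is a lottery over outcomes:
# a list of (probability, outcome) pairs. All numbers are hypothetical.
actions = {
    "expand_market": [
        (0.6, {"profit": 10, "reputation": -2}),
        (0.4, {"profit": -5, "reputation": -2}),
    ],
    "improve_product": [
        (0.9, {"profit": 3, "reputation": 4}),
        (0.1, {"profit": 0, "reputation": 4}),
    ],
}

def expected_utility(outcomes):
    """Probability-weighted average of the organization's utility."""
    return sum(p * org_utility(o["profit"], o["reputation"])
               for p, o in outcomes)

# The "rational organization" picks the action with the highest
# expected utility.
best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)  # -> improve_product
```

Nothing here requires the agent to be a single biological brain; the same formalism applies whether the optimizer is a person, a firm, or a machine, which is the point of the bullet above.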
I talked with Mr. Muehlhauser about this specifically. I gather that at least at the time he thought human organizations should not be counted as intelligences (or at least as intelligences with the potential to become superintelligences) because they are not as versatile as human beings.
So when I am talking about super-human intelligence, I specifically mean an agent that is as good as or better than humans at just about every skill set that humans possess for achieving their goals. So that would include not just things like mathematical ability or theorem proving and playing chess, but also things like social manipulation and composing music and so on, which are all functions of the brain, not the kidneys.
...and then...
It would be a kind of weird [organization] that was better than the best human or even the median human at all the things that humans do. [Organizations] aren't usually the best in music and AI research and theorem proving and stock markets and composing novels. And so there certainly are [organizations] that are better than median humans at certain things, like digging oil wells, but I don't think there are [organizations] as good or better than humans at all things. More to the point, there is an interesting difference here because [organizations] are made of lots of humans and so they have the sorts of limitations on activities and intelligence that humans have. For example, they are not particularly rational in the sense defined by cognitive science. And the brains of the people that make up organizations are limited to the size of skulls, whereas you can have an AI that is the size of a warehouse.
I think that Muehlhauser is slightly mistaken on a few subtle but important points. I'm going to assert my position on them without much argument because I think they are fairly sensible, but if any reader disagrees I will try to defend them in the comments.
- When judging whether an entity has intelligence, we should consider only the skills relevant to the entity's goals.
- So, if an organization is not as good as a human being at composing music, that shouldn't disqualify it from being considered broadly intelligent if composing music has nothing to do with its goals.
- Many organizations are quite good at AI research, or outsource their AI research to other organizations with which they are intertwined.
- The cognitive power of an organization is not limited by the size of skulls. The computational power of many organizations comprises both the skulls of its members and, possibly, "warehouses" of digital computers.
- With the ubiquity of cloud computing, it's hard to say that a particular computational process has a static spatial bound at all.
Mean organizations
* My preferred standard of rationality is communicative rationality, a Habermasian ideal of a rationality aimed at consensus through principled communication. As a consequence, when I believe a position to be rational, I believe that it is possible and desirable to convince other rational agents of it.