This is an article in the featured articles series from AISafety.info. AISafety.info writes AI safety intro content. We'd appreciate any feedback.

The most up-to-date version of this article is on our website, along with 300+ other articles on AI existential safety.

Corporations can be considered superintelligent only in a limited sense. Nick Bostrom, in Superintelligence, distinguishes between "speed superintelligence", "collective superintelligence", and "quality superintelligence".

Out of these, corporations come closest to collective superintelligence. Bostrom reserves the term "collective superintelligence" for hypothetical systems much more powerful than current human groups, but corporations are still strong examples of collective intelligence. They can perform cognitive tasks far beyond the abilities of any one human, as long as those tasks can be decomposed into many parallel, human-sized pieces. For example, they can design every part of a smartphone, or sell coffee in thousands of places simultaneously.

However, corporations are still very limited. They don't have speed superintelligence: no matter how many humans work together, they'll never program an operating system in one minute, or play great chess in one second per move. Nor do they have quality superintelligence: ten thousand average physicists collaborating to invent general relativity for the first time would probably fail where Einstein succeeded. Einstein was thinking on a qualitatively higher level.

One day, AI systems could be created that think exceptional thoughts, at high speed, in great numbers. Such systems would present major challenges of a kind we've never faced when dealing with corporations.

3 comments

There's also Eliezer's Arbital writeup on corporations vs superintelligences.

Yeah! It's much more in-depth than our article. We were thinking we should rewrite ours to give a quick rundown of EY's and then link to it.

I think it's worth distinguishing between what I'll call "parallel SI" vs "collective SI".

Parallel SI is when you have something more intelligent because it has a lot of the same intelligence running in parallel. Strictly parallel SI would need to rely on random differences in decisions and Schelling points, since communication between threads would not be possible.

Collective SI requires parallel SI, but additionally has organization of the work being done by each intelligence. I think it's unclear how far this concept can be pushed, but I don't see any reason that sufficiently clever organization of human-level intelligence couldn't achieve quality or even speed SI.

The idea is that the interaction of many human-level intelligences can be made to emulate the mind of a greater intelligence. This means the evolved organizational structure found in corporations could potentially be SI in ways you don't get from parallel SI alone.
