Machine performance inside a domain (class of problems) can potentially be:
- Optimal (impossible to do better)
- Strongly superhuman (better than all humans by a significant margin)
- Weakly superhuman (better than all the humans most of the time and most of the humans all of the time)
- Par-human (performs about as well as most humans, better in some places and worse in others)
- Subhuman or infrahuman (performs worse than most humans)
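For readers who want a concrete handle on this taxonomy, here is a minimal Python sketch encoding it as a data type (the class, member, and function names are invented here for illustration; the final check simply restates the definition in the next paragraph):

```python
from enum import Enum

class DomainPerformance(Enum):
    """Machine performance levels within a single domain (class of problems)."""
    OPTIMAL = "impossible to do better"
    STRONGLY_SUPERHUMAN = "better than all humans by a significant margin"
    WEAKLY_SUPERHUMAN = "better than all humans most of the time"
    PAR_HUMAN = "about as well as most humans"
    SUBHUMAN = "worse than most humans"

def is_superintelligent(per_domain: dict[str, DomainPerformance]) -> bool:
    """In every cognitive domain, performance is either strongly superhuman
    or (where superhuman play is impossible, as in tic-tac-toe) optimal."""
    return all(level in (DomainPerformance.STRONGLY_SUPERHUMAN, DomainPerformance.OPTIMAL)
               for level in per_domain.values())
```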
A superintelligence is, in every cognitive domain, either 'strongly superhuman' or else at least 'optimal'. (It can't force a win against a human at logical tic-tac-toe, since optimal play by both sides yields a draw, but it plays optimally there. In a real-world game of tic-tac-toe that it strongly wanted to win, it might drug or disassemble the opposing player.) I. J. Good originally used 'ultraintelligence' to denote the same concept: "Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever."
To say that a hypothetical agent or process is "superintelligent" will usually imply that it has all the advanced-agent properties.
Superintelligences are still bounded, at least if the character of physical law is anything like we think it is. They are (presumably) not infinitely smart, infinitely fast, all-knowing, or able to achieve every describable outcome using their available resources and options. However:
- A supernova isn't infinitely hot, but it's still pretty darned hot. "Finite" does not imply "small". You should not try to walk into a supernova using a standard flame-retardant jumpsuit after reasoning, correctly but unhelpfully, that it is only finitely hot.
- A superintelligence doesn't know everything and can't perfectly estimate every quantity. However, to say that something is "superintelligent", or superhuman/optimal in every cognitive domain, should almost always imply that it is epistemically efficient relative to every human and human group (a rough formalization follows this list). Even a superintelligence may not be able to exactly estimate the number of hydrogen atoms in the Sun, but a human shouldn't be able to say, "Oh, it will probably underestimate the number by 10% because hydrogen atoms are pretty light" - the superintelligence knows that too. For us to know better than the superintelligence is at least as implausible as our being able to predict a 20% price increase in Microsoft's stock six months in advance without any private information.
- A superintelligence is not omnipotent and can't obtain every describable outcome. But to call it "superintelligent" should at least imply that it is instrumentally efficient relative to humans: we should not suppose that a superintelligence carries out any policy $\pi_0$ when a human can think of a policy $\pi_1$ that would get more of the agent's utility. To put it another way, the assertion that a superintelligence optimizing for a utility function $U$ would pursue a policy $\pi_0$ is by default refuted if we observe some $\pi_1$ such that, so far as we can see, $\mathbb{E}[U \mid \pi_0] < \mathbb{E}[U \mid \pi_1]$.
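To state the epistemic-efficiency condition a bit more formally (this formalization is an added sketch, not a quote from the text): let $X$ be some quantity (say, the number of hydrogen atoms in the Sun), let $\hat{X}_{\mathrm{SI}}$ be the superintelligence's estimate of it, and let $I_{\mathrm{human}}$ stand for everything a given human or human group knows. Then, conditioning only on what the human knows, the superintelligence's error should have no predictable direction or size:

$$\mathbb{E}\!\left[\, X - \hat{X}_{\mathrm{SI}} \;\middle|\; I_{\mathrm{human}} \,\right] \approx 0$$

Any systematic correction the human could compute from $I_{\mathrm{human}}$ ("hydrogen atoms are light, so it will probably underestimate by 10%") is a correction the superintelligence has, by assumption, already applied.

The instrumental-efficiency refutation rule can likewise be shown as a toy calculation. The following Python sketch uses invented policies and probabilities purely for exposition; nothing here models a real agent:

```python
def expected_utility(outcomes):
    """Expected utility of a policy, given (probability, utility) pairs.
    All numbers below are made up purely for illustration."""
    return sum(p * u for p, u in outcomes)

# pi_0: the policy someone claims a U-maximizing superintelligence would follow.
pi_0 = [(0.30, 1.0), (0.70, 0.0)]   # achieves its goal with probability 0.30
# pi_1: an alternative policy a human critic can describe.
pi_1 = [(0.55, 1.0), (0.45, 0.0)]   # achieves its goal with probability 0.55

# The attribution of pi_0 is refuted by default: a mere human has exhibited
# a policy that scores higher on the agent's own utility function U.
print(expected_utility(pi_1) > expected_utility(pi_0))  # True
```

The only point is the comparison on the last line: exhibiting any humanly visible $\pi_1$ with higher expected $U$ defeats the claim that a $U$-maximizer would choose $\pi_0$.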
If we're talking about a hypothetical superintelligence, we're probably either supposing that an intelligence explosion has happened, or talking about the relatively far future.
For the book, see Nick Bostrom's *Superintelligence*.