What is this “superintelligence” we are concerned about? In writing articles on FAI topics, I took the easy way out and defined the focus of attention as an AI that can far outdo humans in all areas. But this is just a useful shortcut, not what we are really talking about.
In this essay, I will try to better characterize the topic of interest.
Some possibilities that have been brought up include intelligences
- which are human-like,
- which are conscious,
- which can outperform humans in some or all areas,
- which can self-improve,
- or which meet a semi-formal or formal definition of intelligence or of above-human intelligence.
All of these are important features of possible future AIs which we should be thinking about. But what really counts is whether an AI can outwit us when its goals are pitted against ours.
1. Human-like intelligence. We are humans, and we care about human welfare; humans are also the primary intelligence that cooperates and competes with us, so human intelligence is our primary model. Machines that “think like humans” are an intuitive focus in discussions of AI; Turing took this as the basis for his practical test for intelligence.
Future AIs might have exactly this type of intelligence, particularly if they are emulated brains, what Robin Hanson calls “ems.”
If human-like AI is the only AI that ever arrives, then not much will have changed: We already have seven billion humans, and a few more will simply extend existing economic trends. If, as Hanson describes, the ems need fewer resources than humans, then we can expect extreme economic impact. And if such AI differs from us humans in certain ways, such as the ability to self-improve, then it falls under the other categories, as described below.