paulfchristiano comments on Superintelligence Reading Group - Section 1: Past Developments and Present Capabilities - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Bostrom's wonderful book lays out many important issues and frames a lot of research questions which it is up to all of us to answer.
Thanks to Katja for her introduction and all of these good links.
One issue that I would like to highlight: the mixture of skills and abilities that a person has is not the same as the set of capabilities that could produce the dangers Bostrom will discuss later, or the other dangers and benefits he does not discuss.
For this reason, in the next phase of this work, we have to understand which specific future technologies could lead to which specific outcomes.
Systems which are quite deficient in some ways, relative to people, may still be extremely dangerous.
Meanwhile, the intelligence of a single person, even a single genius, taken in isolation and allowed to acquire only limited resources, is not actually all that dangerous. People become dangerous when they form groups, access the existing corpus of human knowledge, coordinate with each other to deploy resources, and find ways to augment their abilities.
"Human-level intelligence" is only a first-order approximation to the set of skills and abilities which should concern us.
If we want to prevent disaster, we have to be able to distinguish dangerous systems. Unfortunately, checking whether a machine can do everything a person can do is not the right test.
While I broadly agree with this sentiment, I would like to disagree with this point.
I would consider even the creation of a single very smart human, with all human resourcefulness but completely alien values, to be a significant net loss to the world. If they represent 0.001% of the world's aggregate productive capacity, I would expect this to make the world something like 0.001% worse (according to humane values) and 0.001% better (according to their alien values).
The situation is not quite so dire, if nothing else because of gains from trade (if our values aren't in perfect tension) and the ability of the majority to stamp out the values of a minority if it is so inclined. But it's in the right ballpark.
So while I would agree that broadly human capabilities are not a necessary condition for concern, I do consider them a sufficient condition for concern.