What do you think of CEV as a proposal?
I've read the book several times already, and it makes me more and more pessimistic. Even if we build an SI that follows CEV, at some point it might decide to drop it. It's an SI, above all; it can find a way to do anything. Yet we can't survive without SI. So the CEV proposal is as good and as bad as any other proposal. My only hope is that moral values could be as fundamental as the laws of nature. Then a very superintelligent AI would be very moral, and we'd be saved. If not, it could create a Hell for all people and keep them there for eternity (meaning that even death could be a better way out, yet the SI will not let people die). What should we do?
Was the analogy to horses good?
It's really good. People are a superintelligence relative to horses, and horses lost 95% of their jobs. With SI relative to people, people will lose no smaller a percentage of jobs. We have to treat this as something that is provably coming. It will be a painful but necessary change. So many people spend their lives on very simple jobs (cleaning, selling, etc.).
Do you think a multipolar outcome is more or less likely than a singleton scenario?
Unless somebody specifically pushes for a multipolar scenario, it's unlikely to arise spontaneously. Given our military-oriented psychology, any SI will first be considered for military purposes, including preventing others from achieving SI. However, a smart group of people or organizations might purposefully multiply instances of near-ready SI in order to create competition, which could increase our chances of survival. Creating a social structure of SIs might make them socially aware and tolerant, and that tolerance might extend to people.
Maybe people shouldn't make superintelligence at all? Narrow AIs are just fine, if you consider the progress so far. Self-driving cars will come, then applications using Big Data will find cures for most illnesses, then solve starvation and other problems by 3D-printing food and everything else, including rockets to deflect asteroids. Just give it 10-20 more years. Why create a dangerous SI?
Yes. "Make 10 paperclips and then do nothing, without killing people or otherwise disturbing or destroying the world, or in any way preventing it from going on as usual."
There is simply no way to give this a perverse instantiation; any perverse instantiation would prevent the world from going on as usual. If the AI cannot correctly understand "without killing... disturbing or destroying... preventing it from going on as usual," then there is no reason to think it can correctly understand "make 10 paperclips."
I realize that in reality an AI's original goals are not specified in English. But if you know how to specify "make 10 paperclips", whether in English or not, you should know how to specify the rest of this.
Before it gets to "then do nothing," the AI might exhaust all the matter in the Universe trying to prove that it made exactly 10 paperclips.
Why?
Because we are not SI, we don't know what it will do or why. It might.