I think that creating superAI via uploading is an inherently safe solution. https://en.wikipedia.org/wiki/Inherent_safety It could still go wrong in many ways, but failure is not its default mode.
Even if it kills all other humans, one human will still survive.
Even if his values evolve, it will be a natural evolution of human values.
As most human beings don't like to be alone, he would create new friends, that is, human simulations. So even the worst cases are not as bad as a paperclip maximiser.
It is also a feasible plan consisting of many clear steps, one of which is choosing and educating the right person for uploading. He should be educated in ethics, math, rationality, brain biology etc. I think he is reading LW and this comment))
This idea could be upgraded to be even safer. One way is to upload a group of several people who would be able to control each other and also form a mutual collective intelligence.
Another idea is to break the superAI into a center and a periphery. In the center we put the uploaded mind of a very rational human, who makes the important decisions and keeps the values, and in the periphery we put many Tool AIs, which do a lot of the dirty work.
If we knew that AI will be created by Google, and that it will happen in the next 5 years, what should we do?
Despair and dedicate your remaining lifespan to maximal hedonism.