turchin comments on Superintelligence via whole brain emulation - Less Wrong Discussion

8 Post author: AlexMennen 17 August 2016 04:11AM

Comment author: turchin 17 August 2016 11:25:25AM 0 points [-]

I think that superintelligent AI via uploading is an inherently safe solution (https://en.wikipedia.org/wiki/Inherent_safety). It could still go wrong in many ways, but failure is not its default mode.

Even if it kills all humans, there will be one human who survives.

Even if his values evolve, it will be a natural evolution of human values.

As most human beings don't like to be alone, he would create new friends in the form of human simulations. So even the worst cases are not as bad as a paperclip maximizer.

It is also a feasible plan consisting of many clear steps, one of which is choosing and educating the right person for uploading. He should be educated in ethics, math, rationality, brain biology, etc. I think he is reading LW and this comment))

This idea could be upgraded to be even safer. One way is to upload a group of several people who will be able to keep each other in check and also produce a collective intelligence.

Another idea is to break the super AI into a center and a periphery. In the center we put the uploaded mind of a very rational human, who makes the important decisions and keeps the values, and in the periphery we put many Tool AIs, which do the dirty work.

Comment author: ZankerH 17 August 2016 01:29:39PM 1 point [-]

Even if it kills all humans, there will be one human who survives.

Unless it self-modifies to the point where you're stretching any meaningful definition of "human".

Even if his values evolve, it will be a natural evolution of human values.

Again, only for sufficiently broad definitions of "natural evolution".

As most human beings don't like to be alone, he would create new friends in the form of human simulations. So even the worst cases are not as bad as a paperclip maximizer.

If we're to believe Hanson, the first (and possibly only) wave of human em templates will be the most introverted workaholics we can find.

Comment author: turchin 17 August 2016 05:42:06PM *  0 points [-]

Unless it self-modifies to the point where you're stretching any meaningful definition of "human".

Its evolution could go wrong from our point of view, but the older generation always thinks the younger ones are complete bastards. By "natural evolution" I mean a complex evolution of values based on their previous state and new experience; this is a rather typical situation for any human being, whose values evolve from childhood onward under the influence of experiences, texts, and social circle.

This idea is very different from Hanson's em world. Here we deliberately upload only one human, who is trained to become the core of a future friendly AI. He knows that he is going to make some self-improvements, but he also knows the dangers of unlimited self-improvement. His loved ones are still in the flesh. He is trained to be not a slave, as in Hanson's em world, but a wise ruler.