turchin comments on AI safety: three human problems and one AI issue - Less Wrong Discussion

9 Post author: Stuart_Armstrong 19 May 2017 10:48AM

Comments (7)

Comment author: turchin 19 May 2017 06:45:25PM 1 point

If we create AI around a human upload, or a model of the human mind, it solves some of these problems:

1) It will, by definition, have the same values and the same value structure as a human being; in short, human uploading solves value loading.

2) It will also not be an agent.

3) We could predict the upload's behaviour based on our experience with predicting human behaviour.

And it will not be very powerful or very capable of strong self-improvement, because of its messy internal structure.

However, it could still be above human level because of hardware acceleration and some tweaking. Using it, we could construct a primitive AI Police or AI Nanny, which would prevent the creation of any other types of AI.

Comment author: Stuart_Armstrong 20 May 2017 08:08:56AM 1 point

Convergent instrumental goals would make agent-like things become agents if they can self-modify (humans can't do this to any strong extent).

Comment author: turchin 20 May 2017 08:30:19AM 0 points

If we model a specific human (for example, a morally sane and rationally educated person with an excellent understanding of everything said above), he could choose the right level of self-improvement, as he would understand the dangers of becoming an agent too strongly oriented towards instrumental goals. I don't know any such person in real life, btw.