Thomas comments on AGI Safety Solutions Map

Post author: turchin 21 July 2015 02:41PM


Comment author: Thomas 21 July 2015 03:27:42PM 3 points

The natural "philosophical landmines" already work on at least some people. They put some or even all of their resources into something they can't possibly achieve. Newton and the problem of the Trinity, for example.

Comment author: turchin 21 July 2015 03:39:24PM 0 points

One may joke that the idea of creating Friendly AI belongs to the same class of landmines (I hope not) :) Perpetuum mobile certainly does.

Comment author: Thomas 21 July 2015 04:35:51PM 4 points

A human never knows whether something is a landmine or just a very difficult task. Neither does the AI.

I have some advice against those landmines, though. Do not spend all of your time on something that has not been solved for a long time. Also, decrease your devotion over time.

I suspect there were brilliant mathematicians in the past who devoted their entire lives to Goldbach's conjecture or something of that kind. That's why we have never heard of them; this "landmine" rendered them obscure. Had they chosen something lighter (or even possible) to solve, they could have been famous.