quartz comments on Reframing the Problem of AI Progress - Less Wrong

Post author: Wei_Dai 12 April 2012 07:31PM


Comment author: quartz 13 April 2012 10:13:43PM 1 point

These are interesting suggestions, but they don't quite address the problem I was getting at: leaving a line of retreat for the typical AI researcher who comes to believe that his work likely contributes to harm.

My anecdotal impression is that the number of younger researchers who take arguments for AI risk seriously has grown substantially in recent years, but, apart from spreading the arguments and the option of a career change, it is not clear how this knowledge should affect their actions.

If the risk of indifferent AI is to be averted, I expect that a gradual shift in what is considered important work is necessary in the minds of the AI community. The most viable path I see towards such a shift involves giving individual researchers an option to express their change in beliefs in their work - in a way that makes use of their existing skillset and doesn't kill their careers.

Comment author: Wei_Dai 13 April 2012 10:35:49PM 1 point

Ok, I had completely missed what you were getting at, and instead interpreted your comment as saying that there's not much point in coming up with better arguments, since we can't expect AI researchers to change their behavior anyway.

The most viable path I see towards such a shift involves giving individual researchers an option to express their change in beliefs in their work - in a way that makes use of their existing skillset and doesn't kill their careers.

This seems like a hard problem, but certainly worth thinking about.