John_Maxwell_IV comments on Brainstorming additional AI risk reduction ideas - Less Wrong

Post author: John_Maxwell_IV 14 June 2012 07:55AM




Comment author: John_Maxwell_IV 14 June 2012 08:06:24AM 6 points

Publish AI Research Guidelines

The Singularity Institute has argued that the pace of Friendliness research should outpace that of general-purpose AI research if we want a positive singularity. But not all general-purpose AI research is created equal. Some might be relevant to Friendliness. Some might be useful in architecting an FAI. And some might only be useful for architecting a potential UFAI.

It seems possible that persuading AI researchers to change the course of their research is much easier than persuading them to quit altogether. By publishing a set of research recommendations, SI could potentially shape whatever AI research is done towards maximally Friendly ends.

Costs: Someone would have to understand all major AI research fronts in enough depth to evaluate their relevance to Friendliness, and have a good idea of the problems that need to be solved for Friendliness and of what form a Friendly architecture might take. Psychological research on persuasion might also be useful. For example, is it better to explicitly identify lines of research that seem especially dangerous, or to simply leave them out of the document altogether?

Comment author: lukeprog 17 June 2012 07:18:47PM 2 points

I believe FHI is working on this, and may have it ready once Nick's book is done. I think Nick told me he plans to involve SIAI in the creation of these guidelines.