John_Maxwell_IV comments on Brainstorming additional AI risk reduction ideas - Less Wrong

12 Post author: John_Maxwell_IV 14 June 2012 07:55AM


Comment author: John_Maxwell_IV 15 June 2012 01:37:59AM 3 points [-]

Hire a High-Profile AI Researcher

SI says they've succeeded in convincing a few high-profile AI researchers that AGI research is dangerous. If one of these researchers could be hired as an SI staff member, they could lend their expertise to the development of Friendliness theories and also enhance SI's baseline credibility in the AI research community in general.

A related idea is to try to get these AI researchers to sign a petition making a statement about AI dangers.

Both of these ideas risk attracting unwanted publicity.

Note that both of these ideas are on SI's radar; I mention them here so folks can comment.

Comment author: ChristianKl 18 June 2012 08:02:15PM 1 point [-]

Could you elaborate on how those ideas could lead to unwanted publicity?

Comment author: John_Maxwell_IV 19 June 2012 12:35:07AM *  0 points [-]

Having a high-profile AI researcher join SI, or a number of high-profile AI researchers express concern with AI safety, could make an interesting headline for a wide variety of audiences. It's not clear that encouraging commentary on AI safety from the general public is a good idea.