lukeprog comments on How can I reduce existential risk from AI? - Less Wrong

Post author: lukeprog 13 November 2012 09:56PM


Comment author: Alex_Altair 12 November 2012 06:54:16AM 14 points

> I expect significant strategic insights to come from the technical work (e.g. FAI math).

Interesting point. I'm worried that, while FAI math will help us understand what is dangerous or outsourceable along our particular path, many other paths to AGI are possible, and FAI math won't tell us which of those other paths are dangerous or likely.

I feel like one clear winning strategy is safety promotion. It seems that almost no harm can come from promoting safety ideas among AI researchers and investors. It also seems relatively easy, in that it requires only ordinary human skills of networking, persuasion, et cetera.

Comment author: lukeprog 12 November 2012 11:52:37AM 4 points

Somehow I managed not to list AI safety promotion in the original draft! Added now.