
Kaj_Sotala comments on Breakdown of existential risks - Less Wrong Discussion

16 Post author: Stuart_Armstrong 23 November 2012 02:12PM



Comment author: Kaj_Sotala 23 November 2012 04:11:25PM 5 points

For example, AI regulation (like most technology regulation) is only effective if you get the whole world on board, and without global coordination there's the potential for arms races.

"Only develop an FAI" also presumes a hard takeoff, and it's not exactly established beyond all doubt that we'll have one.