Vladimir_Nesov comments on AI Risk and Opportunity: A Strategic Analysis - Less Wrong Discussion

8 Post author: lukeprog 04 March 2012 06:06AM

Comment author: Vladimir_Nesov · 19 April 2012 11:28:38AM · 1 point

Together with Wei's point that OAI doesn't seem to help much, there is the downside that the existence of OAI safety guidelines might make it harder to argue against pushing AGI in general. So on net this may well be a bad idea, which argues for weighing the tradeoff more carefully.

Comment author: Stuart_Armstrong · 20 April 2012 08:38:33AM · 1 point

> there is the downside that the existence of OAI safety guidelines might make it harder to argue against pushing AGI in general.

Possibly. But in my experience, even getting the AGI people to admit that there might be safety issues is over 90% of the battle.

Comment author: Vladimir_Nesov · 20 April 2012 10:44:06AM · 0 points

It's useful for AGI researchers to notice that there are safety issues, but not useful for them to conclude that the "safety issues" can be dealt with simply by following OAI guidelines. The latter kind of understanding might be worse than none at all, since it seemingly resolves the problem. So it's not clear to me that getting people to "admit that there might be safety issues" is in itself a worthwhile milestone.