
Alsadius comments on AI Risk and Opportunity: A Strategic Analysis - Less Wrong Discussion

8 Post author: lukeprog 04 March 2012 06:06AM


Comments (161)


Comment author: Stuart_Armstrong 23 April 2012 11:11:41AM 0 points

One important fact I haven't been mentioning: OAIs help tremendously with medium-speed takeoffs (fast takeoffs are dangerous for the usual reasons, and slow takeoffs mean that we will have moved beyond OAIs by the time intelligence reaches dangerous levels), because we can then use them to experiment.

There may be, if the right thing to do is instead to push them not to build an AGI at all.

I'm interacting with AGI people at the moment (organising a joint-ish conference), so I will have a clearer idea of how they react to these ideas at a later stage.

Comment author: Vladimir_Nesov 23 April 2012 11:57:12AM 0 points

slow takeoffs mean that we will have moved beyond OAIs by the time intelligence reaches dangerous levels

Moved where/how? A slow takeoff means we have more time, but I don't see how it changes the nature of the problem. A short time to WBE makes a (not particularly plausible) slow takeoff similar to the (moderately likely) failure to develop AGI before WBE.