
Mark_Friedenbach comments on Superintelligence 16: Tool AIs - Less Wrong Discussion

7 Post author: KatjaGrace 30 December 2014 02:00AM




Comment author: [deleted] 10 January 2015 12:23:50PM 1 point

First of all, those two assumptions, that humans are slow and rare compared to artificial intelligence, are dubious. Humans are slow at some things but fast at others. If the AGI's architecture differs substantially from the way humans think, it is very likely that the AGI would be slow at some things humans find easy. And early human-level AGIs are likely to consume vast supercomputing resources; they will not be cheap and plentiful.

But beyond that, the time frame for using tool AI may be very short, perhaps on the order of ten years. There isn't a danger of long-term instability here.