Benito comments on Superintelligence 6: Intelligence explosion kinetics - Less Wrong Discussion

9 Post author: KatjaGrace 21 October 2014 01:00AM

Comment author: Benito 19 August 2015 10:32:42AM 1 point

I don't think the point Bostrom is making hinges on this timeline of updates; the point is simply that if you take an AGI to human level purely through improvements in qualitative intelligence, it will be superintelligent immediately. This point matters regardless of timeline: if you have an AGI that is low on quality intelligence but has these other resources (speed, memory, copyability), it may work to improve its quality intelligence. At the point where its quality is equivalent to a human's, it will already be beyond a human in ability and competence.

Perhaps this is all an intuition pump for appreciating the implications of running a general intelligence on a machine.