TrE comments on What if Strong AI is just not possible? - Less Wrong Discussion

7 points · Post author: listic 01 January 2014 05:51PM

Comment author: TrE 01 January 2014 08:02:06PM 5 points

Suppose there exists a square-cube law (or something similar) such that computation becomes less efficient, less precise, or harder to engineer as the size of the computer, or of the data it processes, grows. Then a hard takeoff is either impossible or slow enough that the growth isn't perceived as "explosive". Thus, if and when strong AI is developed, it doesn't go FOOM, and things change slowly enough that humans don't notice anything unusual.
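The intuition above can be sketched numerically. This is a toy model, not anything from the comment: it assumes self-improvement rate is proportional to current capability `C` in the FOOM case, and damped to `C^(2/3)` (a square-cube-style penalty) in the limited case. The first gives exponential growth; the second gives merely polynomial growth, which reads as "slow" on the same timescale.

```python
def grow(rate, c0=1.0, k=0.01, steps=2000, dt=1.0):
    """Euler-integrate dC/dt = k * rate(C) from C(0) = c0.

    Hypothetical toy dynamics for illustration only.
    """
    c = c0
    for _ in range(steps):
        c += k * rate(c) * dt
    return c

# dC/dt = k*C        -> C(t) = C0 * e^(k*t), explosive growth
foom = grow(lambda c: c)

# dC/dt = k*C^(2/3)  -> C(t) = (C0^(1/3) + k*t/3)^3, polynomial growth
damped = grow(lambda c: c ** (2 / 3))

print(f"undamped: {foom:.3g}, square-cube damped: {damped:.3g}")
```

With these (arbitrary) parameters the undamped trajectory ends up many orders of magnitude above the damped one, even though both start from the same point with the same rate constant: the exponent on `C` in the feedback loop, not the constant `k`, is what decides whether growth looks like a FOOM.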