John_Maxwell_IV comments on Why AI may not foom - Less Wrong Discussion

Post author: John_Maxwell_IV, 24 March 2013 08:11AM

Comment author: John_Maxwell_IV, 23 March 2013 10:39:01PM, 3 points

I wanted to talk a bit more about what biology may or may not tell us about the ease of AGI.

This OB post discusses the importance of brain hardware differences in intelligence. One of the papers it mentions states:

It remains open whether humans have truly unique cognitive properties. Experts recognize aspects of imitation, theory of mind, grammatical–syntactical language and consciousness in non-human primates and other large-brained mammals. This would mean that the outstanding intelligence of humans results not so much from qualitative differences, but from a combination and improvement of these abilities.

It seems plausible to me that the key software innovations for general intelligence appeared long before the evolution of humans, and that humans mainly put a record-breaking number of densely packed neurons behind them. Speaking extremely speculatively, it might be that the algorithms underlying human cognition gain additional layers of abstraction capability (in some form or another) from additional brain hardware. If an AGI's algorithms share this characteristic, this has interesting implications for what throwing more hardware behind a working AGI could accomplish.