Giles comments on Criticisms of intelligence explosion - Less Wrong

15 Post author: lukeprog 22 November 2011 05:42PM


Comment author: jacob_cannell 11 December 2011 12:11:44AM 1 point

See also JoshuaZ's insightful comment here on how some of the concrete problems involved in intelligence amplification are linked to some (very likely) computationally intractable problems from CS.

Those insights are relevant and interesting for the type of self-improvement feedback loop that assumes unlimited improvement potential in algorithmic efficiency. However, there's a much more basic intelligence explosion that is purely hardware-driven.

Brain architecture certainly limits maximum practical intelligence, but does not determine it. Just as the relative effectiveness of current chess AI systems is limited by hardware but determined by software, human intelligence is limited by the brain but determined by acquired knowledge.

The hardware is qualitatively important only up to the point where you have something that is Turing-complete. Beyond that, the differences become quantitative: memory constrains program size, and performance limits execution speed.

Even so, having AGIs that are 'just' at human-level IQ can still quickly lead to an intelligence explosion: speed them up by a factor of a million and then create trillions of them. IQ is a red herring anyway; it's a baseless anthropocentric measure that doesn't scale to the performance domains of super-intelligences. If you want a hard quantitative measure, simply use standard computational measures: e.g., a human brain is roughly a circuit of fewer than 10^15 elements performing at most 10^18 circuit ops per second.
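The arithmetic behind this can be made explicit. The sketch below multiplies out the comment's own rough figures; the brain estimate is the commenter's upper bound and the speedup and population factors are the hypothetical numbers given above, not measured quantities.

```python
# Illustrative arithmetic using the rough figures from the comment above.
# All three numbers are upper-bound or hypothetical assumptions, not data.

brain_ops_per_sec = 1e18   # commenter's upper bound on human-brain circuit ops/sec
speedup = 1e6              # hypothetical hardware speedup per instance
population = 1e12          # "trillions" of AGI instances

aggregate_ops = brain_ops_per_sec * speedup * population
print(f"aggregate: {aggregate_ops:.0e} circuit ops/sec")  # aggregate: 1e+36 circuit ops/sec
```

Even if the brain estimate is off by several orders of magnitude, the product is dominated by the speedup and replication factors, which is the point of the hardware-driven scenario.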