This post is mostly concerned with a superintelligent AI performing recursive self-improvement; the analysis is meant to help make sense of the takeoff speed of such a process.
Plausibility and Limits
Before considering upper limits, it is worth asking whether general superintelligence is possible at all. It has been suggested that recursive self-improvement resembles the infamous concept of a "perpetual motion machine". We know a perpetual motion machine is impossible because it would violate the laws of thermodynamics. Is there an analogous proof or argument showing that recursive self-improvement is impossible? Some good places to start looking for hard limits on superintelligence are mathematics, computability, and physics. It's also useful to think about...