JoshuaZ comments on Criticisms of intelligence explosion - Less Wrong

15 Post author: lukeprog 22 November 2011 05:42PM




Comment author: JoshuaZ 26 November 2011 08:22:34PM 0 points

> Why oh why do you still believe this? In my mind, this is strongly analogous to pointing out that there are physical limits on how intelligent an AI can get, which is true, but for all practical purposes irrelevant, since these limits are way above what humans can do, given our state of knowledge.

This is not a good analogy at all. What matters here is the probable scale of the difference. In this context we are extremely far from physical limitations mattering, as one can see, for example, from the fact that Koomey's law could continue for about forty years before hitting physical limits. (It will likely break down before then, but that's not the point.) In contrast, our understanding of the limits imposed by computational complexity is in some respects weaker but in other respects stricter: conjectured limits, such as strong versions of the exponential time hypothesis, place much more severe restrictions on what can occur.
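A rough back-of-the-envelope sketch of both halves of this contrast. The Landauer bound (kT ln 2 per bit erased) and Koomey's doubling time (~1.6 years) are standard figures; the 2011-era energy per operation and the exponent constant c in the 2^(c·n) cost for 3-SAT are order-of-magnitude assumptions chosen for illustration, not values from the comment.

```python
import math

# Landauer limit at room temperature (~300 K): minimum energy to erase one bit.
k_B = 1.380649e-23                            # Boltzmann constant, J/K
T = 300.0                                     # room temperature, K
landauer_j_per_bit = k_B * T * math.log(2)    # roughly 3e-21 J

# Assumed circa-2011 energy per bit-scale operation (order-of-magnitude guess).
current_j_per_op = 3e-14

# How many efficiency doublings remain, and how long Koomey's law could run.
doublings_left = math.log2(current_j_per_op / landauer_j_per_bit)
years_left = doublings_left * 1.57            # Koomey doubling time, ~1.57 yr

# Under a strong exponential time hypothesis, 3-SAT costs ~2**(c*n) steps.
# A hardware speedup of 2**d then buys only an additive gain of d/c in the
# largest feasible n: exponential costs swallow multiplicative speedups.
c = 0.3                                       # hypothetical exponent constant
extra_n = doublings_left / c

print(f"Doublings to Landauer limit: {doublings_left:.1f}")
print(f"Years at Koomey's rate:      {years_left:.0f}")
print(f"Extra feasible 3-SAT vars:   {extra_n:.0f}")
```

The point of the last calculation is the asymmetry: decades of exponential hardware gains move the physical ceiling by a factor of millions, yet against a conjectured exponential-time problem they enlarge the solvable instance size only by an additive constant.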

It is important to note that these limits are relevant primarily in the context of a software-only, or primarily software-only, recursive self-improvement. For essentially the reasons you outline (the large amount of apparent room for physical improvement), they are unlikely to matter much for an AGI with a substantial ability to discover or construct new physical systems. (They do imply some limits even in that case, but those limits are likely to be comparatively weak.)