soreff comments on Should I believe what the SIAI claims? - Less Wrong

23 Post author: XiXiDu 12 August 2010 02:33PM




Comment author: soreff 15 August 2010 02:24:31AM *  5 points

I think it is at least possible that much-smarter-than-human intelligence might turn out to be impossible. There exist some problem domains where there appear to be a large number of solutions, but where the quality of the solutions saturates quickly as more and more resources are thrown at them. A toy example is how often records are broken in a continuous 1-D domain, with attempts drawn from a fixed probability distribution: the number of records broken goes as the log of the number of attempts. If some of the tasks an AGI must solve are like this, then it might not do much better than humans - not because evolution did a wonderful job of optimizing humans for perfect intelligence, but because that part of the problem domain is a brick wall, and anything must bash into it at nearly the same point.
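The logarithmic record-breaking behaviour is easy to check by simulation; here is a minimal sketch (the function name and parameters are my own, not from the comment):

```python
import random

def count_records(n, seed=0):
    """Draw n i.i.d. samples and count how many set a new record
    (strictly exceed the running maximum so far)."""
    rng = random.Random(seed)
    best = float("-inf")
    records = 0
    for _ in range(n):
        x = rng.random()
        if x > best:
            best, records = x, records + 1
    return records
```

The expected number of records after n attempts is the harmonic number H_n ≈ ln n + 0.577, so even a million attempts break only about 14 records - the saturation soreff describes.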

One (admittedly weak) piece of evidence - a real example of saturation - is an optimizing compiler being used to recompile itself. It is a recursive optimizing system: if there is a knob allowing more effort to be spent on optimization, the speed-up from the first pass can be used to apply a bit more effort to a second pass for the same CPU time. Nonetheless, the result of this specific recursion is not FOOM.
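That recursion can be illustrated with a toy model (purely hypothetical numbers, not measurements of any real compiler): treat each recompilation pass as buying optimization effort proportional to the previous pass's speedup, with diminishing returns capped at some maximum. The self-application then converges to a fixed point instead of diverging:

```python
import math

def speedup(effort, cap=2.0):
    """Hypothetical diminishing-returns curve: effort buys speedup,
    saturating below `cap` (all constants made up for illustration)."""
    return 1.0 + (cap - 1.0) * (1.0 - math.exp(-effort))

def recursive_recompile(base_effort=1.0, passes=20):
    """Each pass recompiles the 'compiler' with whatever extra effort
    the previous pass's speedup affords in the same CPU budget."""
    s = 1.0
    history = []
    for _ in range(passes):
        s = speedup(base_effort * s)
        history.append(s)
    return history
```

The speedups climb for the first few passes and then settle at a fixed point below the cap: when the per-pass gain saturates, feeding the output back in as input does not produce unbounded growth.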

The evidence in the other direction is basically an existence proof from the most intelligent people or groups of people that we know of. Something as intelligent as Einstein must be possible, since Einstein existed. Given an AI Einstein working on improving its own intelligence, it isn't clear whether it could make only a little progress or a great deal.

Comment author: gwern 15 August 2010 08:18:05AM 3 points

but because that part of the problem domain is a brick wall, and anything must bash into it at nearly the same point.

This goes for your compiler example as well, doesn't it? There are still major speed-ups available in compilation technology (the closely connected areas of whole-program compilation, partial evaluation, and supercompilation), but a compiler is still expected to produce semantically equivalent code, and that puts hard information-theoretic bounds on its output.