cata comments on Should I believe what the SIAI claims? - Less Wrong

23 Post author: XiXiDu 12 August 2010 02:33PM




Comment author: kodos96 13 August 2010 08:47:35AM 3 points

The only part of the chain of logic that I don't fully grok is the "FOOM" part - specifically, the recursive self-improvement. My intuition tells me that an AGI trying to improve itself by rewriting its own code would hit diminishing returns after a point - after all, there would seem to be some theoretical minimum number of instructions needed to implement an ideal Bayesian reasoner. Once the AGI has optimized its code down to that point, what further improvements can it make (in software)? Come up with something better than Bayesianism?

Now in your summary here, you seem to downplay the recursive self-improvement part, implying that it would 'help' but isn't strictly necessary. But my impression from reading Eliezer was that he considers it an integral part of the thesis - as it seems to me it would have to be. Because if the intelligence explosion isn't coming from software self-improvement, where is it coming from? Moore's Law? That isn't fast enough for a "FOOM," even if intelligence scaled linearly with the hardware you threw at it - which my intuition tells me it probably wouldn't.

Now of course this is all just intuition - I haven't done the math, or even put a lot of thought into it. It's just something that doesn't seem obvious to me, and I've never heard a compelling explanation that convinced me my intuition is wrong.

Comment author: cata 13 August 2010 07:00:35PM 2 points

I think the widespread opinion is that the human brain has relatively inefficient hardware -- I don't have a cite for this -- and, most likely, inefficient software as well. It doesn't seem likely that evolution has optimized general intelligence very well in the relatively short timeframe that we have had it at all, and we don't seem to be able to efficiently and consistently channel all of our intelligence into rational thought.

That being the case, if we were to write an AI capable of self-improvement on hardware roughly as powerful as or more powerful than the human brain (which seems likely), it stands to reason that it could potentially be much faster and more effective than the human brain, and self-improvement should move it quickly in that direction.