timtyler comments on Intelligence explosion in organizations, or why I'm not worried about the singularity - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Really? It seems to me as though software companies do this all the time. Think about Eclipse, for instance: the developers of Eclipse use Eclipse to develop Eclipse. Improvements to it directly help them make further improvements.
So, the recursive self-improvement is a matter of degree? It sounds as though you now agree.
It's like the post here: http://lesswrong.com/lw/w5/cascades_cycles_insight/
It's highly unlikely a company will be able to get >1.
To me, that just sounds like confusion about the relationship between genetic and psychological evolution.
Um, > 1 what? It's easy to make irrefutable predictions when what you say is vague and meaningless.
The point of the article is that if the recursion can feed on itself beyond a certain threshold, then each new insight enables more insights, as with uranium reaching critical mass in a nuclear bomb. "> 1" refers to the average multiplicative improvement a foom-ing AGI gains per insight.
What I was trying to say is that the factor for corporations is much less than 1, which makes them different from an AGI. (To see this effect, try plugging .9^x into a calculator, then 1.1^x.)
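A quick sketch of the calculator exercise above (illustrative only; the factor values 0.9 and 1.1 come from the comment, the interpretation of x as an "insight count" is my framing):

```python
# Each insight multiplies capability by a factor k; after n insights
# capability is k**n. Below 1 the series decays; above 1 it explodes.
for n in (0, 10, 20, 30):
    print(n, round(0.9 ** n, 4), round(1.1 ** n, 4))
# With k = 0.9 the value shrinks toward 0; with k = 1.1 it grows without bound.
```

The asymmetry is the whole point: the long-run behavior flips qualitatively at k = 1, so "a bit less than 1" and "a bit more than 1" are not matters of degree.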