Thing is, with almost everything in software, one of the first things it gets applied to is... software development.
Whenever some neat tool/algorithm comes out to make analysis of code easier it gets integrated into software development tools, into languages and into libraries.
If the complexity of software had stayed static, programmers would have insanely easy jobs by now. Instead, the demands grow to match, to the point where the percentage of failed software projects has stayed pretty constant ever since software development became a reasonably common job.
Programmers essentially become experts in dealing with hideously complex systems involving layers within layers of abstraction. Every few months we watch news reports about how xyz tool is going to make programmers obsolete by allowing "anyone" to create xyz; ten years later we're getting paid to untangle the mess made by "anyone" who did indeed make xyz... badly, while we were using professional equivalents of the same tools to build systems orders of magnitude larger and more complex.
If you had a near human-level AI, odds are that everything which could be programmed into it at the start to help it with software development would already be part of the suites of tools that help normal human programmers.
Add to that, there's nothing like working with the code of real, existing, modern AI (as opposed to simply using it, or watching movies about it) to convince you that we're a long, long way from any AI that's anything but an ultra-idiot savant.
And nothing like working in industry to make you realize that an ultra-idiot savant is utterly acceptable and useful.
Side note: I keep seeing a bizarre assumption (which I can only assume is a Hollywood trope) from a lot of people here that even a merely human-level AI would automatically be awesome at dealing with software just because it's made of software (like how humans are automatically experts in advanced genetic engineering just because we're made of DNA).
Re: recursive self-improvement, the crux is whether improvements in AI get harder the deeper you go. There aren't really good units for this, but let's go with IQ. Imagine you start out with an AI as smart as an average human: IQ 100.
If it's trivial to increase intelligence, and it doesn't get harder to improve further as you get higher, then yes, foom: IQ 10,000 in no time.
If each IQ point gets exponentially harder to add, then while it may have taken a day to go from 100 to 101, by the time it gets to 200 it's having to spend months scanning its own code for optimizations and experimenting with cut-down versions of itself just to get to 201.
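To make the two scenarios concrete, here's a toy sketch. The cost model is entirely hypothetical (not from any real AI research): it just assumes each IQ point takes `base_days * growth**(points already gained)` days, so `growth=1.0` is the "never gets harder" case and anything above 1.0 is the "exponentially harder" case.

```python
def days_to_reach(target_iq, start_iq=100, base_days=1.0, growth=1.0):
    """Total days to climb from start_iq to target_iq, one point at a time,
    under a toy model where each point costs growth times the previous one."""
    return sum(base_days * growth ** (iq - start_iq)
               for iq in range(start_iq, target_iq))

# Constant difficulty: 100 -> 200 takes 100 days. Foom.
flat = days_to_reach(200, growth=1.0)

# Each point a mere 5% harder than the last: the same climb takes
# roughly 2,600 days, i.e. about seven years, and it keeps slowing down.
steep = days_to_reach(200, growth=1.05)

print(f"constant difficulty: {flat:.0f} days")
print(f"5% harder per point: {steep:.0f} days")
```

Even a modest per-point cost increase compounds into a glacial climb, which is the whole crux of the fast-vs-slow takeoff disagreement.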
Given the utterly glacial pace of AI research it doesn't seem like the former is likely.
Not "just because they're made of software" - but because there are many useful things that a computer is already better than a human at (notably, vastly greater "working memory"), so a...
http://reducing-suffering.org/predictions-agi-takeoff-speed-vs-years-worked-commercial-software/