
HungryHobo comments on Inverse relationship between belief in foom and years worked in commercial software - Less Wrong Discussion

Post author: NancyLebovitz 04 January 2015 03:03PM




Comment author: HungryHobo 04 January 2015 10:15:12PM *  12 points

Thing is, with almost everything in software, one of the first things it gets applied to is... software development.

Whenever some neat tool/algorithm comes out to make analysis of code easier it gets integrated into software development tools, into languages and into libraries.

If the complexity of software had stayed static, programmers would have insanely easy jobs by now. Instead the demands grow to the point where the actual percentage of failed software projects stays pretty static, and has done so since software development became a reasonably common job.

Programmers essentially become experts in dealing with hideously complex systems involving layers within layers of abstraction. Every few months we watch news reports about how tool xyz is going to make programmers obsolete by allowing "anyone" to create software. Ten years later we're getting paid to untangle the mess made by "anyone", who did indeed create it... badly, while we were using the professional equivalents of the same tools to build systems orders of magnitude larger and more complex.

If you had a near-human-level AI, odds are that everything which could be programmed into it at the start to help it with software development would already be part of the suites of tools for helping normal human programmers.

Add to that, there's nothing like working with the code of real, existing, modern AI (as opposed to simply using it or watching movies about it) to convince you that we're a long, long way from any AI that's anything but an ultra-idiot savant.

And nothing like working in industry to make you realize that an ultra-idiot savant is utterly acceptable and useful.

Side note: I keep seeing a bizarre assumption (which I can only assume is a Hollywood trope) from a lot of people here that even a merely human-level AI would automatically be awesome at dealing with software just because they're made of software. (like how humans are automatically experts in advanced genetic engineering just because we're made of DNA)

Re: recursive self-improvement, the crux is whether improvements in AI get harder the deeper you go. There aren't really good units for this.

But let's go with IQ. Let's imagine that you start out with an AI like an average human: IQ 100.

If it's trivial to increase intelligence, and it doesn't get harder to improve further as you get higher, then yeah: foom, IQ of 10,000 in no time.

If each IQ point gets exponentially harder to add, then while it may have taken a day to go from 100 to 101, by the time it gets to 200 it's having to spend months scanning its own code for optimizations and experimenting with cut-down versions of itself in order to get to 201.

Given the utterly glacial pace of AI research it doesn't seem like the former is likely.
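The toy model above is easy to sketch in Python. The constants below (one day per IQ point in the easy regime, each point roughly 5% harder than the last in the hard regime) are hypothetical numbers chosen purely for illustration:

```python
# Toy model: how long does self-improvement take under two difficulty regimes?
# The constants (1 day per point, 5% harder per point) are made-up illustrations.

def days_to_next_point(iq, regime):
    """Days of work needed to gain one IQ point at the current level."""
    if regime == "constant":
        return 1.0                        # every point is as easy as the last
    if regime == "exponential":
        return 1.05 ** (iq - 100)         # each point ~5% harder than the previous
    raise ValueError(regime)

def days_to_reach(target_iq, regime, start_iq=100):
    """Total days to climb from start_iq to target_iq, one point at a time."""
    return sum(days_to_next_point(iq, regime) for iq in range(start_iq, target_iq))

print(days_to_reach(200, "constant"))                # 100.0 days: smooth takeoff
print(round(days_to_reach(200, "exponential")))      # ~2610 days for the same climb
print(round(days_to_next_point(199, "exponential"))) # ~125 days for the final point alone
```

Under the constant regime the AI gains a point a day indefinitely; under the exponential regime the same climb from 100 to 200 takes years, with the final single point alone costing months, which is the slow scenario described above.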

Comment author: Emile 05 January 2015 12:52:36PM 4 points

Side note: I keep seeing a bizarre assumption (which I can only assume is a Hollywood trope) from a lot of people here that even a merely human-level AI would automatically be awesome at dealing with software just because they're made of software. (like how humans are automatically experts in advanced genetic engineering just because we're made of DNA)

Not "just because they're made of software" - but because there are many useful things that a computer is already better at than a human (notably, a vastly greater "working memory"), so a human-level AI can be expected to have those on top of whatever humans can do now. And imagine a programmer who could easily do things like checking all lines of code to see whether they can still be used, systematically tracing everywhere a function could be called from, or "annotating" each variable, function, or class with why it exists: all things that a human programmer could do, but that either require a lot of working memory or are mind-numbingly boring.

Comment author: Brian_Tomasik 07 January 2015 09:06:17PM 0 points

Good points. However, keep in mind that humans can also use software to do boring jobs that require less-than-human intelligence. By the time we're near human-level AI, there may well be narrow-AI programs that help with the tasks you describe.

Comment author: HungryHobo 06 January 2015 06:45:21PM *  0 points

It depends how your AI is implemented; perhaps it will turn out that the first human-level AIs are simply massive ANNs of some kind. Such an AI might have human-equivalent working memory and have to do the equivalent of making notes outside of its own mind, just as we do.

Given how very, very far we are from that level of AI, we might well see actual brain enhancements like this for humans first, which could leave us on a much more even footing with the AIs:

http://www.popsci.com/technology/article/2011-06/artificial-memory-chip-rats-can-remember-and-forget-touch-button

The device can mimic the brain's own neural signals, thereby serving as a surrogate for a piece of the brain associated with forming memories. If there is sufficient neural activity to trace, the device can restore memories after they have been lost. If it's used with a normal, functioning hippocampus, the device can even enhance memory.

Comment author: Metus 05 January 2015 01:59:23AM 1 point

Another way to ask the question: assuming that IQ is the relevant measure, is there a sublinear, linear, or superlinear relationship between IQ and productivity? Same question for the cost of raising IQ by one point: does it increase, decrease, or stay constant with IQ? Foom occurs for suitable combinations in this extremely simple model.
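This simple model can be sketched as a tiny simulation: treat the rate of IQ growth as productivity at the current IQ divided by the cost of the next point, and see which combinations run away. The functional forms below (quadratic productivity, exponential cost, and so on) are hypothetical choices for illustration, not anything from the comment:

```python
# Sketch of the model: dIQ/dt = productivity(IQ) / cost_per_point(IQ).
# Whether "foom" occurs depends on which of the two grows faster with IQ.

def simulate(productivity, cost_per_point, steps=1000, dt=0.01, iq=100.0, cap=1e6):
    """Euler-integrate IQ growth; report infinity if growth runs away past cap."""
    for _ in range(steps):
        iq += dt * productivity(iq) / cost_per_point(iq)
        if iq > cap:
            return float("inf")  # runaway growth: foom
    return iq

# Superlinear productivity with constant cost: runaway growth.
foom = simulate(lambda iq: iq * iq, lambda iq: 100.0)

# Linear productivity with exponentially rising cost: growth stalls.
fizzle = simulate(lambda iq: iq, lambda iq: 1.05 ** (iq - 100))

print(foom, fizzle)  # foom diverges; fizzle levels off far below the runaway case
```

The interesting boundary cases sit between these two: if productivity and cost grow at comparable rates, growth is steady rather than explosive, which is exactly the distinction the question is probing.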