http://reducing-suffering.org/predictions-agi-takeoff-speed-vs-years-worked-commercial-software/

23 comments

This has a very low n=16, and so presumably some strong selection biases. (Surely these are not the only people who have published thought-out opinions on the likelihood of fooming.) Without an analysis of the reasons these people give for their views, I don't think this correlation is very interesting.

Thanks for the comment. There is some "multiple hypothesis testing" effect at play in the sense that I constructed the graph because of a hunch that I'd see a correlation of this type, based on a few salient examples that I knew about. I wouldn't have made a graph of some other comparison where I didn't expect much insight.

However, when it came to adding people, I did so purely based on whether I could clearly identify their views on the hard/soft question and years worked in industry. I'm happy to add anyone else to the graph if I can figure out the requisite data points. For instance, I wanted to add Vinge but couldn't clearly tell what x-axis value to use for him. For Kurzweil, I didn't really know what y-axis value to use.

[anonymous] · 9y · 10

Agree re the low N; a wider survey would be much more informative. But if the proposition was "belief in a cortex-wide neural code" and the x axis was "years working in neuroscience", would the correlation still be uninteresting?

Obviously I'm suggesting that some of the dismissal of the correlation might be due to bias (in the LW community generally, not you specifically) in favor of a hard foom. To me, if belief in a proposition in field X varies inversely with experience in field X... well all else equal that's grounds to give the proposition a bit more scrutiny.

But if the proposition was "belief in a cortex-wide neural code" and the x axis was "years working in neuroscience", would the correlation still be uninteresting?

If there are thousands of people working in neuroscience and you present a poll of 16 of them which shows correlation between the results and how long they've been working in the field, and you leave out how you selected them and why they each think what they do and how they might respond to one another, then I wouldn't assign much credence to the results.

Obviously I'm suggesting that some of the dismissal of the correlation might be due to bias

Or the correlation might be due to the pollster's bias in choosing respondents. Or (most likely) it might be accidental due to underpowered analysis.

To me, if belief in a proposition in field X varies inversely with experience in field X... well all else equal that's grounds to give the proposition a bit more scrutiny.

To be clear, I'm saying that this study is far too underpowered, underspecified and under-explained to cause me to believe that "belief in a proposition in field X varies inversely with experience in field X". If I believed that, I would come to the same conclusion as you do.

[anonymous] · 9y · 40

Fair. The closest thing I've seen to that is this: http://www.sophia.de/pdf/2014_PT-AI_polls.pdf (just looking at the Top100 category and ignoring the others). And as I was writing this I remembered that I shouldn't be putting much credence in expert opinion in this field anyway (https://intelligence.org/files/PredictingAI.pdf), so yes, you're right, this correlation doesn't say much.

I don't know if you intend this, but when I read it, I sense that the implication is that a takeoff will probably be soft, given that the people with the most experience think so.

However, this could be an effect of bias: the people who have spent the most time working on software projects see how hard it is (for humans), and so predict that AI improvement will be very hard (recalcitrance high). For the people who have worked in the industry, the hardness of the problem is very available, but the intelligence and optimization power of the AI is not, since no one has seen a strong AI yet. So they extrapolate from what they know, even though this misses the point of recursive self-improvement.

Of course, this is saying that one group has a clear grasp of one of the two relevant variables (recalcitrance and optimization power) while the other group has a clear grasp of neither variable...and it's the first group that's biased.

Thing is, with almost everything in software, one of the first things it gets applied to is... software development.

Whenever some neat tool/algorithm comes out to make analysis of code easier, it gets integrated into software development tools, into languages and into libraries.

If the complexity of software stayed static, programmers would have insanely easy jobs now, but the demands grow to the point where the actual percentage of failed software projects stays pretty static, and has done since software development became a reasonably common job.

Programmers essentially become experts in dealing with hideously complex systems involving layers within layers of abstraction. Every few months we watch news reports about how xyz tool is going to make programmers obsolete by allowing "anyone" to create xyz, and 10 years later we're getting paid to untangle the mess made by "anyone" who did indeed make xyz... badly, while we were using professional equivalents of the same tools to build systems orders of magnitude larger and more complex.

If you had a near-human-level AI, odds are that everything which could be programmed into it at the start to help it with software development would already be part of the suites of tools for helping normal human programmers.

Add to that, there's nothing like working with the code for (as opposed to simply using or watching movies about) real existing modern AI to convince you that we're a long long way from any AI that's anything but an ultra-idiot savant.

And nothing like working in industry to make you realize that an ultra-idiot savant is utterly acceptable and useful.

Side note: I keep seeing a bizarre assumption (which I can only assume is a Hollywood trope) from a lot of people here that even a merely human-level AI would automatically be awesome at dealing with software just because they're made of software. (like how humans are automatically experts in advanced genetic engineering just because we're made of DNA)

Re: recursive self-improvement, the crux is whether improvements in AI get harder the deeper you go. There aren't really good units for this.

But let's go with IQ. Let's imagine that you start out with an AI like an average human: IQ 100.

If it's trivial to increase intelligence, and it doesn't get harder to improve further as you get higher, then yes: foom, IQ of 10,000 in no time.

If each IQ point gets exponentially harder to add, then while it may have taken a day to go from 100 to 101, by the time it gets to 200 it's having to spend months scanning its own code for optimizations and experimenting with cut-down versions of itself in order to get to 201.

Given the utterly glacial pace of AI research, it doesn't seem like the former is likely.
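As a toy illustration of that second regime (the numbers here are my own, purely illustrative, not anything the comment above specifies): suppose the first IQ point takes a day and each further point takes 10% longer than the one before. The cumulative time then grows geometrically:

```python
# Toy numbers of my own choosing: the first IQ point above 100 takes 1 day,
# and each subsequent point takes 10% longer than the last.
def days_for_point(n):
    """Days needed for the n-th IQ point above 100 (0-indexed)."""
    return 1.10 ** n

def total_days(points):
    """Total days to gain `points` IQ points above 100."""
    return sum(days_for_point(n) for n in range(points))

print(round(total_days(10)))   # IQ 100 -> 110: about 16 days
print(round(total_days(100)))  # IQ 100 -> 200: about 137,796 days (~377 years)
```

Under constant per-point difficulty the same 100 points would take 100 days; the exponential version is what turns "a day per point" into centuries.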

Side note: I keep seeing a bizarre assumption (which I can only assume is a Hollywood trope) from a lot of people here that even a merely human-level AI would automatically be awesome at dealing with software just because they're made of software. (like how humans are automatically experts in advanced genetic engineering just because we're made of DNA)

Not "just because they're made of software" - but because there are many useful things that a computer is already better than a human at (notably, vastly greater "working memory"), so a human-level AI can be expected to have those and whatever humans can do now. And a programmer who could easily do things like "check all lines of code to see if they seem like they can be used", or systematically checking from where a function could be called, or "annotating" each variable, function or class by why it exists ... all things that a human programmer could do, but that either require a lot of working memory, or are mind-numblingly boring.

Good points. However, keep in mind that humans can also use software to do boring jobs that require less-than-human intelligence. If we were near human-level AI, there might by then be narrow-AI programs that help with the items you describe.

It depends on how your AI is implemented; perhaps it will turn out that the first human-level AIs are simply massive ANNs of some kind. Such an AI might have human-equivalent working memory and have to do the equivalent of making notes outside of its own mind, just as we do.

Given how very, very far we are from that level of AI, we might very well first see actual brain enhancements similar to this but for humans, which could leave us on a much more even footing with the AIs:

http://www.popsci.com/technology/article/2011-06/artificial-memory-chip-rats-can-remember-and-forget-touch-button

The device can mimic the brain's own neural signals, thereby serving as a surrogate for a piece of the brain associated with forming memories. If there is sufficient neural activity to trace, the device can restore memories after they have been lost. If it's used with a normal, functioning hippocampus, the device can even enhance memory.

Another way to ask the question: assuming that IQ is the relevant measure, is there a sublinear, linear, or superlinear relationship between IQ and productivity? Same question for the cost of raising IQ by one point: does it increase, decrease, or stay constant with IQ? Foom occurs for suitable combinations in this extremely simple model.
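A minimal sketch of that toy model, with illustrative functional forms of my own choosing (nothing here is specified in the comment above): on each improvement step the AI gains IQ at a rate of productivity(IQ) divided by the cost of the next point.

```python
# Minimal sketch of the toy model: per step, IQ grows by productivity(iq) / cost(iq).
# Both functional forms below are illustrative assumptions, not claims about real AI.

def steps_to_reach(productivity, cost, target=1000.0, iq=100.0, max_steps=10_000):
    """Count self-improvement steps until `target` IQ is reached, or None if it stalls."""
    for step in range(1, max_steps + 1):
        iq += productivity(iq) / cost(iq)
        if iq >= target:
            return step
    return None

# Superlinear productivity, constant cost per IQ point: runaway growth ("foom").
foom = steps_to_reach(productivity=lambda iq: (iq / 100) ** 2,
                      cost=lambda iq: 1.0)

# Linear productivity, cost growing exponentially with IQ: progress fizzles out.
fizzle = steps_to_reach(productivity=lambda iq: iq / 100,
                        cost=lambda iq: 1.05 ** (iq - 100))

print(foom)    # on the order of a hundred steps to reach IQ 1000
print(fizzle)  # None: never gets there within the step budget
```

Which regime obtains is exactly the empirical question: with superlinear productivity and flat costs the trajectory runs away, while with costs that outgrow productivity it stalls well short of anything dramatic.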

Long years in software do rigorously teach that complex things take a long time. Longer than you expect, even after accounting for this rule. Or don't work at all. This unproven rule of thumb need not apply to self-optimizing AI, but could at least be seen as an effect explaining, or at least affecting, the judgements of experts.

Added. I realize that this is basically a TLDR of Copla's comment.

How different would this be with age as the x axis?

Good question. :) I don't want to look up exact ages for everyone, but I would guess that this graph would look more like a teepee, since Yudkowsky, Musk, Bostrom, etc. would be shifted to the right somewhat but are still younger than the long-time software veterans.

The subset whose birth years you can get off the first page of a Google search of their name (n=9) has a pretty clear correlation, with younger people believing in harder takeoff. (I'll update if I get time to dig out the others' birth years.)

Cool. Another interesting question would be how the views of a single person change over time. This would help tease out whether it's a generational trend or a generic trend with getting older.

In my own case, I only switched to finding a soft takeoff pretty likely within the last year. The change happened as I read more sources outside LessWrong that made some compelling points. (Note that I still agree that work on AI risks may have somewhat more impact in hard-takeoff scenarios, so that hard takeoffs deserve more than their probability's fraction of attention.)

Birth Year vs Foom:

A bit less striking than for the subset famous enough to have Google pop up their birth year (shown in green).

This is awesome! Thank you. :) I'd be glad to copy it into my piece if I have your permission. For now I've just linked to it.

Consider it to be public domain.

If you pull the image from its current location and message me when you add more folks, I might even update it. Or I can send you my data if you want to go for more consistency.

[anonymous] · 9y · 30

Something feels very, very wrong that Elon Musk is on the left-hand side of the chart, and Ben Goertzel on the right. I'd reckon that Elon Musk is a more reliable source about the timelines of engineering projects in general (with no offense meant to Goertzel). Maybe this axis isn't measuring the right thing?

This is a good point, and I added it to the penultimate paragraph of the "Caveats" section of the piece.

[anonymous] · 9y · 10

That wasn't really the point I was getting at (commercial vs academic). The point was more that there is a skill having to do with planning and execution of plans which people like Elon Musk demonstrably have, which makes their predictions carry significant weight. Elon Musk has been very, very successful in many different industries (certificate authorities, payment services, solar powered homes, electric cars, space transportation) by making controversial / not obvious decisions about the developmental trajectory of new technology, and being proven right in pretty much every case. Goertzel has also founded AI companies (Webmind, Novamente) based on his own predicted trajectories, and ran these businesses into the ground[1]. But Goertzel, having worked with computer tech this whole time, is ranked higher than Musk in terms of experience on your chart. That seems odd, to say the least.

[1] http://www.goertzel.org/benzine/WakingUpFromTheEconomyOfDreams.htm

(Again, I don't want this to sound like a slight against Goertzel. He's one of the AGI researchers I respect the most, even if his market timing and predicted timelines have been off. For example, Webmind and Google started around the same time, and Webmind's portfolio of commercial products was basically the same -- search, classification -- and its general R&D interests were basically aligned with those of post-2006 Google. Google of today is what Webmind was trying to be in 1999 - 2001. If you took someone from mid-2000 and showed them a description of today's Google with names redacted, they'd be excused for thinking it was Webmind, not Google. Execution and near-term focus matter. :\ )

[anonymous] · 9y · 00

Do you have any examples of people adjusting their beliefs over time? When dealing with small populations I think that within-subject comparisons are really the only way to go.