timtyler comments on The Importance of Self-Doubt - Less Wrong

23 Post author: multifoliaterose 19 August 2010 10:47PM


Comment author: timtyler 10 September 2010 09:15:10PM *  1 point [-]

From having talked with computer scientists and AI researchers, I have a very strong impression that the consensus is that AGI is way out of reach at present. See for example points #1 and #5 of Scott Aaronson's The Singularity is Far.

I don't think there's any such consensus. Most of those involved know that they don't know with very much confidence. For a range of estimates, see the bottom of:

http://alife.co.uk/essays/how_long_before_superintelligence/

Comment author: multifoliaterose 10 September 2010 10:24:33PM 2 points [-]

For what it's worth, in saying "way out of reach" I didn't mean "chronologically far away," I meant "far beyond the capacity of all present researchers." I think it's quite possible that AGI is just 50 years away.

I think that in the absence of plausibly relevant and concrete directions for AGI/FAI research, the chance of having any impact on the creation of an FAI through research is diminished by many orders of magnitude.

If there are plausibly relevant and concrete directions for AGI/FAI research then the situation is different, but I haven't heard examples that I find compelling.

Comment author: timtyler 11 September 2010 09:42:26AM *  1 point [-]

"Just 50 years?" Shane Legg's explanation of why his mode is at 2025:

http://www.vetta.org/2009/12/tick-tock-tick-tock-bing/

If 15 years is more accurate - then things are a bit different.

Comment author: multifoliaterose 11 September 2010 01:59:45PM *  1 point [-]

"Just 50 years?" Shane Legg's explanation of why his mode is at 2025:

Thanks for pointing this out. I don't have the subject matter knowledge to make an independent assessment of the validity of the remarks in the linked article, but it makes points that I had not seen before.

I'd recur to CarlShulman's remark about selection bias here. I look forward to seeing the results of the hypothetical Bostrom survey and the SIAI collection of all public predictions.

If 15 years is more accurate - then things are a bit different.

I agree. There's still an issue of a lack of concrete directions of research at present but if 15 years is accurate then I agree with Eliezer that we should be in "crunch" mode (amassing resources specifically directed at future FAI research).

Comment author: Will_Newsome 14 September 2010 02:28:53AM *  1 point [-]

I agree. There's still an issue of a lack of concrete directions of research at present but if 15 years is accurate then I agree with Eliezer that we should be in "crunch" mode (amassing resources specifically directed at future FAI research).

At any rate, most rationalists who have seriously considered the topic will agree that there is a large amount of probability mass 15 years into the future: large enough that even if the median estimate till AGI is 2050, we're still in serious crunch time. The tails are fat in both directions. (This is important because it takes away a lot of the Pascalian flavoring that makes people (justifiably) nervous when reasoning about whether or not to donate to FAI projects: a 15% chance of FOOM before 2020 just feels very different to a bounded rationalist than a 0.5% chance of FOOM before 2020.)

Thanks for pointing this out. I don't have the subject matter knowledge to make an independent assessment of the validity of the remarks in the linked article, but it makes points that I had not seen before.

For what it's worth, Shane Legg is a pretty reasonable fellow who understands that AGI isn't automatically good, so we can at least rule out that his predictions are tainted by the thoughts of "Yay, technology is good, AGI is close!" that tend to cast doubt on the lack of bias in most AGI researchers' and futurists' predictions. He's familiar with the field and indeed wrote the book on Machine Super Intelligence. I'm more persuaded by Legg's arguments than most at SIAI, though, and although this isn't a claim that is easily backed by evidence, the people at SIAI are really freakin' good thinkers and are not to be disagreed with lightly.

Comment author: multifoliaterose 14 September 2010 05:42:38AM 0 points [-]

At any rate, most rationalists who have seriously considered the topic will agree that there is a large amount of probability mass 15 years into the future: large enough that even if the median estimate till AGI is 2050, we're still in serious crunch time. The tails are fat in both directions.

I recur to my concern about selection effects. If it really is reasonable to place a large amount of probability mass 15 years into the future, why are virtually all mainstream scientists (including the best ones) apparently oblivious to this?

I do think that it's sufficiently likely that the people in academia have erred that it's worth my learning more about this topic and spending some time pressing people within academia on this point. But at present I assign a low probability (~5%) to the notion that the mainstream has missed something so striking as a large probability of a superhuman AI within 15 years.

Incidentally, I do think that decisive paradigm-changing events are very likely to occur over the next 200 years and that this warrants focused effort on making sure that society is running as well as possible (as opposed to doing pure scientific research with the justification that it may pay off in 500 years).

Comment author: Will_Newsome 14 September 2010 06:14:08AM 1 point [-]

A fair response to this requires a post that Less Wrong desperately needs to read: People Are Crazy, the World Is Mad. Unfortunately this requires that I convince Michael Vassar or Tom McCabe to write it. Thus, I am now on a mission to enlist the great power of Thomas McCabe.

(A not-so-fair response: you underestimate the extent to which academia is batshit insane just like nearly every individual in it, you overestimate the extent to which scientists ever look outside of their tiny fields of specialization, you overestimate the extent to which the most rational scientists are willing to put their reputations on the line by even considering much less accepting an idea as seemingly kooky as 'human-level AI by 2035', and you underestimate the extent to which the most rational scientists are starting to look at the possibility of AGI in the next 50 years (which amounts to non-trivial probability mass in the next 15). I guess I don't know who the very best scientists are. (Dawkins and Tooby/Cosmides impress me a lot; Tooby was at the Summit. He signed a book that's on my table top. :D ) Basically, I think you're giving academia too much credit. These are all assertions, though; like I said, this response is not a fair one, but this way at least you can watch for a majoritarian bias in your thinking and a contrarian bias in my arguments.)

Comment author: multifoliaterose 14 September 2010 06:39:22AM 0 points [-]

I look forward to the hypothetical post.

As for your "not-so-fair response" - I seriously doubt that you know enough about academia to have any confidence in this view. I think that first hand experience is crucial to developing a good understanding of the strengths and weaknesses of academia.

(I say this with all due respect - I've read and admired some of your top level posts.)

Comment author: Will_Newsome 14 September 2010 06:54:01AM 1 point [-]

As for your "not-so-fair response" - I seriously doubt that you know enough about academia to have any confidence in this view. I think that first hand experience is crucial to developing a good understanding of the strengths and weaknesses of academia.

I definitely don't have the necessary first-hand-experience: I was reporting second-hand the impressions of a few people who I respect but whose insights I've yet to verify. Sorry, I should have said that. I deserve some amount of shame for my lack of epistemic hygiene there.

(I say this with all due respect - I've read and admired some of your top level posts.)

Thanks! I really appreciate it. A big reason for the large number of comments I've been barfing up lately is a desire to improve my writing ability such that I'll be able to make more and better posts in the future.

Comment author: jacob_cannell 15 September 2010 01:47:13AM *  0 points [-]

If it really is reasonable to place a large amount of probability mass 15 years into the future, why are virtually all mainstream scientists (including the best ones) apparently oblivious to this?

How do you support this? Have you done a poll of mainstream scientists (or better yet - the 'best' ones)? I haven't seen a poll exactly, but when IEEE ran a special issue on the Singularity, the opinions were divided almost 50/50. It's also worth noting that, if I remember correctly, the IEEE editor was against the Singularity hypothesis, so there may be some bias there.

And whose opinions should we count exactly? Do we value the opinions of historians, economists, psychologists, chemists, geologists, astronomers, etc etc as much as we value the opinions of neuroscientists, computer scientists, and engineers?

I'd actually guess that at this point in time, a significant chunk of the intelligence of say Silicon Valley believes that the default Kurzweil/Moravec view is correct - AGI will arrive around when Moore's law makes it so.

200 years? There is wisdom in some skepticism, but that seems excessive. If you hold such a view, you should analyze it with respect to its fundamental support based on a predictive technological roadmap - not a general poll of scientists.

The semiconductor industry predicts its own future pretty accurately, but they don't invite biologists, philosophers or mathematicians to those meetings. Their roadmap, and Moore's law in general, is the most relevant for predicting AGI.
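The Moore's-law style of prediction amounts to a simple doubling extrapolation. A toy version (every number below is an assumption chosen for illustration, not a figure from the industry roadmap or from anyone in this thread):

```python
import math

# Toy Moore's-law extrapolation. All three numbers are assumptions for
# illustration -- not data from the semiconductor roadmap.
start_flops = 1e12       # assumed compute budget circa 2010
brain_flops = 1e16       # assumed Moravec-style brain-equivalent estimate
doubling_years = 2.0     # assumed doubling period

# Years until the budget reaches the brain-equivalent figure.
years = doubling_years * math.log2(brain_flops / start_flops)
print(round(years, 1))   # ~26.6 years under these assumptions
```

The conclusion is entirely hostage to the two hardware estimates and to the doubling period holding up, which is exactly why the roadmap meetings matter.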

I base my own internal estimate on my own knowledge of the relevant fields - partly because this is so interesting and important that one should spend time investigating it.

I honestly suspect that most people who reject the possibility of near-term AGI have some deeper philosophical rejection.

If you are a materialist then intelligence is just another algorithm - something the brain does, and something we can build. It is an engineering problem and subject to the same future planning that we use for other engineering challenges.

Comment author: multifoliaterose 15 September 2010 02:01:52AM 2 points [-]

How do you support this? Have you done a poll of mainstream scientists (or better yet - the 'best' ones)?

I have not done a poll of mainstream scientists. Aside from Shane Legg, the one mainstream scientist who I know of who has written on this subject is Scott Aaronson in his The Singularity Is Far article.

I was not claiming that I have strong grounds for confidence in my impressions of expert views. But it is the case that if there's a significant probability that we'll see AGI over the next 15 years, mainstream scientists are apparently oblivious to this. They are not behaving as I would expect them to if they believed that AGI is 15 years off.

I haven't seen a poll exactly, but when IEEE ran a special issue on the Singularity, the opinions were divided almost 50/50. It's also worth noting that, if I remember correctly, the IEEE editor was against the Singularity hypothesis, so there may be some bias there.

Can you give a reference?

I'd actually guess that at this point in time, a significant chunk of the intelligence of say Silicon Valley believes that the default Kurzweil/Moravec view is correct - AGI will arrive around when Moore's law makes it so.

This is interesting. I presume then that they believe that the software aspect of the problem is easy. Why do they believe this?

200 years? There is wisdom in some skepticism, but that seems excessive. If you hold such a view, you should analyze it with respect to its fundamental support based on a predictive technological roadmap - not a general poll of scientists.

I have sufficiently little subject matter knowledge so that it's reasonable for me to take the outside view here and listen to people who seem to know what they're talking about rather than attempting to do a detailed analysis myself.

Comment author: jacob_cannell 15 September 2010 02:31:10AM *  1 point [-]

Aside from Shane Legg, the one mainstream scientist who I know of who has written on this subject is Scott Aaronson in his The Singularity Is Far article.

Yes, from my reading of Shane Legg I think his prediction is a reasonable inside view and close to my own. But keep in mind it is also something of a popular view. Kurzweil's latest tome was probably not much news for most of its target demographic (Silicon Valley).

I've read Aaronson's post and his counterview seems to boil down to generalized pessimism, which I don't find to be especially illuminating. However, he does raise the good point about solving subproblems first. Of course, Kurzweil spends a good portion of TSIN summarizing progress in sub-problems of reverse engineering the brain.

There appears to be a good deal of neuroscience research going on right now, but perhaps not nearly enough serious computational neuroscience and AGI research as we may like, but it is still proceeding. MIT's lab is no joke.

There is some sort of strange academic stigma though as Legg discusses on his blog - almost like a silent conspiracy against serious academic AGI. Nonetheless, there appears to be no stigma against the precursors, which is where one needs to start anyway.

I was not claiming that I have strong grounds for confidence in my impressions of expert views. But it is the case that if there's a significant probability that we'll see AGI over the next 15 years, mainstream scientists are apparently oblivious to this. They are not behaving as I would expect them to if they believed that AGI is 15 years off.

I do not think we can infer their views on this matter from their behavior. Given the general awareness of the meme, I suspect a good portion of academics have heard of it. That doesn't mean that anyone will necessarily change their behavior.

I agree this seems really odd, but then I think - how have I changed my behavior? And it dawns on me that this is a much more complex topic.

For the IEEE singularity issue - just google it .. something like "IEEE Singularity special issue". I'm having slow internet atm.

This is interesting. I presume then that they believe that the software aspect of the problem is easy. Why do they believe this?

Because any software problem can become easy given enough hardware.

For example, we have enough neuroscience data to build reasonably good models of the low-level cortical circuits today. We also know the primary function of perhaps 5% of the higher-level pathways. For much of that missing 95% we have abstract theories but are still very much in the dark.

With enough computing power we could skip tricky neuroscience or AGI research and just string together brain-ish networks built on our current cortical circuit models, throw them in a massive VR game-world sim that sets up increasingly difficult IQ puzzles as a fitness function, and use massive evolutionary search to get something intelligent.

The real solution may end up looking something like that, but will probably use much more human intelligence and be less wasteful of our computational intelligence.
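The brute-force evolutionary-search idea reads, in miniature, something like this. This is a toy genetic algorithm against a stand-in puzzle: nothing here models cortical circuits or VR worlds, and every name and parameter is illustrative.

```python
import random

def evolve(fitness, genome_len, pop_size=50, generations=200, mut_rate=0.02, seed=0):
    """Toy genetic algorithm: truncation selection, one-point crossover,
    bit-flip mutation, with the best two genomes carried over unchanged."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=fitness, reverse=True)
        parents = ranked[: pop_size // 2]           # keep the fitter half
        children = ranked[:2]                       # elitism
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, genome_len)      # one-point crossover
            child = a[:cut] + b[cut:]
            children.append([g ^ (rng.random() < mut_rate) for g in child])
        pop = children
    return max(pop, key=fitness)

# Stand-in "IQ puzzle": reward genomes for matching a hidden alternating pattern.
target = [1, 0] * 16
best = evolve(lambda g: sum(x == t for x, t in zip(g, target)), genome_len=32)
```

The point of the miniature is the cost structure: the search itself is trivial to write, and all of the difficulty moves into the fitness function (the "increasingly difficult IQ puzzles") and the sheer number of evaluations, which is where the computing-power assumption does all the work.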

Comment author: timtyler 01 October 2010 05:07:43PM 0 points [-]

This is interesting. I presume then that they believe that the software aspect of the problem is easy. Why do they believe this?

Because any software problem can become easy given enough hardware.

That would have been a pretty naive reply - since we know from public key crypto that it is relatively easy to make really difficult problems that require stupendous quantities of hardware to solve.
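The asymmetry is easy to show in miniature (toy numbers; real public-key moduli are hundreds of digits, which makes the search direction astronomically larger):

```python
import math

# Easy direction: one multiplication of two primes.
p, q = 1000003, 1000033
n = p * q

# Hard direction: recovering p and q by trial division takes ~sqrt(n) steps,
# and sqrt(n) doubles every time n gains two bits.
def factor(n):
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return d, n // d
    return None

print(factor(n))                 # (1000003, 1000033)
```

Creating the problem costs one multiplication; solving it by brute force costs about a million divisions here, and the gap widens exponentially with the size of the primes.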

Comment author: timtyler 01 October 2010 05:13:44PM *  0 points [-]

A poll of mainstream scientists sounds like a poor way to get an estimate of the date of arrival of "human-level" machine minds - since machine intelligence is a complex and difficult field - and so most outsiders will probably be pretty clueless.

Also, 15 years is still a long way off: people may think 5 years out, when they are feeling particularly far sighted. Expecting major behavioral changes from something 15 years down the line seems a bit unreasonable.

Comment author: timtyler 01 October 2010 05:03:01PM 0 points [-]

I'd actually guess that at this point in time, a significant chunk of the intelligence of say Silicon Valley believes that the default Kurzweil/Moravec view is correct - AGI will arrive around when Moore's law makes it so.

Of course, neither Kurzweil nor Moravec thinks any such thing - both estimate that computers with the processing power of the human brain will arrive a considerable while before they think the required software will be developed.

Comment author: timtyler 11 September 2010 09:21:19PM *  1 point [-]

The biggest optimist I have come across is Peter Voss. His estimate in 2009 was around 8 years (7:00 into the interview). However, he obviously has something to sell - so maybe we should not pay too much attention to his opinion, due to the signalling effects associated with confidence.

Comment author: NancyLebovitz 11 September 2010 10:38:00PM 0 points [-]

Optimist or pessimist?

Comment author: timtyler 12 September 2010 07:06:36AM *  0 points [-]