timtyler comments on The Importance of Self-Doubt - Less Wrong

Post author: multifoliaterose 19 August 2010 10:47PM

Comment author: wedrifid 03 September 2010 12:04:35PM * 2 points

I think that it would be better for Eliezer and for the world at large if Eliezer seriously considered the possibility that he's vastly overestimated his chances of building a Friendly AI.

We haven't heard Eliezer say how likely he believes it is that he creates a Friendly AI. He has been careful not to discuss that subject. If he thought his chances of success were 0.5%, then I would expect him to take exactly the same actions.

(ETA: With the insertion of 'relative', I suspect I would more accurately be considering the position you are presenting.)

Comment author: multifoliaterose 03 September 2010 12:33:35PM * 3 points

Right, so in my present epistemological state I find it extremely unlikely that Eliezer will succeed in building a Friendly AI. I gave an estimate here which proved to be surprisingly controversial.

The main points that inform my thinking here are:

  1. The precedent for people outside of the academic mainstream having mathematical/scientific breakthroughs in recent times is extremely weak. In my own field of pure math I know of only two people without PhDs in math or related fields who have produced something memorable in the last 70 years or so, namely Kurt Heegner and Martin Demaine. And even Heegner and Demaine are (relatively speaking) quite minor figures. It's very common for self-taught amateur mathematicians to greatly underestimate the difficulty of substantive original mathematical research. I find it very likely that the same is true in virtually all scientific fields and thus have an extremely skeptical Bayesian prior against any proposition of the type "amateur intellectual X will solve major scientific problem Y."

  2. From having talked with computer scientists and AI researchers, I have a very strong impression that the consensus is that AGI is way out of reach at present. See for example points #1 and #5 of Scott Aaronson's The Singularity is Far.

The fact that Eliezer does not appear to have seriously contemplated or addressed the two points above and their implications diminishes my confidence in his odds of success still further.

Comment author: timtyler 10 September 2010 09:15:10PM * 1 point

From having talked with computer scientists and AI researchers, I have a very strong impression that the consensus is that AGI is way out of reach at present. See for example points #1 and #5 of Scott Aaronson's The Singularity is Far.

I don't think there's any such consensus. Most of those involved know that they don't know with very much confidence. For a range of estimates, see the bottom of:

http://alife.co.uk/essays/how_long_before_superintelligence/

Comment author: multifoliaterose 10 September 2010 10:24:33PM 2 points

For what it's worth, in saying "way out of reach" I didn't mean "chronologically far away"; I meant "far beyond the capacity of all present researchers." I think it's quite possible that AGI is just 50 years away.

I think that in the absence of plausibly relevant and concrete directions for AGI/FAI research, the chance of having any impact on the creation of an FAI through research is diminished by many orders of magnitude.

If there are plausibly relevant and concrete directions for AGI/FAI research then the situation is different, but I haven't heard examples that I find compelling.

Comment author: timtyler 11 September 2010 09:42:26AM * 1 point

"Just 50 years?" Shane Legg's explanation of why his mode is at 2025:

http://www.vetta.org/2009/12/tick-tock-tick-tock-bing/

If 15 years is more accurate, then things are a bit different.

Comment author: multifoliaterose 11 September 2010 01:59:45PM * 1 point

"Just 50 years?" Shane Legg's explanation of why his mode is at 2025:

Thanks for pointing this out. I don't have the subject matter knowledge to make an independent assessment of the validity of the remarks in the linked article, but it makes points that I had not seen before.

I'd recur to CarlShulman's remark about selection bias here. I look forward to seeing the results of the hypothetical Bostrom survey and the SIAI collection of all public predictions.

If 15 years is more accurate, then things are a bit different.

I agree. There's still an issue of a lack of concrete directions of research at present but if 15 years is accurate then I agree with Eliezer that we should be in "crunch" mode (amassing resources specifically directed at future FAI research).

Comment author: Will_Newsome 14 September 2010 02:28:53AM * 1 point

I agree. There's still an issue of a lack of concrete directions of research at present but if 15 years is accurate then I agree with Eliezer that we should be in "crunch" mode (amassing resources specifically directed at future FAI research).

At any rate, most rationalists who have seriously considered the topic will agree that there is a large amount of probability mass 15 years into the future: large enough that even if the median estimate till AGI is 2050, we're still in serious crunch time. The tails are fat in both directions. (This is important because it takes away a lot of the Pascalian flavoring that makes people (justifiably) nervous when reasoning about whether or not to donate to FAI projects: a 15% chance of FOOM before 2020 just feels very different to a bounded rationalist than a 0.5% chance of FOOM before 2020.)
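
To make the fat-tails point concrete, here is a minimal sketch - my own illustration, not anyone's published model - assuming a log-normal distribution over years-until-AGI with a made-up spread parameter:

    # Illustrative only: a wide, fat-tailed distribution whose median sits
    # at 2050 still puts non-trivial probability mass on the near term.
    # The spread (sigma) is a hypothetical figure, not a published estimate.
    from math import erf, log, sqrt

    def lognormal_cdf(x, mu, sigma):
        # P(T <= x) for T ~ LogNormal(mu, sigma)
        return 0.5 * (1.0 + erf((log(x) - mu) / (sigma * sqrt(2.0))))

    median_years = 40.0     # median estimate: AGI in 2050, i.e. 40 years from 2010
    mu = log(median_years)  # a log-normal's median is exp(mu)
    sigma = 1.0             # hypothetical spread; larger means fatter tails

    # Probability mass within 15 years (i.e. before ~2025):
    print(lognormal_cdf(15.0, mu, sigma))  # roughly 0.16 with these numbers

With these (made-up) numbers, a median of 2050 coexists with about a 16% chance inside 15 years, which is the sense in which the median alone understates the case for crunch mode.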

Thanks for pointing this out. I don't have the subject matter knowledge to make an independent assessment of the validity of the remarks in the linked article, but it makes points that I had not seen before.

For what it's worth, Shane Legg is a pretty reasonable fellow who understands that AGI isn't automatically good, so we can at least rule out that his predictions are tainted by the thoughts of "Yay, technology is good, AGI is close!" that tend to cast doubt on the lack of bias in most AGI researchers' and futurists' predictions. He's familiar with the field and indeed wrote the book on Machine Super Intelligence. I'm more persuaded by Legg's arguments than most at SIAI are, though. And although this isn't a claim that is easily backed by evidence, the people at SIAI are really freakin' good thinkers and are not to be disagreed with lightly.

Comment author: multifoliaterose 14 September 2010 05:42:38AM 0 points

At any rate, most rationalists who have seriously considered the topic will agree that there is a large amount of probability mass 15 years into the future: large enough that even if the median estimate till AGI is 2050, we're still in serious crunch time. The tails are fat in both directions.

I recur to my concern about selection effects. If it really is reasonable to place a large amount of probability mass 15 years into the future, why are virtually all mainstream scientists (including the best ones) apparently oblivious to this?

I do think that it's sufficiently likely that the people in academia have erred that it's worth my learning more about this topic and spending some time pressing people within academia on this point. But at present I assign a low probability (~5%) to the notion that the mainstream has missed something so striking as a large probability of a superhuman AI within 15 years.

Incidentally, I do think that decisive paradigm-changing events are very likely to occur over the next 200 years and that this warrants focused effort on making sure that society is running as well as possible (as opposed to doing pure scientific research with the justification that it may pay off in 500 years).

Comment author: Will_Newsome 14 September 2010 06:14:08AM 1 point

A fair response to this requires a post that Less Wrong desperately needs to read: People Are Crazy, the World Is Mad. Unfortunately this requires that I convince Michael Vassar or Tom McCabe to write it. Thus, I am now on a mission to enlist the great power of Thomas McCabe.

(A not-so-fair response: you underestimate the extent to which academia is batshit insane just like nearly every individual in it, you overestimate the extent to which scientists ever look outside of their tiny fields of specialization, you overestimate the extent to which the most rational scientists are willing to put their reputations on the line by even considering, much less accepting, an idea as seemingly kooky as 'human-level AI by 2035', and you underestimate the extent to which the most rational scientists are starting to look at the possibility of AGI in the next 50 years (which amounts to non-trivial probability mass in the next 15). I guess I don't know who the very best scientists are. (Dawkins and Tooby/Cosmides impress me a lot; Tooby was at the Summit. He signed a book that's on my table top. :D ) Basically, I think you're giving academia too much credit. These are all assertions, though; like I said, this response is not a fair one, but this way at least you can watch for a majoritarian bias in your thinking and a contrarian bias in my arguments.)

Comment author: multifoliaterose 14 September 2010 06:39:22AM 0 points

I look forward to the hypothetical post.

As for your "not-so-fair response" - I seriously doubt that you know enough about academia to have any confidence in this view. I think that first hand experience is crucial to developing a good understanding of the strengths and weaknesses of academia.

(I say this with all due respect - I've read and admired some of your top level posts.)

Comment author: jacob_cannell 15 September 2010 01:47:13AM * 0 points

If it really is reasonable to place a large amount of probability mass 15 years into the future, why are virtually all mainstream scientists (including the best ones) apparently oblivious to this?

How do you support this? Have you done a poll of mainstream scientists (or better yet - the 'best' ones)? I haven't seen a poll exactly, but when IEEE ran a special on the Singularity, the opinions were divided almost 50/50. It's also important to note that the IEEE editor was against the Singularity hypothesis - if I remember correctly - so there may be some bias there.

And whose opinions should we count exactly? Do we value the opinions of historians, economists, psychologists, chemists, geologists, astronomers, etc etc as much as we value the opinions of neuroscientists, computer scientists, and engineers?

I'd actually guess that at this point in time, a significant chunk of the intelligence of, say, Silicon Valley believes that the default Kurzweil/Moravec view is correct - AGI will arrive around when Moore's law makes it so.

200 years? There is wisdom in some skepticism, but that seems excessive. If you hold such a view, you should analyze it with respect to its fundamental support based on a predictive technological roadmap - not a general poll of scientists.

The semiconductor industry predicts its own future pretty accurately, but they don't invite biologists, philosophers or mathematicians to those meetings. Their roadmap, and Moore's law in general, is the most relevant for predicting AGI.
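
For what it's worth, the Kurzweil/Moravec-style argument reduces to simple arithmetic once you grant a figure for brain-equivalent compute. A back-of-envelope sketch, in which every number is a contested assumption rather than established fact:

    # Back-of-envelope Moore's-law extrapolation. Illustrative figures only:
    # published estimates of brain-equivalent compute span several orders of
    # magnitude, and the baseline and doubling time are rough guesses.
    from math import log2

    brain_ops_per_sec = 1e16     # one common (disputed) estimate
    current_ops_per_sec = 1e13   # hypothetical circa-2010 supercomputer figure
    doubling_time_years = 1.5    # classic Moore's-law doubling period

    doublings_needed = log2(brain_ops_per_sec / current_ops_per_sec)
    years_to_parity = doublings_needed * doubling_time_years
    print(2010 + years_to_parity)  # ~2025 with these assumptions

Note that this only dates hardware parity; it says nothing about when the required software will exist, which is the usual objection to treating it as an AGI forecast.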

I base my own internal estimate on my own knowledge of the relevant fields - partly because this is so interesting and important that one should spend time investigating it.

I honestly suspect that most people who reject the possibility of near-term AGI have some deeper philosophical objection.

If you are a materialist, then intelligence is just another algorithm - something the brain does, and something we can build. It is an engineering problem and subject to the same future planning that we use for other engineering challenges.

Comment author: multifoliaterose 15 September 2010 02:01:52AM 2 points

How do you support this? Have you done a poll of mainstream scientists (or better yet - the 'best' ones)?

I have not done a poll of mainstream scientists. Aside from Shane Legg, the one mainstream scientist I know of who has written on this subject is Scott Aaronson, in his The Singularity Is Far article.

I was not claiming that I have strong grounds for confidence in my impressions of expert views. But it is the case that if there's a significant probability that we'll see AGI over the next 15 years, mainstream scientists are apparently oblivious to this. They are not behaving as I would expect them to if they believed that AGI is 15 years off.

I haven't seen a poll exactly, but when IEEE ran a special on the Singularity, the opinions were divided almost 50/50. It's also important to note that the IEEE editor was against the Singularity hypothesis - if I remember correctly - so there may be some bias there.

Can you give a reference?

I'd actually guess that at this point in time, a significant chunk of the intelligence of, say, Silicon Valley believes that the default Kurzweil/Moravec view is correct - AGI will arrive around when Moore's law makes it so.

This is interesting. I presume then that they believe that the software aspect of the problem is easy. Why do they believe this?

200 years? There is wisdom in some skepticism, but that seems excessive. If you hold such a view, you should analyze it with respect to its fundamental support based on a predictive technological roadmap - not a general poll of scientists.

I have sufficiently little subject matter knowledge that it's reasonable for me to take the outside view here and listen to people who seem to know what they're talking about, rather than attempting to do a detailed analysis myself.

Comment author: timtyler 01 October 2010 05:03:01PM 0 points

I'd actually guess that at this point in time, a significant chunk of the intelligence of, say, Silicon Valley believes that the default Kurzweil/Moravec view is correct - AGI will arrive around when Moore's law makes it so.

Of course, neither Kurzweil nor Moravec thinks any such thing - both estimate that a computer with the same processing power as the human brain will arrive a considerable while before they expect the required software to be developed.

Comment author: timtyler 11 September 2010 09:21:19PM * 1 point

The biggest optimist I have come across is Peter Voss. His estimate in 2009 was around 8 years (7:00 in). However, he obviously has something to sell, so maybe we should not pay too much attention to his opinion, given the signalling effects associated with confidence.

Comment author: NancyLebovitz 11 September 2010 10:38:00PM 0 points

Optimist or pessimist?

Comment author: timtyler 12 September 2010 07:06:36AM * 0 points