timtyler comments on Call for new SIAI Visiting Fellows, on a rolling basis - Less Wrong

Post author: AnnaSalamon 01 December 2009 01:42AM


Comment author: timtyler 09 December 2009 11:46:27PM *  0 points

re: "almost everyone [...] said they hoped that uploads would be developed before AGI"

IMO, that explains much of the interest in uploads: wishful thinking.

Comment author: gwern 10 December 2009 12:20:53AM 5 points

Reminds me of Kevin Kelly's The Maes-Garreau Point:

"Nonetheless, her colleagues really, seriously expected this bridge to immortality to appear soon. How soon? Well, curiously, the dates they predicted for the Singularity seem to cluster right before the years they were expected to die. Isn’t that a coincidence?"

Possibly the single most disturbing bias-related essay I've read, because I realized as I was reading it that my own uploading prediction was very close to my expected lifespan (based on my family history) - only 10 or 20 years past my death. It surprises me sometimes that no one else on LW/OB seems to've heard of Kelly's Maes-Garreau Point.
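The check Kelly describes is simple to sketch: for each predictor, compare the predicted date against a rough life expectancy and see whether predictions cluster just inside the predictor's expected lifespan. A minimal sketch in Python - the names are omitted, the birth years and predictions are hypothetical, and the flat 80-year life expectancy is a crude placeholder, not actuarial data:

```python
# Sketch of the Maes-Garreau check: does a predicted Singularity date
# fall just before the predictor's expected death year?
# All birth years and predictions below are hypothetical.

LIFE_EXPECTANCY = 80  # crude placeholder; a real check would use actuarial tables

predictors = [
    # (birth_year, predicted_singularity_year)
    (1950, 2025),
    (1970, 2045),
    (1930, 2005),
]

def maes_garreau_gap(birth_year, predicted_year, life_expectancy=LIFE_EXPECTANCY):
    """Years between the prediction and the predictor's expected death.
    Small positive gaps (prediction just before expected death) are the
    clustering pattern Kelly describes."""
    expected_death = birth_year + life_expectancy
    return expected_death - predicted_year

gaps = [maes_garreau_gap(b, p) for b, p in predictors]
print(gaps)  # → [5, 5, 5]: each hypothetical prediction sits 5 years before expected death
```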

Comment author: CarlShulman 10 December 2009 03:04:41PM *  6 points

It's an interesting methodology, but the Maes-Garreau data is just terrible quality. For every person I know on that list, the attached point estimate is misleading to grossly misleading.

For instance, it gives Nick Bostrom as predicting a Singularity in 2004, when Bostrom actually gives a broad probability distribution over the 21st century, with much probability mass beyond it as well. 2004 is in no way a good representative statistic of that distribution, and someone who had read his papers on the subject or emailed him could easily find that out. The Yudkowsky number was the low end of a range (if I say that between 100 and 500 people were at an event, that's not the same thing as an estimate of 100 people!), and subsequently disavowed in favor of a broader probability distribution regardless.

Marvin Minsky is listed as predicting 2070, when he has also given an estimate of most likely "5 to 500" years, and this treatment is inconsistent with the treatment of the previous two estimates. Robin Hanson's name is spelled incorrectly, and the figure beside his name is grossly unrepresentative of his writing on the subject (available for free on his website for the 'researcher' to look at).

The listing for Kurzweil gives 2045, which is when Kurzweil expects a Singularity, as he defines it (meaning just an arbitrary benchmark for total computing power), but in his books he suggests that human brain emulation and life extension technology will be available in the previous decade, which would be the "living long enough to live a lot longer" break-even point if he were right about that.

I'm not sure about the others on that list, but given the quality of the observed data, I don't place much faith in the dataset as a whole. It also seems strangely sparse: where is Turing, or I.J. Good? Dan Dennett, Stephen Hawking, Richard Dawkins, Doug Hofstadter, Martin Rees, and many other luminaries are on record predicting the eventual creation of superintelligent AI on long time-scales, well after their actuarially predicted deaths. I think this search failed to pick up anyone using equivalent language in place of the term 'Singularity,' and was skewed as a result. Also, people who think that a technological singularity or the like will probably not occur for over 100 years are less likely to think it an important issue to talk about right now, and so are less likely to appear in a group selected by looking for attention-grabbing pronouncements.

A serious attempt at this analysis would aim at the following:

1) Not using point estimates, which can't do justice to a probability distribution. Give a survey that lets people assign their probability mass to different periods, or at least specifically ask for an interval, e.g. 80% confidence that an intelligence explosion will have begun/been completed after X but before Y.

2) Emailing the survey to living people to get their actual estimates.

3) Surveying a group identified via some other criterion (like knowledge of AI; note that participants at the AI@50 conference were electronically surveyed on timelines to human-level AI) to reduce selection effects.
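Point 1 can be illustrated with a short sketch: a broad probability distribution over the 21st century is badly summarized by any single year, while an 80% interval conveys the spread. The distribution below is entirely hypothetical, not any named person's estimate:

```python
# Why point estimates mislead: summarize a broad distribution over
# arrival decades with an 80% interval rather than a single year.
# The probability masses below are hypothetical.

import bisect

decades = [2020, 2030, 2040, 2050, 2060, 2070, 2080, 2090, 2100]
mass =    [0.05, 0.10, 0.15, 0.20, 0.20, 0.15, 0.08, 0.05, 0.02]  # sums to 1.0

# Cumulative distribution over the decade grid.
cdf = []
total = 0.0
for m in mass:
    total += m
    cdf.append(total)

def quantile(q):
    """Smallest decade whose cumulative mass reaches q."""
    return decades[bisect.bisect_left(cdf, q)]

# An 80% central interval: far more informative than citing, say,
# the earliest decade with any mass (2020) as "the" prediction.
low, high = quantile(0.10), quantile(0.90)
print(low, high)  # → 2030 2080
```

The same machinery answers survey question 1 directly: ask respondents to allocate mass to periods, then report intervals instead of a single "representative" year.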

Comment author: gwern 10 December 2009 09:52:44PM 0 points

It's an interesting methodology, but the Maes-Garreau data is just terrible quality.

See, this is the sort of response I would expect: a possible bias is identified, some basic data is collected which suggests that it's plausible, and then we begin a more thorough inspection. Complete silence, though, was not what I expected.

where is Turing

Turing would be hard to do. In 1950 he predicted that within another 50 years (i.e. by 2000; Turing was born in 1912, so he would've been 88) a machine could play his imitation game well enough that an average interrogator would have no more than a 70% chance of making the right identification, and that this would be as good as a real mind. But is this a date for the Singularity, or just for genuine consciousness?

Comment author: CarlShulman 11 December 2009 01:18:54AM 1 point

Yes, I considered that ambiguity, and certainly you couldn't send him a survey. But it gives a lower bound, and Turing does talk about machines equaling or exceeding human capacities across the board.

Comment author: gwern 11 December 2009 02:21:40AM 0 points

Hm. Would it be justifiable to extrapolate Turing's predictions? Because we know that he was off by at least a decade on just the AI; presumably any Singularity prediction would have to be off by that much or more.

Comment author: Vladimir_Nesov 10 December 2009 12:10:18PM *  1 point

It surprises me sometimes that no one else on LW/OB seems to've heard of Kelly's Maes-Garreau Point.

It would be very surprising if you were right. I expect most of the people who have thought about how such estimates could be biased would hit on this idea within the first several minutes (even without experimental data).

Comment author: gwern 10 December 2009 01:40:18PM 0 points

It may be an obvious point on which to be biased, but how many of those people then go on to work out birthdates and prediction dates, or to look for someone else's work along those lines, like Maes-Garreau's?

Comment author: CarlShulman 10 December 2009 03:26:45PM 1 point

A lot of folk at SIAI have looked at and for age correlations.

Comment author: gwern 10 December 2009 09:36:22PM 1 point

And found?

Comment author: CarlShulman 11 December 2009 02:00:18AM *  6 points

1) Among those sampled, the young do not seem to systematically predict a later Singularity.

2) People do update their estimates based on incremental data (as they should), so we distinguish between estimated dates and estimated time-from-present.

2a) A lot of people burned by the 1980s AI bubble shifted both of those into the future.

3) A lot of AI folk with experience from that bubble have a strong taboo against making predictions for fear of harming the field by raising expectations. This skews the log of public predictions.

4) Younger people working on AGI (like Shane Legg, Google's Moshe Looks) are a self-selected group and tend to think that it is relatively close (within decades, and within their careers).

5) Random smart folk, not working on AI (physicists, philosophers, economists), of varied ages, tend to put broad distributions on AGI development with central tendencies in the mid-21st century.

Comment author: gwern 14 August 2012 12:53:23AM *  0 points

Is there any chance of the actual data or writeups being released? It's been almost 3 years now.

Comment author: CarlShulman 14 August 2012 01:12:54AM 0 points

Lukeprog has a big spreadsheet. I don't know his plans for it.

Comment author: gwern 14 August 2012 01:21:37AM 0 points

Hm... I wonder if that's the big spreadsheet ksotala has been working on for a while?

Comment author: gwern 11 December 2009 05:24:30PM *  0 points
  1. Evidence for, apparently.
  2. Yes, but shouldn't we use the earliest predictions by a person? Even a heavily biased person may produce reasonable estimates given enough data. The first few estimates are likely to be based mostly on intuition - or, in other words, bias.
  3. But which way? There may be a publication bias towards 'true believers', but then there may also be a bias towards unobjectionably far-away estimates like Minsky's 5 to 500 years. (One wonders what odds Minsky genuinely assigns to the first AI being created in 2500 AD.)
  4. Reasonable. Optimism is an incentive to work, and self-deception is probably relevant.
  5. Evidence for, isn't it? Especially if they assign even weak belief in significant life-extension breakthroughs, ~2050 is within their conceivable lifespan (since they know humans currently don't live past ~120, they'd have to be >~80 to be sure of not reaching 2050).
Comment author: mattnewport 10 December 2009 12:28:03AM 0 points

Kelly doesn't give references for the dates he cites as predictions for the singularity. Did Eliezer really predict at some point that the singularity would occur in 2005? That sounds unlikely to me.

Comment author: mattnewport 10 December 2009 12:31:15AM *  2 points

Hmm, I found this quote on Google:

A few years back I would have said 2005 to 2020. I got this estimate by taking my real guess at the Singularity, which was around 2008 to 2015, and moving the dates outward until it didn't seem very likely that the Singularity would occur before then or after then.

Seems to me that Kelly didn't interpret the prediction entirely reasonably (he picked the earlier date), but the later date would not disconfirm his theory either.

Comment author: Zack_M_Davis 10 December 2009 12:54:18AM *  2 points

Did Eliezer really predict at some point that the singularity would occur in 2005? That sounds unlikely to me.

Eliezer has disavowed many of his old writings:

I’ve been online since a rather young age. You should regard anything from 2001 or earlier as having been written by a different person who also happens to be named “Eliezer Yudkowsky”. I do not share his opinions.

But re the 2005 listing, cf. the now-obsolete "Staring Into the Singularity" (2001):

I do not "project" when the Singularity will occur. I have a "target date". I would like the Singularity to occur in 2005, which I think I would have a reasonable chance of doing via AI if someone handed me a hundred million dollars a year. The Singularity Institute would like to finish up in 2008 or so.

Comment author: Eliezer_Yudkowsky 10 December 2009 01:26:57AM 3 points

Doesn't sound much like the Eliezer you know, does it...

Comment author: timtyler 10 December 2009 06:38:34PM *  -2 points

Re: "Kelly doesn't give references for the dates he cites as predictions for the singularity."

That sucks. Also, "the singularity" is said to occur when minds get uploaded?!?

And "all agreed that once someone designed the first super-human artificial intelligence, this AI could be convinced to develop the technology to download a human mind immediately"?!?

I have a rather different take on things in my "On uploads" video:

http://www.youtube.com/watch?v=5myjWld1qN0