timtyler comments on New Year's Predictions Thread - Less Wrong

18 Post author: MichaelVassar 30 December 2009 09:39PM


Comment author: RichardKennaway 31 December 2009 04:44:48PM *  7 points

What facts or observations do you find provide the most compelling evidence that intelligent machines are at least ten years off?

It hasn't worked in sixty years of trying, and I see nothing in the current revival to suggest they have any ideas that are likely to do any better. To be specific, I mean people such as Marcus Hutter, Shane Legg, Steve Omohundro, Ben Goertzel, and so on -- those are the names that come to me off the top of my head. And by their current ideas for AGI I mean Bayesian reasoning, algorithmic information theory, AIXI, Novamente, etc.

I don't think any of these people are stupid or crazy (which is why I don't mention Mentifex in the same breath as them), and I wouldn't try to persuade any of them out of what they are doing unless I had something demonstrably better, but I just don't believe that collection of ideas can be made to work. The fundamental thing that is lacking in AGI research, and always has been, is knowledge of how brains work. The basic ideas that people have tried can be classified as (1) crude imitation of the lowest-level anatomy (neural nets), (2) brute-forced mathematics (automated reasoning, logical or probabilistic), or (3) attempts to code up what it feels like to be a mind (the whole cognitive AI tradition).

Indeed, how do you know that the NSA doesn't have such a machine chained up in its basement right now?

My estimates are unaffected by hypothetical possibilities for which there is no evidence, and are protected against that lack of evidence.

Besides, the current state of the world is not suggestive of the presence of AIs in it.

ETA: But this is becoming a digression from the purpose of the thread.

Comment author: timtyler 31 December 2009 07:02:18PM 3 points

Thanks for sharing. As previously mentioned, we share a generally negative impression of the chances of success within the next ten years.

However, it appears that I give more weight to the possibility that there are researchers within companies, within government organisations, or within other countries who are doing better than you suggest - or that there will be at some time over the next ten years. For example, Voss's estimate (from a year ago) was "8 years" - see: http://www.vimeo.com/3461663

We also appear to differ on our estimates of how important knowledge of how brains work will be. I think there is a good chance that it will not be very important.

Ignorance about NSA projects might not affect our estimates, but perhaps it should affect our confidence in them. An NSA intelligent agent might well remain hidden - on national security grounds. After all, if China's agent found out for sure that America had an agent too, who knows what might happen?

Comment author: PhilGoetz 31 December 2009 11:07:40PM 2 points

I would guess that the NSA is more interested in quantum computing than in AI.

Comment author: timtyler 01 January 2010 10:41:49AM 0 points

They are the National Security Agency. Which of those areas presents the bigger potential threat to national security? With a machine intelligence, you could build all the quantum computers you would ever need.