JoshuaZ comments on Open Thread June 2010, Part 3 - Less Wrong

Post author: Kevin 14 June 2010 06:14AM (6 points)


Comments (606)


Comment author: JoshuaZ 14 June 2010 02:07:37PM 3 points

None of those people are AI theorists, so it isn't clear that their opinions should carry much weight on a question outside their area of expertise (incidentally, I'd be curious what citation you have for the Hawking claim). From the computer scientists I've talked to, the impression I get is that they see AI as such a failure that most of them just aren't bothering to do much research in it, except for narrow-purpose machine learning or expert systems. There's also an issue of sampling bias: people who think a technology is going to work are generally louder about it than people who think it won't. For example, a lot of physicists are very skeptical that tokamak fusion reactors will be practical anytime in the next 50 years, but the people who talk about them the most are the ones who think they will be.

Note also that nothing in Yoreth's post actually relied on or argued the claim that there won't be moderately smart AI, so pointing out that some experts think there will be very smart AI doesn't go against what he said (although certainly some people on that list, such as Chalmers and Hanson, do believe that some form of intelligence-explosion-like event will occur). Indeed, Yoreth's second argument applies to roughly any level of intelligence. So overall, I don't think the point about those individuals does much to address the argument.

Comment deleted 14 June 2010 03:01:10PM *
Comment author: JoshuaZ 14 June 2010 03:07:49PM 10 points

That's a very good point. The AI theorist presumably knows more about avenues that have not done very well (neural nets, other forms of machine learning, expert systems), but isn't likely to have much more general knowledge. However, that does mean the AI researcher has a better understanding of how many different approaches to AI have failed miserably. But that's just a comparison to your example of the physics grad student who can code. Most of the people you mentioned in your reply to Yoreth clearly have knowledge bases closer to that of the AI prof than to the physics grad student. Hanson certainly has looked a lot at various failed attempts at AI. I think I'll withdraw this argument: you are correct that these individuals, on the whole, are likely to have about as much relevant expertise as the AI professor.

Comment author: SilasBarta 14 June 2010 09:19:45PM 3 points

"What does an average AI prof know that a physics graduate who can code doesn't know? I'm struggling to name even one thing. If you set the two of them to code AI for some competition like controlling a robot, I doubt that there would be much advantage to the AI guy."

So people with no experience programming robots, but who know the equations governing them, would be able to come up with code comparable to an AI prof's on the spot? What do they teach in AI courses, if not the kind of thing that would make you better at this?

Comment author: Vladimir_Nesov 14 June 2010 03:07:50PM *  3 points

"What does an average AI prof know that a physics graduate who can code doesn't know?"

Machine learning, more math/probability theory/belief networks background?

Comment deleted 14 June 2010 03:15:02PM
Comment author: Vladimir_Nesov 14 June 2010 03:33:51PM *  2 points

There is a ton of knowledge about probabilistic processes defined over networks in various ways, numerical methods for inference in them, clustering, etc. All the fundamental material in this range has applications to physics, and some of it was known in physics before being reinvented in machine learning, so in principle a really good physics grad could know that stuff, but it's more than the standard curriculum requires. On the other hand, it's much more directly relevant to probabilistic methods in machine learning. Of course, both should have a good background in statistics and Bayesian probability theory, but probabilistic analysis of nontrivial processes in particular builds unique intuitions that a physics grad won't necessarily possess.
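As a toy illustration of the kind of belief-network inference being discussed (my own sketch, not from the thread, with made-up probabilities): exact posterior computation by enumeration on a minimal two-node network, Rain → WetGrass.

```python
# A two-node belief network: Rain -> WetGrass.
# We observe WetGrass = true and infer P(Rain = true) by enumeration.

P_RAIN = 0.2                       # prior P(Rain = true), assumed for the example
P_WET_GIVEN_RAIN = {True: 0.9,     # P(WetGrass = true | Rain = true)
                    False: 0.1}    # P(WetGrass = true | Rain = false)

def posterior_rain_given_wet():
    """Return P(Rain = true | WetGrass = true) via Bayes' rule."""
    joint_rain = P_RAIN * P_WET_GIVEN_RAIN[True]            # P(Rain, Wet)
    joint_no_rain = (1 - P_RAIN) * P_WET_GIVEN_RAIN[False]  # P(~Rain, Wet)
    return joint_rain / (joint_rain + joint_no_rain)

print(posterior_rain_given_wet())  # 0.18 / 0.26, about 0.692
```

Real networks with many variables need the smarter inference algorithms (variable elimination, belief propagation) that an ML curriculum covers and a physics curriculum typically doesn't.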

Comment author: timtyler 15 June 2010 09:05:43PM *  2 points

Re: "What does an average AI prof know that a physics graduate who can code doesn't know? I'm struggling to name even one thing. If you set the two of them to code AI for some competition like controlling a robot, I doubt that there would be much advantage to the AI guy."

A very odd opinion. We have 60 years of study in the field, and have learned quite a bit, judging by things like the state of machine translation and speech recognition.

Comment author: Daniel_Burfoot 14 June 2010 08:46:16PM 2 points

"I disagree with this, basically because AI is a pre-paradigm science."

I am gratified to find that someone else shares this opinion.

"What does an average AI prof know that a physics graduate who can code doesn't know?"

A better way to phrase the question might be: what can an average AI prof do that a physics graduate who can code can't?

Comment deleted 14 June 2010 10:47:12PM
Comment author: CarlShulman 15 June 2010 12:46:08PM 4 points

"I think that the closest we have seen is the ML revolution, but when you look at it, it is not new science, it is just statistics correctly applied."

Statistics vs machine learning: FIGHT!

Comment author: SilasBarta 15 June 2010 12:20:17AM 0 points

Could you clarify exactly what Hutter has done that has advanced the frontier? I used to be very nearly a "Hutter enthusiast", but I eventually concluded that his entire work is:

"Here's a few general algorithms that are really good, but take way too long to be of any use whatsoever."

Am I missing something? Is there something of his I should read that will open my eyes to the ease of mechanizing intelligence?

Comment author: timtyler 15 June 2010 09:10:54PM 1 point

This seems like a fairly reasonable description of the work's impact:

"Another theme that I picked up was how central Hutter’s AIXI and my work on the universal intelligence measure has become: Marcus and I were being cited in presentations so often that by the last day many of the speakers were simply using our first names. As usual there were plenty of people who disagree with our approach, however it was clear that our work has become a major landmark in the area."

Comment author: SilasBarta 16 June 2010 04:36:54PM 0 points

But why does it get those numerous citations? What real-world, non-academic consequences have resulted from this massive usage of Hutter's intelligence definition, which would distinguish it from a mere mass frenzy?

Comment author: timtyler 16 June 2010 05:00:00PM *  0 points

No time for a long explanation from me, but "universal intelligence" seems important partly because it shows how simple an intelligent agent can be, if you abstract away most of its complexity into a data-compression system. It is just a neat way to break down the problem.
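For context, the definition being discussed (from Legg and Hutter's published work, not something established in this thread) scores an agent π by its expected value V in every computable environment μ, weighting each environment by its Kolmogorov complexity K:

```latex
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi}
```

The compression connection is the \(2^{-K(\mu)}\) term: simpler (more compressible) environments dominate the score, so an agent that models its world well under a short description does well on the measure.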

Comment deleted 15 June 2010 08:33:33AM
Comment author: timtyler 15 June 2010 09:07:39PM 0 points

Surely everyone has been doing that from the beginning.

Comment author: whpearson 14 June 2010 03:08:14PM 1 point

The AI prof is more likely to know more things that don't work, and the difficulty of finding things that do. That's useful knowledge when predicting the speed of AI development, no?

Comment deleted 14 June 2010 03:15:39PM
Comment author: whpearson 14 June 2010 03:22:49PM 0 points

Trying to model the world as crisp logical statements, à la blocks world, for example.

Comment deleted 14 June 2010 04:13:51PM
Comment author: whpearson 14 June 2010 04:25:51PM 0 points

Yup... which things were you asking for? Examples of things that do work? You don't actually need to find them to know that they are hard to find!

Comment author: MatthewW 14 June 2010 07:10:44PM 2 points

I think Hofstadter could fairly be described as an AI theorist.

Comment author: Emile 17 June 2010 02:14:59PM 2 points

So could Robin Hanson.