JoshuaZ comments on Open Thread June 2010, Part 3 - Less Wrong
None of those people are AI theorists, so it isn't clear that their opinions should get that much weight given that the question is outside their area of expertise (incidentally, I'd be curious what citation you have for the Hawking claim). From the computer scientists I've talked to, the impression I get is that they see AI as such a failure that most of them aren't bothering to do much research in it beyond narrow-purpose machine learning or expert systems. There's also an issue of sampling bias: people who think a technology is going to work are generally louder about it than people who think it won't. For example, a lot of physicists are very skeptical that Tokamak fusion reactors will be practical anytime in the next 50 years, but the people who talk about them a lot are the people who think they will be.
Note also that nothing in Yoreth's post actually relied on or argued that there won't be moderately smart AI, so it doesn't go against what he's said to point out that some experts think there will be very smart AI (although certainly some people on that list, such as Chalmers and Hanson, do believe that some intelligence-explosion-like event will occur). Indeed, Yoreth's second argument applies to roughly any level of intelligence. So overall, I don't think the point about those individuals does much to address the argument.
That's a very good point. The AI theorist presumably knows more about avenues that have not done very well (neural nets, other forms of machine learning, expert systems) but isn't likely to have much more general knowledge. That does mean the AI individual has a better understanding of how many different approaches to AI have failed miserably. But that's just a comparison to your example of the physics grad student who can code. Most of the people you mentioned in your reply to Yoreth clearly have knowledge bases closer to that of the AI prof than to the physics grad student; Hanson certainly has looked a lot at various failed attempts at AI. I think I'll withdraw this argument. You are correct that these individuals on the whole are likely to have about as much relevant expertise as the AI professor.
So people with no experience programming robots, but who know the equations governing them, would just be able to come up, on the spot, with code comparable to the AI profs'? What do they teach in AI courses, if not the kind of thing that would make you better at this?
Machine learning, more math/probability theory/belief networks background?
There is a ton of knowledge about probabilistic processes defined by networks in various ways, numerical methods for inference in them, clustering, etc. All the fundamental material in this range has applications to physics, and some of it was known in physics before being reinvented in machine learning, so in principle a really good physics grad could know that stuff, but it's more than the standard curriculum requires. On the other hand, it's much more directly relevant to probabilistic methods in machine learning. Of course both should have a good background in statistics and Bayesian probability theory, but probabilistic analysis of nontrivial processes in particular builds unique intuitions that a physics grad won't necessarily possess.
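To make the comparison concrete, here is a minimal sketch of the kind of probabilistic machinery being referred to: exact inference by enumeration in a tiny two-node belief network. The network (Rain → WetGrass), the probability numbers, and the function names are all invented purely for illustration.

```python
# Toy two-node belief network: Rain -> WetGrass.
# All numbers below are made-up example values.

P_RAIN = 0.2                        # P(Rain = True)
P_WET_GIVEN = {True: 0.9,           # P(Wet | Rain)
               False: 0.1}          # P(Wet | no Rain)

def posterior_rain_given_wet() -> float:
    """Return P(Rain | Wet) via Bayes' rule, by enumerating the joint."""
    joint_rain = P_RAIN * P_WET_GIVEN[True]              # P(Rain, Wet)
    joint_no_rain = (1 - P_RAIN) * P_WET_GIVEN[False]    # P(~Rain, Wet)
    return joint_rain / (joint_rain + joint_no_rain)

print(round(posterior_rain_given_wet(), 4))  # -> 0.6923
```

A physics grad comfortable with conditional probability can follow this immediately; the machine-learning-specific intuition is in scaling the same enumeration idea to large networks, where exact enumeration blows up and numerical approximation methods take over.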
Re: "What does an average AI prof know that a physics graduate who can code doesn't know? I'm struggling to name even one thing. If you set the two of them to code AI for some competition like controlling a robot, I doubt that there would be much advantage to the AI guy."
A very odd opinion. We have 60 years of study of the field, and have learned quite a bit, judging by things like the state of translation and speech recognition.
I am gratified to find that someone else shares this opinion.
A better way to phrase the question might be: what can an average AI prof do that a physics graduate who can code can't?
Statistics vs machine learning: FIGHT!
Could you clarify exactly what Hutter has done that has advanced the frontier? I used to be very nearly a "Hutter enthusiast", but I eventually concluded that his entire work is:
"Here's a few general algorithms that are really good, but take way too long to be of any use whatsoever."
Am I missing something? Is there something of his I should read that will open my eyes to the ease of mechanizing intelligence?
This seems like a fairly reasonable description of the work's impact:
"Another theme that I picked up was how central Hutter’s AIXI and my work on the universal intelligence measure has become: Marcus and I were being cited in presentations so often that by the last day many of the speakers were simply using our first names. As usual there were plenty of people who disagree with our approach, however it was clear that our work has become a major landmark in the area."
But why does it get those numerous citations? What real-world, non-academic consequences have resulted from this massive usage of Hutter's intelligence definition, which would distinguish it from a mere mass frenzy?
No time for a long explanation from me, but "universal intelligence" seems important partly because it shows how simple an intelligent agent can be, if you abstract away most of its complexity into a data-compression system. It is just a neat way to break down the problem.
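The "abstract intelligence into compression" framing can be sketched in a few lines. This is a toy illustration, not Hutter's actual AIXI formalism: it uses `zlib` as a crude stand-in for an ideal compressor, and the names `compressed_size` and `predict_next` are invented here.

```python
# Toy sketch of "intelligence via compression" (NOT AIXI itself):
# predict the next symbol of a sequence as whichever continuation a
# general-purpose compressor finds more regular, i.e. compresses smaller.
import zlib

def compressed_size(data: bytes) -> int:
    """Crude, computable proxy for Kolmogorov complexity."""
    return len(zlib.compress(data, 9))

def predict_next(history: str) -> str:
    """Pick the continuation ('0' or '1') with the smaller compressed
    size; ties resolve to '0'."""
    data = history.encode()
    return min("01", key=lambda c: compressed_size(data + c.encode()))

print(predict_next("0" * 100))  # a constant history favors continuing it
```

The point of the decomposition is that once you posit a sufficiently good compressor, the agent wrapped around it is almost trivial; all the difficulty has been pushed into the compression component.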
Surely everyone has been doing that from the beginning.
The AI prof is more likely to know more things that don't work and the difficulty of finding things that do. Which is useful knowledge when predicting the speed of AI development, no?
Trying to model the world as crisp logical statements, a la blocks world, for example.
Yup... which things were you asking for? Examples of things that do work? You don't actually need to find them to know that they are hard to find!
I think Hofstadter could fairly be described as an AI theorist.
So could Robin Hanson.