Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

"Robot scientists can think for themselves"

-3 Post author: CronoDAS 02 April 2009 09:16PM

I recently saw this Reuters article on Yahoo News. In typical science reporting fashion, the headline seems to be pure hyperbole - does anyone here know enough to clarify what the groups referenced have actually achieved?

These links represent what I could find:

Homepage of the "Robot Scientist" project: http://www.aber.ac.uk/compsci/Research/bio/robotsci/

Homepage of Hod Lipson: http://www.mae.cornell.edu/lipson/

Hod Lipson's 2007 paper "Automated reverse engineering of nonlinear dynamical systems" (pdf)

Comments (11)

Comment author: jimrandomh 02 April 2009 10:13:43PM 1 point

The Reuters article refers to two unrelated pieces of research. The first is about a laboratory robot (abstract) that automates some measurements, with software to analyze the results and decide what to measure next. Useful, but not really related to general-purpose AI.

The other is about a method for fitting differential equations to time series. I'm sure it has some applications somewhere, but I don't see any obvious way to apply it to AI.
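To make the "fitting differential equations to time series" idea concrete, here is a minimal sketch in Python. This is not the method from Lipson's paper (which uses symbolic regression to search over equation *forms*); it only illustrates the much simpler sub-problem of estimating a parameter once a candidate form, here dx/dt = a·x, has been assumed. All function names are invented for illustration.

```python
import math

def sample_series(a, x0, dt, n):
    """Sample the exact solution x(t) = x0 * exp(a*t) at n evenly spaced times."""
    return [x0 * math.exp(a * i * dt) for i in range(n)]

def fit_exponential_rate(xs, dt):
    """Estimate a in dx/dt = a*x from a time series by finite differences:
    a ~= (x[i+1] - x[i]) / (dt * x[i]), averaged over the series."""
    estimates = [(xs[i + 1] - xs[i]) / (dt * xs[i]) for i in range(len(xs) - 1)]
    return sum(estimates) / len(estimates)

# Generate a noiseless series with a = 0.5, then recover the rate from data alone.
xs = sample_series(a=0.5, x0=1.0, dt=0.001, n=1000)
a_hat = fit_exponential_rate(xs, dt=0.001)
```

The hard part that Lipson's system automates is not this parameter fit but deciding which functional form to try in the first place; with noisy data and an unknown form, one would search a space of candidate equations and score each by how well it predicts the observed derivatives.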

Comment author: thomblake 03 April 2009 06:01:02PM 0 points

but not really related to general-purpose AI.

Amongst people who actually build robots, it's generally understood that you don't get general-purpose AI by creating a 'general intelligence' and letting it run; it seems much more likely that we'll need a lot of small, task-specific systems that can work together.

Comment author: Eliezer_Yudkowsky 03 April 2009 06:16:30PM 3 points

Amongst people who actually install air conditioners, it's generally understood that you get general-purpose AI by adding freon.

The roboticists I know don't claim to know how to build AGI. Why would they?

Comment author: thomblake 03 April 2009 06:51:44PM 0 points

The roboticists I know don't claim to know how to build AGI. Why would they?

Because they read up on artificial intelligence, study philosophy of mind, and build systems that exhibit intelligent behavior. And unlike many people that claim to be AI researchers, they actually build working systems that seem to be engaging in learning, communication, and other intelligent behaviors.

Comment author: SoullessAutomaton 03 April 2009 06:55:42PM 3 points

And unlike many people that claim to be AI researchers, they actually build working systems that seem to be engaging in learning, communication, and other intelligent behaviors.

Anecdotally, in my experience, artificial intelligence is something of a God of the Gaps for computer science--techniques that work are appropriated by others, relabelled, and put to work. Someone who claims to be an AI researcher is essentially saying "I am studying things that don't actually work yet".

This is probably related to the long "AI winter" caused by the collapse of hype.

Comment author: thomblake 03 April 2009 06:57:14PM 1 point

It should be noted that the "AI winter" is somewhat apocryphal, and a lot of the much-maligned techniques of GOFAI (or things similar to them) are being used to great effect in small chunks that work together.

Comment author: SoullessAutomaton 03 April 2009 07:02:13PM 1 point

Yes, but how often do you hear those GOFAI techniques described as AI except in AI textbooks?

Speaking of which, I have a copy of Russell and Norvig's AIMA on my desk right now, and in fact I should probably be spending more time doing exercises from it and less time posting on LW...

Comment author: Vladimir_Nesov 03 April 2009 07:06:13PM 0 points

Not so; there are lots of problems in CS that you can't naturally label as AI problems. If you go in the opposite direction, saying that AI by definition solves all problems, then you can say that whatever unsolved problem you are working on, you are actually working on a special case of AI. But that's a pretty vacuous claim.

Comment author: Vladimir_Nesov 03 April 2009 06:22:07PM 0 points

You are channeling too much certainty through the reference to authority. We are too far away from seeing the solution to describe its form in detail, much less to defer to the popular perception.

Comment author: thomblake 03 April 2009 06:53:17PM 1 point

You are channeling too much certainty through the reference to authority. We are too far away from seeing the solution to describe its form in detail, much less to defer to the popular perception.

Rather in line with my point. Claiming that this is not really related to general-purpose AI did not seem warranted, when the people who build the closest things to thinking machines that we have would disagree with that sentiment. I was showing that the statement lacked merit because informed people think otherwise.

Comment author: Vladimir_Nesov 03 April 2009 07:01:43PM *  0 points

Err, my point is obviously that the AI researchers are too far away from seeing the solution, so their opinion shouldn't count as anything approaching certainty. This was to point out the false connotation of your original comment. Not to mention that there is actually no consensus among the experts, which makes your statement factually wrong as well.