Why can't AI researchers formulate and test theories the way high-energy physicists do?
But surely they do? Every proposal for a way of doing AI (I'm reading this as AGI here) is a hypothesis about how an AI could be created, and the proposers' failure to create an AI is the refutation of that hypothesis. Science as normal. Talk of physics envy is just an excuse for failure. The problem with excusing failure is that it leaves you with failure, when the task is to succeed.
Every proposal for turning lead into gold is a hypothesis about how lead could be turned into gold, but this doesn't make alchemy science. Good science progresses through small problems conclusively solved, building on each other, not by trying and repeatedly failing to reach the grand goal.
The paper "Strong Inference" by John R. Platt is an essay on scientific methodology, published in Science in 1964. It starts off with a wonderfully aggressive claim:
Platt's point of departure is the observation that some scientific fields progress much more rapidly than others. Why should this be?
The definition of Strong Inference, according to Platt, is the formal, explicit, and regular adherence to the following procedure: (1) devise alternative hypotheses; (2) devise a crucial experiment, or several of them, with alternative possible outcomes, each of which will exclude one or more of the hypotheses; (3) carry out the experiment so as to get a clean result; and then recycle the procedure, making subhypotheses to refine the possibilities that remain.
This seems like a simple restatement of the scientific method. Why does Platt bother to tell us something we already know?
Platt gives us some nice historical anecdotes of strong inference at work. One is from high-energy physics:
The paper emphasizes the importance of systematicity and rigor over raw intellectual firepower. Roentgen, proceeding systematically, shows us the meaning of haste:
Later, Platt argues against the overuse of mathematics:
(Fast forward to the present, where we have people proving the existence of Nash equilibria in robotics and using Riemannian manifolds in computer vision, when robots can barely walk up stairs and the problem of face detection still has no convincing solution.)
One of the obstacles to hard science is that hypotheses must come into conflict, and one or the other must eventually win. This creates sociological trouble, but there's a solution:
Finally, Platt suggests that all scientists continually bear in mind The Question:
----
Now, LWers, I am not being rhetorical; I put these questions to you sincerely: Is artificial intelligence, rightly considered, an empirical science? If not, what is it? Why doesn't AI make progress like the fields mentioned in Platt's paper? Why can't AI researchers formulate and test theories the way high-energy physicists do? Can a field which is not an empirical science ever make claims about the real world?
If you have time and inclination, try rereading my earlier post on the Compression Rate Method, especially the first part, in the light of Platt's paper.
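For concreteness, here is a minimal sketch of the kind of decisive, quantitative comparison that a compression-based evaluation aims at, as I read the idea: two rival models of the same data are each scored by Shannon codelength, and the one that encodes the data in fewer bits wins the test outright. The models and data below are toy stand-ins of my own devising, not the actual method from the earlier post.

```python
import math
from collections import Counter

def codelength_bits(data, model_probs):
    """Shannon codelength: bits needed to encode `data` under a
    probability model, i.e. -sum(log2 p(x)). Lower = better compression."""
    return -sum(math.log2(model_probs[x]) for x in data)

# Two rival "theories" of the same byte stream: a uniform model and an
# empirical-frequency model (hypothetical stand-ins for real theories).
data = list(b"the quick brown fox jumps over the lazy dog")

# Theory A: all 256 byte values are equally likely (8 bits per symbol).
uniform = {x: 1 / 256 for x in range(256)}

# Theory B: symbols are distributed as observed, with Laplace smoothing
# so unseen symbols keep nonzero probability.
counts = Counter(data)
total = len(data)
empirical = {x: (counts.get(x, 0) + 1) / (total + 256) for x in range(256)}

bits_uniform = codelength_bits(data, uniform)
bits_empirical = codelength_bits(data, empirical)

# The theory that assigns the shorter codelength to the data wins the test.
print(f"uniform model:   {bits_uniform:.1f} bits")
print(f"empirical model: {bits_empirical:.1f} bits")
```

The appeal, in Platt's terms, is that the comparison is a clean result: both hypotheses confront the same data, and the codelength numbers exclude one of them without appeal to taste or rhetoric.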
Edited thanks to feedback from Cupholder.