In That Alien Message, Eliezer made some pretty wild claims:
My moral - that even Einstein did not come within a million light-years of making efficient use of sensory data.
Riemann invented his geometries before Einstein had a use for them; the physics of our universe is not that complicated in an absolute sense. A Bayesian superintelligence, hooked up to a webcam, would invent General Relativity as a hypothesis - perhaps not the dominant hypothesis, compared to Newtonian mechanics, but still a hypothesis under direct consideration - by the time it had seen the third frame of a falling apple. It might guess it from the first frame, if it saw the statics of a bent blade of grass.
They never suspected a thing. They weren't very smart, you see, even before taking into account their slower rate of time. Their primitive equivalents of rationalists went around saying things like, "There's a bound to how much information you can extract from sensory data." And they never quite realized what it meant, that we were smarter than them, and thought faster.
In the comments, Will Pearson asked for "some form of proof of concept". It seems that researchers at Cornell - Schmidt and Lipson - have done exactly that. See their video on Guardian Science:
'Eureka machine' can discover laws of nature - The machine formulates laws by observing the world and detecting patterns in the vast quantities of data it has collected
Researchers at Cambridge and Aberystwyth have gone one step further and implemented an AI system/robot to perform scientific experiments:
Researchers at Aberystwyth University in Wales and England's University of Cambridge report in Science today that they designed Adam - they describe how the bot operates by relating how he carried out one of his tasks, in this case to find out more about the genetic makeup of baker's yeast Saccharomyces cerevisiae, an organism that scientists use to model more complex life systems. Using artificial intelligence, Adam hypothesized that certain genes in baker's yeast code for specific enzymes that catalyze biochemical reactions. The robot devised experiments to test these beliefs, ran the experiments, and interpreted the results.
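As a purely illustrative sketch - not Adam's actual architecture, which reasons over logical models of yeast metabolism and drives real laboratory hardware - the hypothesize-test-interpret loop described above looks something like this (the gene and enzyme names and the toy "knockout assay" are all hypothetical):

```python
import itertools

# Toy stand-in for the hypothesize-test-interpret cycle described above.
# The "world" is a hidden mapping from genes to the enzymes they code for.
GENES = ["gene_a", "gene_b", "gene_c", "gene_d"]
ENZYMES = ["enzyme_1", "enzyme_2"]
TRUTH = {"gene_a": "enzyme_1", "gene_b": None,
         "gene_c": "enzyme_2", "gene_d": None}   # hidden ground truth

def run_experiment(gene, enzyme):
    """Stand-in for a knockout assay: does this gene code for this enzyme?"""
    return TRUTH[gene] == enzyme

# Hypothesis space: every assignment of each gene to an enzyme, or to none.
hypotheses = [dict(zip(GENES, combo))
              for combo in itertools.product(ENZYMES + [None], repeat=len(GENES))]

experiments = 0
while len(hypotheses) > 1:
    # Devise the experiment whose outcome splits the surviving
    # hypotheses most evenly - maximally informative either way.
    def imbalance(test):
        g, e = test
        yes = sum(1 for h in hypotheses if h[g] == e)
        return abs(2 * yes - len(hypotheses))
    gene, enzyme = min(itertools.product(GENES, ENZYMES), key=imbalance)
    # Run it, then interpret the result by discarding refuted hypotheses.
    outcome = run_experiment(gene, enzyme)
    hypotheses = [h for h in hypotheses if (h[gene] == enzyme) == outcome]
    experiments += 1

print(f"converged after {experiments} experiments: {hypotheses[0]}")
```

The point of the toy version is only that "devise experiments, run them, interpret the results" is a loop a machine can close on its own once the hypothesis space and the assay are in place.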
The crucial question is: what can we learn about the likely effectiveness of a "superintelligent" AI from the behavior of these AI programs? First of all, let us be clear: this AI is *not* a "superintelligence", so we shouldn't expect it to perform at that level. The problem we face is analogous to extrapolating how fast an Olympic sprinter can run from watching a baby crawl around on the floor. Furthermore, the Cornell machine was given a physical system that was specifically chosen to be easy to analyze, and a representation (equations) that is known to be suited to the problem.
We can certainly state that the program analyzed some data much faster than any human could have done. In a running time probably measured in hours or minutes, it took a huge stream of raw position and velocity data and found the underlying conserved quantities. And given likely algorithmic optimizations and another 10 years of Moore's law, we can safely say that in 10 years' time, that particular program will run in seconds on a $500 machine or milliseconds on a supercomputer. These results actually surprise me: an AI can automatically and rapidly analyze a physical system (albeit a rigged one).
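To make "found the underlying conserved quantities" concrete, here is a minimal sketch of the core trick. This is not Schmidt and Lipson's actual algorithm - they use symbolic regression over a vastly larger expression space, and their data came from real motion-capture hardware - but it shows how a program can pull an energy-like invariant out of raw trajectory data: generate candidate expressions over the observed variables and score each one by how nearly constant it stays.

```python
import itertools
import math

def pendulum_data(steps=2000, dt=0.001, g=9.81, L=1.0):
    """Simulate a pendulum with semi-implicit Euler (a stand-in for the
    motion-capture data the real experiments used)."""
    theta, omega = 2.0, 0.0
    data = []
    for _ in range(steps):
        data.append((theta, omega))
        omega -= (g / L) * math.sin(theta) * dt
        theta += omega * dt
    return data

# A tiny, hand-picked basis of candidate terms. A real symbolic
# regression system searches a far larger space of expressions.
TERMS = {
    "omega^2":    lambda th, om: om * om,
    "cos(theta)": lambda th, om: math.cos(th),
    "theta^2":    lambda th, om: th * th,
    "omega":      lambda th, om: om,
}

def variance(values):
    """Lower is better: how far the candidate is from being conserved."""
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / len(values)

data = pendulum_data()
best = None
# Try linear combinations term1 + c * term2 over pairs of basis terms.
for (n1, f1), (n2, f2) in itertools.combinations(TERMS.items(), 2):
    for c in (-19.62, -9.81, -1.0, 0.5, 1.0, 9.81, 19.62):
        score = variance([f1(th, om) + c * f2(th, om) for th, om in data])
        if best is None or score < best[0]:
            best = (score, f"{n1} + ({c}) * {n2}")

# For g=9.81, L=1 the energy-like invariant is omega^2 - 19.62*cos(theta),
# so the search should report "omega^2 + (-19.62) * cos(theta)".
print("most nearly conserved combination:", best[1])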
But, of course, one has to ask: how much more narrow-AI work would it take to actually look at video of some bouncing, falling and whirling objects and deduce a general physical law such as the earth's gravity and the laws governing air resistance, where the objects are not hand-picked to be easy to analyze? This is unclear. But I can see mechanisms whereby this would work, rather than merely having to submit to the overwhelming power of the word "superintelligence". My suspicion is that with current state-of-the-art object identification technology, video footage of a system of bouncing balls and pendulums and springs would be amenable to this kind of analysis. There may even be a research project in that proposition.
As far as extrapolating the behavior of a superintelligence from the behavior of the Cornell AI or the Adam robot goes, we should note that no human can look at a complex physical system for a few seconds and just write down the physical law or equation that it obeys. A simple narrow AI has already outperformed humans at one specific task, though it still cannot do most of what a scientist does. We should therefore update our beliefs to assign more weight to the hypothesis that on some particular narrow physical modelling task, a "superintelligence" would vastly outperform us. Personally I was surprised at what such a simple system can do, though with hindsight it is obvious: data from a physical system follows patterns, and statistics can identify those patterns. Science is not a magic ritual that only humans can perform; rather, it is a specific kind of algorithm, and we should expect no special injunction preventing silicon minds from carrying it out.
Yes, I was pointing out the significance of this pre-processing, not trying to imply you didn't mention it. "Would be harder to process" means they did most of the hard part before turning it over to the machine.
"Just"? I'm not sure you know what that words means ;-) The air functions as a thermodynamic reservoir ; you need precise equipment just to notice the change in air velocity and temperature, and even then, you've falling prey to exactly the criticism I made in my original comment. Simply by recognizing that temperature is relevant is itself difficult cognitive labor that you do for the machine. It can't be evidence of the machine's inferential capabilities except insofar as it has to account for one more variable.
And the more precise you have to be to notice this relevancy, the more cognitive labor you're doing for the machine.
First, they're going to ignore a nobody like me. But yes, I will stick my neck out on this one. If the same measurement equipment is used, the same variables recorded, and the same huge prior given to "look for invariants", I claim their method will "choke" (a term to be precisely defined later).
Okay, maybe that's not what you meant. You meant that if you're going to do even more of the cognitive labor for the machine by adding on equipment that notices the variables necessary to make conservation-of-energy approaches work, then it can still find the invariant and discover the equation of motion.
But my point is, when you, the human, focus the machine's "attention" on precisely those observations that help the machine compress its description of its data, it's not the machine doing the cognitive labor; it's you.
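To put a rough number on that labor (my own back-of-envelope illustration, not anything from the paper): the space of candidate invariants the machine must sweep grows steeply with the number of raw variables you hand it, so pre-selecting the two or three relevant ones is where most of the work happens.

```python
from math import comb

def candidate_count(n_vars, max_degree=2, n_funcs=4, n_terms=2, n_coeffs=7):
    """Rough count of candidate invariants: linear combinations of
    n_terms distinct terms, each a monomial of degree 1..max_degree
    over n_vars variables, optionally wrapped in one of n_funcs
    elementary functions, with coefficients from a small grid."""
    monomials = sum(comb(n_vars + d - 1, d) for d in range(1, max_degree + 1))
    terms = monomials * (n_funcs + 1)           # +1 for "no wrapper"
    return comb(terms, n_terms) * n_coeffs ** (n_terms - 1)

for n in (2, 5, 20, 200):
    print(f"{n:>4} raw variables -> ~{candidate_count(n):.1e} candidates")
```

Under these (made-up but conservative) settings, going from two hand-picked variables to two hundred raw sensor channels inflates the search by about seven orders of magnitude - and that is before anyone has built the equipment to measure air temperature and velocity in the first place.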
Short answer: ditto.
Long answer: I think the biological sciences have been poor about expressing their results in a form that is conducive to the kind of regularity detection that machines like the Eureka machine do.
And my point is that it flat out didn't, once you consider that the makers bypassed everything that humans had to do when discovering these laws and handed it to the algorithm as a neat package.
Given enough processing speed, sure. But the test for intelligence would normalize for elementary processing operations. That is, the machine is more intelligent if it didn't have to unnecessarily sweep through billions of longer hypotheses to get to the right one.
But hold on: if you truly do start from an untainted Occamian prior, you have to rule out many universes before you get to this one. In short, we don't actually want truly general intelligence. Rather, we want intelligence with a strong prior tilted toward the workings of this universe.
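Here is a toy rendering of that point, and of the previous comment's "normalize for elementary operations" test. Everything in it is illustrative - a made-up primitive alphabet, a made-up target law - but it shows the mechanism: the same cheapest-first search, run under a flat Occamian cost and under a cost tilted toward physics-flavored primitives, examines very different numbers of hypotheses before hitting the target.

```python
import itertools

# Physics-flavored primitives plus some irrelevant ones. TARGET is a
# stand-in for the true law (read it as "omega^2 + cos(theta)"); all
# names here are illustrative, not any real system's hypothesis language.
PHYSICS = ["+", "*", "w", "t", "sq", "sin", "cos"]
OTHER = ["if", "and", "or", "not", "cat"]
ALPHABET = PHYSICS + OTHER
TARGET = ("sq", "w", "+", "cos", "t")

# A finite stand-in for the infinite hypothesis space: all strings of
# primitives up to length 5.
space = [h for L in range(1, 6)
         for h in itertools.product(ALPHABET, repeat=L)]

def rank(cost):
    """How many hypotheses get examined, trying cheapest-first,
    before TARGET comes up."""
    return sorted(space, key=cost).index(TARGET) + 1

# Pure Occam: cost is just description length (ties broken lexically).
occam = lambda h: (len(h), h)
# Tilted: same, except physics-flavored primitives are cheaper than
# irrelevant ones, so physics-like hypotheses surface earlier.
tilted = lambda h: (sum(1 if s in PHYSICS else 3 for s in h), h)

print("pure Occam prior:     hypothesis #", rank(occam))
print("physics-tilted prior: hypothesis #", rank(tilted))
```

Both orderings find the same law eventually; the tilted one just doesn't wade through as many dead hypotheses first - which is the sense in which we want a prior tilted toward this universe rather than a fully "untainted" one.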
But it did do something faster than a human could have done. I don't claim that it invented physics: I claim that it quickly discovered the conserved quantities for a particular system, albeit a system that was chosen in advance to be easy. But if I gave you the raw data that it had and asked you to write down a conserved quantity by hand, it would take you years.