In "That Alien Message", Eliezer made some pretty wild claims:
My moral - that even Einstein did not come within a million light-years of making efficient use of sensory data.
Riemann invented his geometries before Einstein had a use for them; the physics of our universe is not that complicated in an absolute sense. A Bayesian superintelligence, hooked up to a webcam, would invent General Relativity as a hypothesis - perhaps not the dominant hypothesis, compared to Newtonian mechanics, but still a hypothesis under direct consideration - by the time it had seen the third frame of a falling apple. It might guess it from the first frame, if it saw the statics of a bent blade of grass.
They never suspected a thing. They weren't very smart, you see, even before taking into account their slower rate of time. Their primitive equivalents of rationalists went around saying things like, "There's a bound to how much information you can extract from sensory data." And they never quite realized what it meant, that we were smarter than them, and thought faster.
In the comments, Will Pearson asked for "some form of proof of concept". It seems that researchers at Cornell - Schmidt and Lipson - have done exactly that. See their video on Guardian Science:
'Eureka machine' can discover laws of nature - The machine formulates laws by observing the world and detecting patterns in the vast quantities of data it has collected
Researchers at Cambridge and Aberystwyth have gone one step further and implemented an AI system/robot to perform scientific experiments:
Researchers at Aberystwyth University in Wales and England's University of Cambridge report in Science today that they designed Adam - they describe how the bot operates by relating how he carried out one of his tasks, in this case to find out more about the genetic makeup of baker's yeast Saccharomyces cerevisiae, an organism that scientists use to model more complex life systems. Using artificial intelligence, Adam hypothesized that certain genes in baker's yeast code for specific enzymes that catalyze biochemical reactions. The robot devised experiments to test these beliefs, ran the experiments, and interpreted the results.
The crucial question is: what can we learn about the likely effectiveness of a "superintelligent" AI from the behavior of these AI programs? First of all, let us be clear: this AI is *not* a "superintelligence", so we shouldn't expect it to perform at that level. The problem we face is analogous to extrapolating how fast an Olympic sprinter can run from watching a baby crawl around on the floor. Furthermore, the Cornell machine was given a physical system specifically chosen to be easy to analyze, and a representation (equations) known to be suited to the problem.
We can certainly state that the program analyzed the data much faster than any human could have. In a running time probably measured in minutes or hours, it took a huge stream of raw position and velocity data and found the underlying conserved quantities. And given likely algorithmic optimizations and another 10 years of Moore's law, we can safely say that in 10 years' time that particular program will run in seconds on a $500 machine, or in milliseconds on a supercomputer. These results actually surprise me: an AI can automatically and almost instantly analyze a physical system (albeit a rigged one).
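To give a flavor of what "finding the conserved quantities" involves, here is a toy sketch in Python. To be clear, this is emphatically *not* Schmidt and Lipson's algorithm (theirs searches a far richer space of symbolic expressions against real data); it just simulates an ideal pendulum and brute-forces small integer combinations of a hand-picked basis, keeping whichever expression stays most nearly constant along the trajectory. Every parameter and basis term below is my own assumption:

```python
# Toy invariant hunt on simulated pendulum data -- NOT Schmidt and Lipson's
# algorithm, just an illustration of the idea. All parameters, the integrator,
# and the basis terms are assumed for this sketch.
import itertools
import math

g, L, dt = 1.0, 1.0, 0.001  # assumed units: g/L = 1 keeps coefficients small

# Simulate a frictionless pendulum in (theta, omega) with semi-implicit Euler,
# which keeps the energy error bounded; record every 10th state.
theta, omega = 1.2, 0.0
trajectory = []
for step in range(20000):
    omega += -(g / L) * math.sin(theta) * dt
    theta += omega * dt
    if step % 10 == 0:
        trajectory.append((theta, omega))

# A tiny hand-picked basis of candidate terms -- this choice is itself a big
# chunk of the cognitive labor.
basis = {
    "omega^2": lambda th, om: om * om,
    "cos(theta)": lambda th, om: math.cos(th),
    "theta^2": lambda th, om: th * th,
    "omega": lambda th, om: om,
}

def relative_spread(values):
    """Standard deviation over |mean|: lower means more nearly constant."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    return math.sqrt(var) / (abs(mean) + 1e-9)

# Brute-force small integer combinations of two basis terms (single terms all
# visibly vary) and keep whichever combination varies least.
best_score, best_name = float("inf"), None
for (n1, f1), (n2, f2) in itertools.combinations(basis.items(), 2):
    for a, b in itertools.product(range(-3, 4), repeat=2):
        if a == 0 or b == 0:
            continue
        values = [a * f1(th, om) + b * f2(th, om) for th, om in trajectory]
        score = relative_spread(values)
        if score < best_score:
            best_score, best_name = score, f"{a}*{n1} + {b}*{n2}"

# The winner comes out proportional (up to sign) to omega^2 - 2*cos(theta),
# i.e. the pendulum's energy.
print(f"most invariant candidate: {best_name} (spread {best_score:.5f})")
```

Note how much of the "discovery" is smuggled in by my choice of state variables and basis terms; the real system has to earn at least some of that from the data.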
But, of course, one has to ask: how much more narrow-AI work would it take to actually look at video of some bouncing, falling and whirling objects and deduce a general physical law such as the earth's gravity and the laws governing air resistance, where the objects are not hand-picked to be easy to analyze? This is unclear. But I can see mechanisms whereby this would work, rather than merely having to submit to the overwhelming power of the word "superintelligence". My suspicion is that with current state-of-the-art object identification technology, video footage of a system of bouncing balls and pendulums and springs would be amenable to this kind of analysis. There may even be a research project in that proposition.
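To make that suspicion concrete, here is the sort of front end I have in mind, as a purely hypothetical sketch: track a single bright object through footage, then finite-difference its positions into velocities that could feed the equation-finding stage. The file name, brightness threshold, and single-object assumption are all mine, not anything from the Cornell work, and it assumes the OpenCV library is available:

```python
# Hypothetical video front end: track one bright object and turn frames into
# (x, y) positions and finite-difference velocities. All specifics assumed.
import cv2

cap = cv2.VideoCapture("bouncing_ball.mp4")  # hypothetical footage
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0      # fall back if fps is unknown

positions = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Crude segmentation: assume the object is the brightest thing in frame.
    _, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
    m = cv2.moments(mask)
    if m["m00"] > 0:  # centroid of the bright pixels
        positions.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
cap.release()

# Finite-difference velocities: streams of (x, y, vx, vy) like these are
# exactly what the Cornell system was handed directly.
dt = 1.0 / fps
velocities = [((x2 - x1) / dt, (y2 - y1) / dt)
              for (x1, y1), (x2, y2) in zip(positions, positions[1:])]
```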
As far as extrapolating the behavior of a superintelligence from the behavior of the Cornell AI or the Adam robot goes, we should note that no human can look at a complex physical system for a few seconds and just write down the physical law or equation it obeys. A simple narrow AI has already outperformed humans at one specific task, though it still cannot do most of what a scientist does. We should therefore update our beliefs to assign more weight to the hypothesis that, on some particular narrow physical modelling task, a "superintelligence" would vastly outperform us. Personally I was surprised at what such a simple system can do, though with hindsight it is obvious: data from a physical system follows patterns, and statistics can identify those patterns. Science is not a magic ritual that only humans can perform; rather, it is a specific kind of algorithm, and we should expect no special injunction barring silicon minds from running it.
I was skeptical of Eliezer Yudkowsky's assertion then, I'm skeptical of the project in the Guardian link, and I'm still skeptical.
"But what's there to be skeptical about? The results are there for you to see!"
Er, kind of. One way you can produce artificial results in this field is to give the machine 89 of the 90 bits of the right hypothesis, where those 89 bits are the ones humans are pretty much born with, and then act surprised that it finds the 90th.
Two years ago, I saw a cool video on YouTube of a starfish robot that models itself and figures out how to move, supposedly an example of a self-aware machine that learns how to walk. Now, the machine is very impressive -- it actually looks alive.
But the reality is less interesting. It turns out that the builders fed it almost all of the correct model of itself, and all the robot had to do was solve for a few remaining parameters, then try some techniques heavily biased toward what would succeed. Interesting work (it's still in my YT favorites), but far from machine self-awareness and discovery of novel modes of locomotion.
I hope you can see where this is going: when you go to the link at the end of the Guardian video, yep, it's the same group.
The Eureka machine is, in a way, an example of the artificial results I described above. Notice how much cognitive labor the Cornell team does for the machine. First, they recognize that the huge amount of raw visual data can be concisely, losslessly compressed into a few variables. In other words, of all the parts of the visual field that move, they have recognized how many of those degrees of freedom are constrained, and so don't need to be included in a variable list that fully describes what's going on.
Second, they picked a system with heavy components and a run short enough that you don't have to worry about energy loss to aerodynamic drag. No drag terms appear in the equations the machine discovered; had drag mattered, it would have really put a crimp in the machine's ability to find conservation laws. Remember, one reason it took natural philosophers so long to notice the laws of motion is that air complicates things. You don't get to see the regularity until you can focus on celestial bodies, small dense objects, and vacuums -- the last being a difficult engineering problem with pre-Scientific Revolution technology.
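To see how much work that choice does, here is a toy illustration (everything in it is assumed, not taken from the paper): give a simulated pendulum a quadratic drag term, and the textbook energy expression stops being conserved, so an invariant search over drag-free expressions comes up empty-handed:

```python
# Toy pendulum with a quadratic air-drag term (all parameters assumed). The
# usual energy expression now decays instead of staying constant.
import math

g, L, c, dt = 1.0, 1.0, 0.2, 0.001  # c is the drag coefficient
theta, omega = 1.2, 0.0
energies = []
for _ in range(20000):
    drag = -c * omega * abs(omega)  # opposes motion, grows with speed
    omega += (-(g / L) * math.sin(theta) + drag) * dt
    theta += omega * dt
    energies.append(0.5 * omega ** 2 - (g / L) * math.cos(theta))

# dE/dt = -c*|omega|^3 <= 0, so the would-be invariant drains away steadily.
print(f"'energy' at start: {energies[0]:+.3f}, at end: {energies[-1]:+.3f}")
```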
Third, they told it to look for invariants (conservation laws). Now, that's actually fair, because it's a rule you could feed a general-use AI. However, pick an average situation in your life. How hard is it to notice the invariants? Normally, that heuristic is not very good (unless you already know what to look for), but they gave it this heuristic in a situation pre-selected for its usefulness.
Remember, noticing the right hypothesis is half the battle. Once you've done enough to even bring the hypothesis to your attention, most of the cognitive labor is done.
This is impressive work, but, well, let's not get ahead of ourselves.
Vacuums and telescopes are Renaissance tech, it's true. Wikipedia tells me that the first laboratory vacuum was built in the year after Galileo's death, so I think we can rule out the relevance of vacuums. (Galileo did say ...