Eliezer_Yudkowsky comments on Minds that make optimal use of small amounts of sensory data - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I was skeptical of Eliezer_Yudkowsky's assertion then. I'm skeptical of the work of the project in the Guardian link. And I'm still skeptical.
"But what's there to be skeptical about? The results are there for you to see!"
Er, kind of. One way you can produce artificial results in this field is to give the machine 89 of the 90 bits of the right hypothesis, where those 89 bits are the ones humans are pretty much born with, and then act surprised that it finds the 90th.
Two years ago, I saw a cool video on YouTube of a starfish robot that models itself and figures out how to move, supposedly an example of a self-aware machine that learns how to walk. Now, the machine is very impressive -- it actually looks alive.
But the reality is less interesting. It turns out that the builders fed it almost all of the correct model of itself, and all the robot had to do was solve for a few remaining parameters, then try some techniques heavily biased toward what would succeed. Interesting work (it's still in my YT favorites), but far from machine self-awareness and discovery of novel modes of locomotion.
I hope you can see where this is going: when you go to the link at the end of the Guardian video, yep, it's the same group.
The Eureka machine is, in a way, an example of the artificial results I described above. Notice how much cognitive labor the Cornell team does for the machine. First, they recognize that the huge amount of raw visual data can be concisely, losslessly compressed into a few variables. In other words, even given all the parts of the visual field that move, they have recognized how many of those degrees of freedom are constrained, and so don't need to be included in a variable list that fully describes what's going on.
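To make the compression point concrete, here's a toy illustration of my own (not the Cornell group's pipeline, and the pendulum and link length are my assumptions): a tracked pendulum bob yields two coordinates per frame, but the rigid-link constraint means a single angle losslessly describes the state.

```python
import math

# Hypothetical toy example (not the actual Cornell system): a pendulum bob
# tracked in 2-D gives two coordinates per frame, but the rigid-link
# constraint x^2 + y^2 = L^2 removes one degree of freedom, so one angle
# fully describes each frame.

L = 1.0  # assumed link length


def bob_position(theta):
    """Cartesian position of the bob for a given angle."""
    return (L * math.sin(theta), -L * math.cos(theta))


angles = [0.1 * k for k in range(-10, 11)]
positions = [bob_position(t) for t in angles]

# The constraint holds at every frame, so the radial coordinate carries
# no information...
for x, y in positions:
    assert abs(math.hypot(x, y) - L) < 1e-12

# ...and the single angle is recoverable exactly from the two coordinates.
recovered = [math.atan2(x, -y) for x, y in positions]
for t, r in zip(angles, recovered):
    assert abs(t - r) < 1e-12
```

Noticing which coordinates are redundant is exactly the step the humans did by hand before the machine ever saw the data.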
Second, they picked a system with heavy components and a short enough duration that you don't have to worry about energy loss due to aerodynamic drag. Such terms were not in the equations the machine discovered, and their presence would have really put a crimp in its ability to find conservation laws. Remember, one reason it took so long for natural philosophers to notice the laws of motion is that air complicates things. You don't get to see the regularity until you can focus on celestial bodies, dense or small objects, and vacuums -- the last being a difficult engineering problem to create in a lab with pre-Scientific-Revolution technology.
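A quick toy simulation (my own sketch, with assumed parameters -- nothing to do with the actual experiment) shows why: integrate a simple pendulum with and without a linear drag term, and the energy-like quantity the machine would be hunting for is only invariant in the drag-free case.

```python
import math

# Sketch of why drag ruins conservation laws (assumed toy model, not the
# paper's system): a pendulum integrated with semi-implicit Euler.
# E = 0.5*w^2 - (g/L)*cos(theta) is conserved only when drag is absent.

g, L, dt, steps = 9.8, 1.0, 1e-3, 10_000  # assumed parameters


def final_energy(drag):
    theta, w = 0.5, 0.0  # start at rest, displaced by 0.5 rad
    for _ in range(steps):
        w += (-(g / L) * math.sin(theta) - drag * w) * dt  # drag term
        theta += w * dt
    return 0.5 * w * w - (g / L) * math.cos(theta)


E0 = -(g / L) * math.cos(0.5)          # initial energy (starts at rest)
E_vacuum = final_energy(drag=0.0)      # stays near E0: an invariant exists
E_air = final_energy(drag=0.5)         # decays: no invariant to find

print(E0, E_vacuum, E_air)
```

With drag in the picture, the "conserved quantity" simply isn't there in the data, so choosing a heavy, short-duration apparatus quietly does that part of the work for the machine.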
Third, they told it to look for invariants (conservation laws). Now, that's actually fair, because it's a rule you could feed a general-use AI. However, pick an average situation in your life. How hard is it to notice the invariants? Normally, that heuristic is not very good (unless you already know what to look for), but they gave it this heuristic in a situation pre-selected for its usefulness.
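A stripped-down version of that heuristic (my simplification of the idea, not the Cornell algorithm; the mass-spring system and scoring rule are my assumptions) is: generate trajectory data, then rank candidate expressions by how nearly constant they stay along the trajectory.

```python
import math
import statistics

# Toy version of the "look for invariants" heuristic (my simplification,
# not the actual Cornell algorithm): score candidate expressions by how
# constant they stay over trajectory data; a true invariant scores ~0.

k, dt, steps = 4.0, 1e-3, 5_000  # assumed spring constant / step size

# Semi-implicit Euler for a unit mass on a spring: x'' = -k x.
x, v, data = 1.0, 0.0, []
for _ in range(steps):
    v += -k * x * dt
    x += v * dt
    data.append((x, v))

candidates = {
    "x^2":             lambda x, v: x * x,
    "v^2":             lambda x, v: v * v,
    "v^2/2 + k*x^2/2": lambda x, v: 0.5 * v * v + 0.5 * k * x * x,
}

# Score = standard deviation along the trajectory.
scores = {name: statistics.pstdev([f(x, v) for x, v in data])
          for name, f in candidates.items()}
best = min(scores, key=scores.get)
print(best, scores)
```

The heuristic nails it here because the data were generated from a frictionless system where an invariant is guaranteed to exist -- which is precisely the pre-selection I'm complaining about. Point it at a messy everyday situation and the same scoring rule turns up nothing.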
Remember, noticing the right hypothesis is half the battle. Once you've done enough to even bring the hypothesis to your attention, most of the cognitive labor is done.
This is impressive work, but, well, let's not get ahead of ourselves.
I agree with Silas Barta that the data cited is not support for what I said a Bayesian superintelligence could do. This is 5% intelligence and 95% rigged demo. A lot of AI work is like that.
Not support, or just not very much support. Surely Univac's superiority over humans at arithmetic and the strength of a tractor are some support.