SforSingularity comments on Minds that make optimal use of small amounts of sensory data - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Yes, I was pointing out the significance of this pre-processing, not trying to imply you didn't mention it. "Would be harder to process" means they did most of the hard part before turning it over to the machine.
"Just"? I'm not sure you know what that words means ;-) The air functions as a thermodynamic reservoir ; you need precise equipment just to notice the change in air velocity and temperature, and even then, you've falling prey to exactly the criticism I made in my original comment. Simply by recognizing that temperature is relevant is itself difficult cognitive labor that you do for the machine. It can't be evidence of the machine's inferential capabilities except insofar as it has to account for one more variable.
And the more precise you have to be to notice this relevancy, the more cognitive labor you're doing for the machine.
First, they're going to ignore a nobody like me. But yes, I will stick my neck out on this one. If the same measurement equipment is used, the same variables recorded, and the same huge prior given to "look for invariants", I claim their method will choke (to be precisely defined later).
Okay, maybe that's not what you meant. You meant that if you're going to do even more of the cognitive labor for the machine by adding on equipment that notices the variables necessary to make conservation-of-energy approaches work, then it can still find the invariant and discover the equation of motion.
But my point is, when you, the human, focus the machine's "attention" on precisely those observations that help the machine compress its description of its data, it's not the machine doing the cognitive labor; it's you.
Short answer: ditto.
Long answer: I think the biological sciences have been poor about expressing their results in a form that is conducive to the kind of regularity detection that machines like the Eureka machine do.
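To make the "look for invariants" prior concrete: here is a toy sketch (emphatically not the actual Eureqa algorithm, and all the candidate expressions are made up for illustration) of what regularity detection looks like once a human has already picked the variables. We simulate a pendulum, record (theta, omega), and score candidate expressions by how much they vary over the trajectory; the conserved-energy combination stands out by having a tiny spread.

```python
import math

# Toy sketch of invariant detection (NOT the actual Eureqa algorithm):
# simulate a pendulum, then test which candidate expression of the
# recorded variables stays (nearly) constant over the trajectory.

def simulate_pendulum(theta0=1.0, omega0=0.0, g_over_l=9.8, dt=1e-4, steps=200_000):
    """Semi-implicit Euler integration; returns sampled (theta, omega) pairs."""
    theta, omega = theta0, omega0
    samples = []
    for i in range(steps):
        omega -= g_over_l * math.sin(theta) * dt
        theta += omega * dt
        if i % 1000 == 0:
            samples.append((theta, omega))
    return samples

# Hypothetical candidate expressions a search might enumerate.
candidates = {
    "omega**2":                    lambda th, om: om**2,
    "theta + omega":               lambda th, om: th + om,
    "omega**2/2 - 9.8*cos(theta)": lambda th, om: om**2 / 2 - 9.8 * math.cos(th),
}

data = simulate_pendulum()
for name, f in candidates.items():
    values = [f(th, om) for th, om in data]
    spread = max(values) - min(values)
    print(f"{name:30s} spread = {spread:.4f}")
```

Note that the hard part — choosing to record theta and omega at all, and restricting the candidate grammar to trig and polynomials — is exactly the pre-processing the human did before the machine ever ran.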
And my point is that it flat-out didn't, once you consider that the makers bypassed everything that humans had to do when discovering these laws and handed it as a neat package to the algorithm.
Given enough processing speed, sure. But the test for intelligence would normalize for elementary processing operations. That is, the machine is more intelligent if it didn't have to unnecessarily sweep through billions of longer hypotheses to get to the right one.
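The "normalize for elementary operations" idea can be illustrated with a toy shortest-first enumerator (the alphabet and targets here are made up for illustration): the cost of a hypothesis is how many candidates the machine must sweep through before reaching it, so a longer target is exponentially more expensive under a naive Occamian ordering.

```python
from itertools import product

# Toy illustration of normalizing intelligence by elementary operations:
# count how many candidates a shortest-first enumeration sweeps through
# before it reaches a given target hypothesis.

ALPHABET = "abc"

def shortest_first():
    """Yield all strings over ALPHABET in order of increasing length."""
    length = 1
    while True:
        for tup in product(ALPHABET, repeat=length):
            yield "".join(tup)
        length += 1

def cost_to_find(target):
    """Number of hypotheses enumerated before (and including) the target."""
    for count, hyp in enumerate(shortest_first(), start=1):
        if hyp == target:
            return count

print(cost_to_find("a"))    # found immediately
print(cost_to_find("cab"))  # must first sweep all shorter hypotheses
```

A machine with a better-tuned prior would reorder this stream so the right hypothesis arrives early, which is the sense in which it is "more intelligent" even at the same raw processing speed.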
But hold on: if you truly do start from an untainted Occamian prior, you have to rule out many universes before you get to this one. In short, we don't actually want truly general intelligence. Rather, we want intelligence with a strong prior tilted toward the workings of this universe.
... no?
>:-(