I think the whole point in AI research is to do something, not find out how humans do something.
Depends on who's doing the research and why. You're right that companies that want to sell software care about solving the problem, which is why that type of approach is so common. On the other hand, I'm reluctant to call a mostly brute-forced solution "AI research", even if it's useful computer programming.
When mysterious things cease to be mysterious, they'll tend to resemble the way "X" works.
No, I think you're missing my point. X is uninteresting not because it is no longer mysterious, but because it has no large-scale structure and patterns. We could consider another novel-writing program Z that writes novels in some other interesting and complicated way that's different from how humans do it, but still has a rich and detailed structure.
Continuing with the flight analogy: rockets, helicopters, planes, and birds all have interesting ways of flying, whereas the "brute force" approach to flight, throwing a rock really really hard, is not that interesting.
Another example: optical character recognition. One approach is to have a database of hundreds of different fonts, put a grid on each character from each font, and come up with a statistical measure that figures out how close the scanned image is to each stored character by looking at the pixels that they have in common. This works and produces useful software, but that approach doesn't actually care about the different letterforms and shapes involved with them. It doesn't recognize that structure, even though that's what the problem is about.
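To make the template-matching approach concrete, here is a minimal sketch in Python. The 5x5 glyph bitmaps and the character set are made-up illustrations, not real font data; a real system would store many fonts at much higher resolution, but the "count the pixels in common" statistic is the same idea.

```python
# Toy sketch of template-matching OCR: store a bitmap per character,
# score a scanned glyph by pixel agreement, pick the best match.
# The 5x5 bitmaps below are invented for illustration only.

TEMPLATES = {
    "I": ["11111", "00100", "00100", "00100", "11111"],
    "L": ["10000", "10000", "10000", "10000", "11111"],
    "T": ["11111", "00100", "00100", "00100", "00100"],
}

def overlap_score(scanned, template):
    """Fraction of grid cells where the scan and template agree."""
    cells = [s == t
             for srow, trow in zip(scanned, template)
             for s, t in zip(srow, trow)]
    return sum(cells) / len(cells)

def classify(scanned):
    """Return the stored character whose pixels best match the scan."""
    return max(TEMPLATES, key=lambda ch: overlap_score(scanned, TEMPLATES[ch]))

# A noisy "T": one pixel flipped in the top row.
noisy_t = ["11011", "00100", "00100", "00100", "00100"]
print(classify(noisy_t))  # → T
```

Note that nothing here knows what a stem, bowl, or serif is; the program compares raw pixel grids, which is exactly the sense in which this approach ignores the structure of letterforms.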
I like "AI-complete", though it wouldn't surprise me if general symbol recognition and interpretation is easier than natural language, whereas all NP-complete problems are equivalent (under polynomial-time reduction).
I kept my initial comment technical, without delving into the philosophical aspects of it, but now I can ramble a bit.
I suspect that general symbol recognition and interpretation is AI-complete, because of these issues of context, world knowledge, and quasi-unsupervised online learning.
I believe there is a generalized learning algorithm (or set of algorithms) that uses (at minimum) frequencies and in-built biological heuristics, which we use to approach the world. In this view, natural language generation and understanding is one manifestation of this more general learning system (or constantly updating pattern recognition, if you like, though I think there may be more to it than simple recognition). Symbol recognition and interpretation is another.
"Recognition" and "interpretation" are themselves slippery words that hide the how and the what of what it is we do when we see a symbol. Computational linguists and psycholinguists have done a good job of demonstrating that we know very little of what we're actually doing when we process visual and auditory input.
You are right that AI-complete probably hides finer-grained equivalence classes, wrapped up in the messy issue of what we mean by intelligence. Still, it's a handy shorthand for problems that may require this more general learning facility, about which we understand very little.