Wilka comments on Open Thread: July 2009 - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
What are some examples of recent progress in AI?
In several of Eliezer's talks, such as this one, he's mentioned that AI research has been progressing at around the expected rate for problems of similar difficulty. He also mentioned that we've reached around the intelligence level of a lizard so far.
Ideally I'd like to have some examples I can give to people when they say things like "AI is never going to work" - the only examples I've been able to come up with so far have been AI in games, but they don't seem to think that counts because "it's just a game".
The Roomba is an example that seems to get a bit more respect (although it seems like a much simpler problem than many game AIs to me), but after that I pretty much run out of examples. Maybe I'm just not thinking hard enough because a lot of AI isn't called AI when it becomes mainstream?
Examples that are more 'geeky' would also be good for me, even if they would be dismissed by non-geeky people I meet.
I see 7 upvotes but no answers. Should I conclude that even those who think AI is attainable find nothing to boast of in the record so far?
I usually cite the DARPA Grand Challenge, which I gather was won using such advanced modern methods as particle filtering (a Bayesian technique).
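For anyone curious what "particle filtering" actually amounts to, here's a minimal sketch of one step of a bootstrap particle filter in Python, applied to a toy 1-D localization problem. This is just an illustration of the Bayesian technique, not anything like the actual Grand Challenge code; all the names and noise parameters here are made up for the example.

```python
import random
import math

def particle_filter_step(particles, control, measurement,
                         motion_noise=0.5, sensor_noise=1.0):
    """One predict/update/resample cycle of a bootstrap particle filter
    for a 1-D robot estimating its position from noisy readings."""
    # Predict: move every particle by the control input plus motion noise.
    moved = [p + control + random.gauss(0, motion_noise) for p in particles]
    # Update: weight each particle by the likelihood of the measurement.
    weights = [math.exp(-((m - measurement) ** 2) / (2 * sensor_noise ** 2))
               for m in moved]
    total = sum(weights) or 1e-12
    weights = [w / total for w in weights]
    # Resample: draw a new particle set proportional to the weights.
    return random.choices(moved, weights=weights, k=len(moved))

random.seed(0)
particles = [random.uniform(0, 10) for _ in range(1000)]
true_pos = 3.0
for _ in range(20):
    true_pos += 1.0  # the robot actually moves one unit per step
    noisy_reading = true_pos + random.gauss(0, 1.0)
    particles = particle_filter_step(particles, control=1.0,
                                     measurement=noisy_reading)
estimate = sum(particles) / len(particles)
print(round(estimate, 1))  # close to the true position (23.0)
```

The point is that the whole method is a few lines of mechanical arithmetic over a cloud of hypotheses, yet it was good enough to help a car drive itself across a desert.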
Last time I read much about computer chess, the better programs were still relying primarily on brute-force search with some minor algorithmic optimizations to prune the search space, together with enormous databases for openings and endgames. Are there actually chess programs nowadays that deserve to be called intelligent?
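To make concrete what "brute-force search with pruning" means here, this is a sketch of minimax with alpha-beta pruning, the core search at the heart of classic chess engines. The toy game tree is invented for the example (real engines search positions, not nested lists, and add move ordering, transposition tables, and so on):

```python
def alphabeta(node, depth, alpha, beta, maximizing):
    """Minimax search with alpha-beta pruning: exhaustively explore the
    game tree, skipping branches the opponent would never allow."""
    # Leaf: in a real engine this would be a static position evaluation;
    # here the toy tree just stores scores directly.
    if depth == 0 or isinstance(node, int):
        return node
    if maximizing:
        value = -float("inf")
        for child in node:
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # prune: this line is already refuted
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

# A toy two-ply game tree: nested lists with leaf scores.
tree = [[3, 5], [6, 9], [1, 2]]
print(alphabeta(tree, 2, -float("inf"), float("inf"), True))  # 6
```

Pruning doesn't change the answer, it only avoids work, which is why it's fair to call it an optimization of brute force rather than a different kind of reasoning.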
Your first point -- that you can be easily killed or checkmated by a sufficiently powerful program regardless of how it is implemented -- is true but irrelevant: the question was not whether the program is powerful and effective (which I would not dispute) but whether it deserves to be called intelligent. You can say that whether it is intelligent is unimportant and that what matters is how effective it is, but it is wrong to conflate the two questions and pretend that an answer to one is an answer to the other, unless you explicitly argue that they are equivalent in some way.
I would argue that a problem domain where brute-force search with simple optimizations actually works extremely well is a problem domain that does not require intelligence. If brute-force search with a few optimizations is intelligent, then a program for factoring numbers is an artificial intelligence.
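To illustrate the point: here is a complete number-factoring program by brute-force trial division. It works perfectly within its domain, and nobody would call it intelligent in any interesting sense.

```python
def factor(n):
    """Factor n into primes by brute-force trial division: try every
    divisor in order. Exhaustive, effective, and entirely unintelligent."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)  # whatever remains is prime
    return factors

print(factor(360))  # [2, 2, 2, 3, 3, 5]
```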
I don't have a criterion for intelligence in mind, but like porn, "I know it when I see it". We might disagree about edge cases, but almost all of us will agree that a number factoring program isn't "intelligent" in any interesting sense of the term. That's not to say that it might not be fantastically effective, or that a similarly dumb program with weapons as actuators might not be a formidable foe, but it's a different question to that of intelligence.
And the reason for that is simple - the real working definition of "intelligence" in our brains is something like, "that invisible quality our built-in detectors label as 'mind' or 'agency'". That is, intelligence is an assumed property of things that trip our "agent" detector, not a real physical quality.
Intuitively, we can only think of something as being intelligent to the extent that it seems "animate". If we discover that the thing is not "animate", our built-in detectors stop considering it an agent... in much the same way that we stopped believing in wind spirits after figuring out the weather. Historically, this detector is what let us discern an accidental branch movement from the activity of an intelligent predator-agent.
So, even though a person without the appropriate understanding might perceive a thermostat as displaying intelligent behavior, as soon as they understand the thermostat's workings as a mechanical device, the brain stops labeling it as animate, and therefore considers it to be not "intelligent" any more.
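The thermostat example is worth spelling out, because the entire "behavior" fits in a few lines. This is a generic sketch of bang-bang control with hysteresis (the parameter names are mine, not any particular device's):

```python
def thermostat(temperature, setpoint, heater_on, hysteresis=0.5):
    """Bang-bang control with hysteresis: the complete 'mind' of a
    thermostat. Seen as mechanism, nothing here reads as agency."""
    if temperature < setpoint - hysteresis:
        return True   # too cold: turn the heater on
    if temperature > setpoint + hysteresis:
        return False  # too warm: turn the heater off
    return heater_on  # within the deadband: keep the current state

print(thermostat(18.0, 20.0, heater_on=False))  # True
print(thermostat(21.0, 20.0, heater_on=True))   # False
```

Before you read those lines, the thermostat "wants" the room warm; after, it's just a comparison and a switch. The behavior didn't change -- only the label your brain assigns to it did.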
This is one reason why it's really hard for truly reductionist psychologies to catch on: the brain resists grasping itself as mechanical, and insists on projecting "intelligence" onto its own mechanical processes. (Which is why we have oxymoronic terms like "unconscious mind", and why the first response many people have to PCT ideas is that their controllers are hostile entities trying to "control" them in the way a human agent might, rather than as a thermostat does.)
So, AI will always be in retreat, because anything we can understand mechanically, our brain will refuse to grant that elusive label of "mind". To our brains, something mechanically grasped cannot be an agent. (Which may lead to interesting consequences when we eventually fully grasp ourselves.)
You are wrong. Factoring large numbers has never been considered the pinnacle of true intelligence. Find me a reference if you expect me to believe that, circa 1859, something so simple was considered the pinnacle of anything.
I completely agree with the moving-goalposts critique, and I think there is good AI and there has been great progress. But when you find yourself defending the idea that a program that factors numbers is a good example of artificial intelligence, alarm bells should start ringing, regardless of whether you are talking about intelligence or optimization.
You said it was "considered to be the pinnacle of intelligence" 150 years ago, that is, almost 150 years after calculus was invented, and now you're interpreting that as meaning "a person on the street would think that intelligent." And you said I was moving goalposts?
It is a bad example, but it's a bad example because we could explain the algorithm to somebody in about 5 minutes.
I don't think we disagree. I just think that if chess programs are no more sophisticated now than they were 5 or 10 years ago, then they're poor examples of intelligence.