Claim: The first human-level AIs are not likely to undergo an intelligence explosion.
1) Brains have a ton of computational power: ~86 billion neurons and trillions of connections between them. Unless there's a "shortcut" to intelligence, we won't be able to efficiently simulate a brain for a long time. http://io9.com/this-computer-took-40-minutes-to-simulate-one-second-of-1043288954 describes one of the largest computers in the world taking 40 minutes to simulate 1 second of brain activity (i.e., this "AI" would think 2400 times slower than you or me). The first AIs are not likely to be fast thinkers.
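Just to show where the 2400x figure comes from (numbers are from the linked article; the neuron/synapse counts above are rough estimates anyway):

```python
# Back-of-the-envelope check of the slowdown figure: 40 minutes of wall-clock
# time to simulate 1 second of brain activity.
simulated_seconds = 1      # brain activity simulated
wall_clock_minutes = 40    # time the supercomputer needed
slowdown = wall_clock_minutes * 60 / simulated_seconds
print(slowdown)  # -> 2400.0: such an "AI" thinks 2400x slower than a human
```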
2) Being able to read your own source code does not mean you can self-modify. You know that you're made of DNA. You can even get your own "source code" for a few thousand dollars. No humans have successfully self-modified into an intelligence explosion; the idea seems laughable.
3) Self-improvement is not like compound interest: if an AI comes up with an idea to modify its source code to make itself smarter, that doesn't automatically mean it will have a new idea tomorrow. In fact, as it picks off the low-hanging fruit, new ideas will probably get harder and harder to think of. There's no guarantee that "how smart the AI is" will keep up with "how hard it is to think of ways to make the AI smarter"; to me, that seems very unlikely.
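To make point 3 concrete, here's a toy model (every number is invented, this is an illustration of the dynamic, not a prediction): each improvement idea makes the AI smarter, but each new idea is also harder to find than the last. If idea difficulty grows faster than intelligence does, self-improvement fizzles instead of exploding.

```python
# Toy model of self-improvement with diminishing returns (all numbers made up).
intelligence = 1.0       # how smart the AI is
effort_per_idea = 1.0    # how hard the next improvement is to think of
for idea in range(20):
    intelligence *= 1.10      # each idea gives a 10% intelligence boost...
    effort_per_idea *= 1.25   # ...but each new idea is 25% harder to find

# Difficulty compounds at 25%/idea while intelligence only compounds at
# 10%/idea, so the gap widens and progress grinds down: no explosion.
print(intelligence, effort_per_idea)
```

Of course you can pick numbers where it does explode; the point is just that "can self-modify" doesn't by itself tell you which regime you're in.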
In understanding how intelligence works? No.
Deep Blue just brute-forces the game tree (more or less). Obviously, that is not at all how humans play chess. Deep Blue's evaluation of a specific position is the more "intelligent" part, but the evaluation function was hard-coded by the programmers. Deep Blue didn't think of it.
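The shape of that approach fits in a few lines: search every line of play to some depth, and score the leaves with a hand-written evaluation function. (This is a sketch of the general minimax idea, not Deep Blue's actual code, and the "game" here is a stand-in rather than chess.)

```python
# Minimal minimax sketch: exhaustive game-tree search plus a hard-coded
# evaluation function. All the "intelligence" lives in evaluate(), and it
# is written by the programmers, not discovered by the machine.

def evaluate(state):
    # Hand-coded scoring. For the toy game below, bigger number = better.
    return state

def minimax(state, depth, maximizing, children):
    moves = children(state)
    if depth == 0 or not moves:
        return evaluate(state)
    scores = [minimax(m, depth - 1, not maximizing, children) for m in moves]
    return max(scores) if maximizing else min(scores)

# Toy game: from n you can move to n+1 or n*2 (until n >= 8); look 3 plies ahead.
best = minimax(1, 3, True, lambda n: [n + 1, n * 2] if n < 8 else [])
print(best)  # -> 6
```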
Watson can "read", which is pretty cool. But:
1) It doesn't read very well. It can't even parse English. It just looks for concepts near each other, and it turns out that the vast quantities of data override how terrible it is at reading.
2) We don't really understand how Watson works. The model a machine-learning algorithm produces is basically a black box. (Try answering: "How does Watson think when it answers a question?")
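The "concepts near each other" idea from point 1 can be caricatured in a few lines: score each candidate answer by how often it appears near the question's keywords, and let data volume do the work. (This is my illustration of the idea, emphatically not Watson's actual pipeline; the corpus and window size are made up.)

```python
# Crude co-occurrence scorer: count how often a candidate answer appears
# within a small window of the question's keywords. No parsing, no
# understanding -- just proximity statistics over text.

def cooccurrence_score(candidate, keywords, corpus, window=4):
    words = corpus.lower().split()
    score = 0
    for i, w in enumerate(words):
        if w == candidate:
            nearby = words[max(0, i - window): i + window + 1]
            score += sum(nearby.count(k) for k in keywords)
    return score

corpus = ("paris is the capital of france . "
          "the capital of france is paris . "
          "london is the capital of england .")

# "Which city is the capital of France?" -> rank candidates by proximity.
ranked = sorted(["paris", "london"],
                key=lambda c: cooccurrence_score(c, ["capital", "france"], corpus),
                reverse=True)
print(ranked)  # -> ['paris', 'london']
```

With three sentences this barely works; the claim in point 1 is that with vast quantities of text, this kind of shallow statistic starts looking competent.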
There are impressive results that look like intelligence and that improve incrementally over time. But there is no progress toward an efficient "intelligence algorithm", or toward "understanding how intelligence works".
I can't remember offhand, but some AI researcher (maybe Marvin Minsky?) pointed out that people use the word "intelligence" to describe whatever humans can do for which the underlying algorithms are not understood. So as we discover more and more algorithms for do...