We don't have that degree of understanding of the human brain, no. Sure, we know physics, but we don't know the initial conditions, even.
There are several layers of abstraction one could cram between our knowledge and conscious thoughts.
No, what I'm referring to is an algorithm that you completely grok, but whose execution is just too big. A bit like how you could completely specify the solution to the Towers of Hanoi puzzle with 64 disks, but actually carrying out all 2^64 - 1 moves is simply beyond your powers.
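To make the analogy concrete, here's a sketch (mine, not from the original comment) of the standard recursive Hanoi solution: the algorithm fits in a few lines and is trivial to grok, but the move count doubles with every disk, so executing it for 64 disks is hopeless.

```python
# Towers of Hanoi: the whole solution is these few lines of recursion,
# yet n = 64 disks requires 2**64 - 1 moves.
def hanoi(n, src, aux, dst, moves):
    if n == 0:
        return
    hanoi(n - 1, src, dst, aux, moves)  # move n-1 disks out of the way
    moves.append((src, dst))            # move the largest disk
    hanoi(n - 1, aux, src, dst, moves)  # move n-1 disks back on top

moves = []
hanoi(4, "A", "B", "C", moves)
print(len(moves))   # 2**4 - 1 = 15 moves for just 4 disks
print(2**64 - 1)    # 18446744073709551615 moves for 64 disks
```

Complete understanding of the algorithm, in other words, buys you nothing against the sheer size of the execution.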
It's theoretically possible that an AI could result from that, but it seems vanishingly unlikely to me. I don't think an AI is going to come from someone hacking together an intelligence in their basement - if it was simple enough for a single human to grok, 50 years of AI research probably would have come up with it already. Simple algorithms can produce complex results, yes, but they very rarely solve complex problems.
Claim: The first human-level AIs are not likely to undergo an intelligence explosion.
1) Brains have a ton of computational power: ~86 billion neurons and trillions of connections between them. Unless there's a "shortcut" to intelligence, we won't be able to efficiently simulate a brain for a long time. http://io9.com/this-computer-took-40-minutes-to-simulate-one-second-of-1043288954 describes one of the largest computers in the world taking 40 minutes to simulate 1 second of brain activity (i.e. this "AI" would think 2400 times slower than you or me). The first AIs are not likely to be fast thinkers.
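The 2400x figure falls straight out of the numbers in the linked article:

```python
# Slowdown factor for the simulation described in the io9 article:
# 40 minutes of wall-clock time to model 1 second of brain activity.
simulated_seconds = 1
wall_clock_seconds = 40 * 60          # 40 minutes, in seconds
slowdown = wall_clock_seconds / simulated_seconds
print(slowdown)                        # 2400.0 -- 2400x slower than real time
```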
2) Being able to read your own source code does not mean you can self-modify. You know that you're made of DNA. You can even get your own "source code" for a few thousand dollars. No humans have successfully self-modified into an intelligence explosion; the idea seems laughable.
3) Self-improvement is not like compound interest: if an AI comes up with an idea to modify its source code to make it smarter, that doesn't automatically mean it will have a new idea tomorrow. In fact, as it picks off low-hanging fruit, new ideas will probably be harder and harder to think of. There's no guarantee that "how smart the AI is" will keep up with "how hard it is to think of ways to make the AI smarter"; to me, it seems very unlikely.
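A toy model (purely illustrative, with made-up parameters) of the difference between the two dynamics: compound interest multiplies by a fixed rate every step, while "low-hanging fruit" growth stalls if each new insight costs more search effort than the intelligence it adds.

```python
# Toy comparison (illustrative assumptions, not a real model of AI progress):
# compound growth vs. self-improvement with diminishing returns.

def compound(start, rate, steps):
    # Classic compound interest: capability multiplies by a fixed rate each step.
    x = start
    for _ in range(steps):
        x *= rate
    return x

def low_hanging_fruit(start, steps):
    # Each successive insight is assumed twice as hard to find, while the
    # AI's search speed equals only its current intelligence, so the gain
    # per step shrinks and growth plateaus instead of exploding.
    intelligence = start
    effort_needed = 1.0
    for _ in range(steps):
        gain = intelligence / effort_needed  # harder ideas -> smaller gains
        intelligence += gain
        effort_needed *= 2.0                 # difficulty outpaces smarts
    return intelligence

print(compound(1.0, 1.1, 50))        # ~117.4: explosive growth
print(low_hanging_fruit(1.0, 50))    # plateaus far short of an explosion
```

Whether real self-improvement looks like the first curve or the second is exactly the open question; the point is only that "can edit its own source" doesn't by itself pick the explosive one.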