I'm very much out of touch with the AI scene, but I believe the key distinction is between Artificial General Intelligence and specialized approaches like chess-playing programs or self-driving car systems.
A chess program's goal structure is strictly limited to playing chess, but any AI with the ability to formulate arbitrary sub-goals could stumble on self-improvement as one of those sub-goals.
Today's specialized AIs have little chance of becoming self-improving, but as specialized AIs adopt more advanced techniques (like the ones Nesov suggested), the line between specialized AIs and AGIs won't be so clear. After all, chess-playing and car-driving programs can always be implemented as AGIs with very specific and limited super-goals, so I expect that as AGI techniques advance, people working on specialized AIs will adopt them too, perhaps without giving as much thought to the AI-foom problem.
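To make the distinction concrete, here is a toy sketch (all names hypothetical; no claim that any real system looks like this): a fixed-goal chess engine structurally cannot pursue anything except its hard-coded objective, while a generic planner that decomposes goals into sub-goals has no such limit, so "improve my own planning" can surface as an ordinary instrumental step.

```python
from dataclasses import dataclass, field

def chess_engine(position, legal_moves, evaluate):
    """Fixed goal structure: the only objective this system can ever
    pursue is 'pick the highest-scoring legal move'. Nothing here can
    generate a new goal."""
    return max(legal_moves(position), key=evaluate)

@dataclass
class Goal:
    description: str
    subgoals: list = field(default_factory=list)

def decompose(goal):
    """Generic planner step: break a goal into instrumental sub-goals.
    Nothing restricts what counts as a sub-goal, so a goal about the
    planner itself can appear alongside goals about the task."""
    if "win" in goal.description:
        goal.subgoals = [
            Goal("model the opponent"),
            Goal("improve my own planning ability"),  # the worrying case
        ]
    return goal.subgoals

# Asked to 'win at chess', the planner produces a sub-goal that is
# about itself rather than about chess.
for sub in decompose(Goal("win at chess")):
    print(sub.description)
```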
A friend of mine is about to launch himself heavily into the realm of AI programming. The details of his approach aren't important; probabilities dictate that he is unlikely to score a major success. He has, however, asked me for advice on how to design a safe(r) AI. I've been pointing him in the right directions and sending him links to useful posts on this blog and at the SIAI.
Do people here have any recommendations they'd like me to pass on? Hopefully, these may form the basis of a condensed 'warning pack' for other AI makers.
Addendum: Advice along the lines of "don't do it" is vital and good, but unlikely to be followed. The coding will almost certainly happen; is there any way of making it less genocidally risky?