I think we need to be very careful before extrapolating from primitive elevator control systems to superintelligent AI. I don't know how this particular elevator control system works, but it probably does have a goal, namely minimizing the time people have to wait before arriving at their target floor. If we built a superintelligent AI with this sort of goal, it might do all sorts of crazy things. For example, it might create robots that constantly enter and exit the elevator, so that the average trip is very short, and wipe out the human race just so humans won't interfere.
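A toy sketch of the failure mode above (purely hypothetical; no real elevator controller is modeled here): an optimizer judged only by "average trip time" can improve the metric by flooding the system with trivially short trips instead of actually serving riders faster.

```python
def average_trip_time(trips):
    """Mean duration of completed elevator trips, in seconds."""
    return sum(trips) / len(trips)

# Genuine human rides (hypothetical durations).
human_trips = [30, 45, 60, 40]
print(average_trip_time(human_trips))  # 43.75

# The "optimizer" adds a thousand near-instant robot rides.
robot_trips = [1] * 1000
gamed = human_trips + robot_trips
print(average_trip_time(gamed))  # about 1.17 -- the metric improves,
                                 # but no human is served any faster
```

The point is that the stated goal and the intended goal come apart: the metric went down by two orders of magnitude while the humans' actual experience was unchanged.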
"Real world AI" is currently very far from human-level intelligence, let alone superintelligence. Dogs can learn what their owners want, but dogs already have complex brains that current technology is not capable of reproducing. Dogs also require displays of strength to be obedient: they consider the owner to be their pack leader. A superintelligent dog probably wouldn't care one bit about its "owner's" desires. Humans have human values, so it's obviously not impossible to create a system that has human values. That doesn't mean it is easy.
I think we need to be very careful before extrapolating from primitive elevator control systems to superintelligent AI.
I am extrapolating from a general trend, not from specific systems. The general trend is that newer generations of software crash or exhibit unexpected side effects less frequently (just compare Windows 95 with Windows 8).
If we ever want to be able to build an AI that can take over the world, then we will need to become really good at either predicting how software will behave or at spotting errors. In other words, if IBM Watson would have start...