I don't understand your question. Are you saying that my comment wasn't about AIs being like humans, or are you saying that it doesn't matter if software is only able to solve a set of problems that it wasn't designed for?
I am suggesting your comment implied, to me, that you still compare AIs with humans a bit too much. We work to make software able to solve the set of problems it was designed for. This applies to Hello World, and to a Singleton.
Turing's Test is from 1950. We don't judge dogs only by how human they are; judging software against a human ideal is a kind of species bias.
Software is the new System. It errs. Some of its errors are jokes (witness funny auto-correct). Driverless cars won't crash the way we do, though a few may still crash.
These processes are already our partners (Siri). Whether or not a singleton ever evolves rapidly, software is evolving continuously, now.
Crocker's Rules