One of the enduring traits I see in most characterizations of artificial intelligences is the idea that an AI would have all of the skills that computers have. It's often taken for granted that a general artificial intelligence would be able to perfectly recall information, instantly multiply and divide 5-digit numbers, and handily defeat Garry Kasparov at chess. For whatever reason, the capabilities of a digital intelligence are always seen as encompassing the entire current skill set of digital machines.
But this belief is profoundly strange. Consider how much humans struggle to learn arithmetic. Basic arithmetic is really simple: you can build a bare-bones electronic calculator/arithmetic logic unit on a breadboard in a weekend. Yet humans commonly spend years learning how to perform those same simple operations. And the mental arithmetic equipment humans assemble at the end of all this is still relatively terrible: slow, labor-intensive, and prone to frequent mistakes.
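To get a sense of how little machinery is involved, here is a minimal sketch, in Python rather than on a breadboard, of the logic a simple adder circuit uses. The function names are purely illustrative, but the operations are the same handful of gates you would wire up by hand: each full adder combines two input bits and a carry, and chaining them gives multi-digit addition.

```python
def full_adder(a: int, b: int, carry_in: int) -> tuple[int, int]:
    """One full adder: a few gates producing (sum bit, carry out) from three input bits."""
    sum_bit = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return sum_bit, carry_out


def ripple_carry_add(x: int, y: int, width: int = 16) -> int:
    """Add two non-negative integers by chaining full adders, least significant bit first."""
    result, carry = 0, 0
    for i in range(width):
        bit_sum, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= bit_sum << i
    return result


print(ripple_carry_add(1234, 5678))  # 6912
```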
It is not totally clear why humans are this bad at math. It is almost certainly unrelated to brains computing with neurons instead of transistors. Based on personal experience and a cursory literature review, counting seems to work like traversing a linked list (each number learned as the successor of the one before it), with the sequence stored as verbal memory. When we first learn the most basic arithmetic we rely on visual pattern matching, and as we do more math, basic operations get stored in a look-up table in verbal memory. This is an absolutely bonkers way to implement arithmetic.
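For contrast with the adder above, here is a deliberately clumsy caricature, in code, of the approach just described. The names and structures are illustrative rather than any kind of cognitive model, but they capture the flavor: counting along a memorized successor chain, and pulling single-digit sums out of a look-up table of recited facts rather than computing anything.

```python
# Counting as a chain of successors: to get from 7 to 7 + 5, take five steps.
SUCCESSOR = {n: n + 1 for n in range(100)}


def add_by_counting(a: int, b: int) -> int:
    total = a
    for _ in range(b):            # "eight, nine, ten, eleven, twelve"
        total = SUCCESSOR[total]
    return total


# Memorized single-digit facts, as if recited: "seven plus five is twelve".
ADDITION_FACTS = {(a, b): a + b for a in range(10) for b in range(10)}


def add_by_recall(x: int, y: int) -> int:
    """Column-by-column addition using only memorized facts plus a slow carrying procedure."""
    result, carry, place = 0, 0, 1
    while x or y or carry:
        column = ADDITION_FACTS[(x % 10, y % 10)] + carry
        result += (column % 10) * place
        carry = column // 10
        x, y, place = x // 10, y // 10, place * 10
    return result


print(add_by_counting(7, 5))       # 12
print(add_by_recall(1234, 5678))   # 6912
```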
While humans may be generally intelligent, that general intelligence seems to be accomplished using some fairly inelegant kludges. We seem to have a preferred framework for understanding built on our visual and verbal systems, and we tend to shoehorn everything else into that framework. But there's nothing uniquely human about that problem. It seems to be characteristic of learning algorithms in general, so if our artificial learner started off by learning skills unrelated to math, it might pick up arithmetic through a similarly roundabout route. Current digital machines do arithmetic very efficiently, but a digital mind that has to learn those patterns may arrive at a solution as slow and convoluted as the one humans rely on.
Arithmetic is a bit hard for RNNs to learn, but they can end up much better at it than humans. (Also, the reason it is used as a challenge is that it is a bit tricky, but not very tricky.)
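For context, arithmetic is usually posed to an RNN as a character-level sequence-to-sequence task, similar in spirit to the well-known Keras "addition RNN" example. The sketch below only generates and encodes the data (the model and training loop are omitted), and all of the constants are illustrative assumptions: the network never sees numbers as numbers, only strings like "123+45" mapping to "168", which is part of what makes the problem tricky but not very tricky.

```python
import random

VOCAB = "0123456789+ "                  # digits, plus sign, space padding
CHAR_TO_ID = {c: i for i, c in enumerate(VOCAB)}
MAX_DIGITS = 3
IN_LEN = 2 * MAX_DIGITS + 1             # e.g. "123+45" padded to 7 characters
OUT_LEN = MAX_DIGITS + 1                # e.g. "168" padded to 4 characters


def encode(s: str, length: int) -> list[int]:
    """Pad a string with spaces and map each character to an integer id."""
    return [CHAR_TO_ID[c] for c in s.ljust(length)]


def make_example() -> tuple[list[int], list[int]]:
    """One training pair: encoded question string and encoded answer string."""
    a = random.randint(0, 10**MAX_DIGITS - 1)
    b = random.randint(0, 10**MAX_DIGITS - 1)
    return encode(f"{a}+{b}", IN_LEN), encode(str(a + b), OUT_LEN)


# A small LSTM or GRU would read the input ids one character at a time and be
# trained to emit the answer ids; it has to discover carrying on its own.
x, y = make_example()
print(x, y)
```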
It is probably also easy to "teach" humans (over evolutionary time) to be much better at math than we currently are; there's just no selection pressure for math performance. That seems like the most likely difference between humans and computers.
Only after some engineering effort. Researchers didn't just throw a random RNN at the problem in 1990 and find that it worked as well as transistors at arithmetic. Plus, if you want to pick extremes, are the best RNNs today better at adding or multiplying extremely large numbers than human savants?