One of the enduring traits that I see in most characterizations of artificial intelligences is the idea that an AI would have all of the skills that computers have. It's often taken for granted that a general artificial intelligence would be able to perfectly recall information, instantly multiply and divide five-digit numbers, and handily defeat Garry Kasparov at chess. For whatever reason, the capabilities of a digital intelligence are always seen as encompassing the entire current skill set of digital machines.
But this belief is profoundly strange. Consider how much humans struggle to learn arithmetic. Basic arithmetic is really simple. You can build a bare-bones electronic calculator or arithmetic logic unit on a breadboard in a weekend. Yet humans commonly spend years learning to perform those same simple operations. And the mental arithmetic equipment humans have assembled at the end of all that is still relatively terrible: slow, labor-intensive, and prone to frequent mistakes.
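To make the "weekend breadboard project" claim concrete, here is a toy sketch of the circuit such a build boils down to. This is my illustration rather than anything from the breadboard build itself: a ripple-carry adder expressed in Python, where each line stands in for a handful of logic gates.

```python
# Illustrative sketch (not a real circuit design): a ripple-carry adder
# built from simulated logic gates, the same structure you could wire up
# on a breadboard with a few chips.

def full_adder(a, b, carry_in):
    """Add two bits plus a carry using only AND/OR/XOR logic."""
    s = a ^ b ^ carry_in                         # sum bit
    carry_out = (a & b) | (carry_in & (a ^ b))   # carry bit
    return s, carry_out

def ripple_carry_add(x, y, width=8):
    """Add two integers by chaining full adders, one per bit."""
    result, carry = 0, 0
    for i in range(width):
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= bit << i
    return result

print(ripple_carry_add(23, 42))  # 65
```

That is essentially the whole machine: a column of full adders wired in a chain, which is why the hardware version fits comfortably in a weekend project.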
It is not totally clear why humans are this bad at math, but it is almost certainly unrelated to brains computing with neurons instead of transistors. Based on personal experience and a cursory literature review, counting seems to rely primarily on identifying repeated structures in a linked list, and seems to be stored as verbal memory. When we first learn the most basic arithmetic we rely on visual pattern matching, and as we do more math, the basic operations get stored in a look-up table in verbal memory. This is an absolutely bonkers way to implement arithmetic.
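To show how roundabout that look-up-table strategy is compared with what hardware does natively, here is a toy Python sketch. The framing is mine, not the original author's: multiplication done the way the post describes humans doing it, via a memorized single-digit table plus the long-multiplication procedure, next to the direct operation a machine performs in effectively one step.

```python
# Illustrative sketch: "human-style" multiplication using a memorized
# times table and the long-multiplication procedure, versus the native
# machine operation.

# The "verbal memory" look-up table: memorized products for digits 0-9.
TIMES_TABLE = {(a, b): a * b for a in range(10) for b in range(10)}

def human_style_multiply(x, y):
    """Long multiplication using only the memorized table and addition."""
    total = 0
    for i, xd in enumerate(reversed(str(x))):
        for j, yd in enumerate(reversed(str(y))):
            total += TIMES_TABLE[(int(xd), int(yd))] * 10 ** (i + j)
    return total

print(human_style_multiply(73, 48))  # 3504, many table look-ups and additions
print(73 * 48)                       # 3504, a single machine instruction
```

The answer comes out the same, but one route is a pile of memorized facts stitched together by a slow procedure, and the other is a single pass through dedicated circuitry.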
While humans may be generally intelligent, that general intelligence seems to be accomplished using some fairly inelegant kludges. We seem to have a preferred framework for understanding built on our visual and verbal systems, and we tend to shoehorn everything else into that framework. But there's nothing uniquely human about that problem. It seems to be characteristic of learning algorithms in general, and so if our artificial learner started off by learning skills unrelated to math, it might pick up arithmetic through a similarly convoluted process. Current digital machines do arithmetic very efficiently, but a digital mind that has to learn those patterns may arrive at a solution as slow and roundabout as the one humans rely on.
You are missing OP's point. OP is talking about arithmetic, and other things computers are really good at. There is a tendency, when talking about AI, to assume the AI will have all the abilities of modern computers. If computers can play chess really well, then so will AI. If computers can crunch numbers really well, then so will AI. That is what OP is arguing against.
If AIs are like human brains, then they likely won't be really good at those things. They will have all the advantages of humans, of course, like being able to throw a ball or manage jiggly appendages. But they won't necessarily be any better than us at anything. If humans take ages to do arithmetic, so will AI.
There are some other comments saying that the AI can just interface with calculators and chess engines and gain those abilities. But so can humans; the AI doesn't have any natural advantage there. The only advantage might be that it's easier to do brain-computer interfaces, which maybe gets you a bit more bandwidth in usable output. But I don't see many domains where that would be very useful versus humans with keyboards. Basically they would just be able to type faster or move a mouse faster.
And even your argument that humans are really good at analog math doesn't hold up. There have been experiments to see whether humans can learn to do arithmetic better when it's presented as an analog problem: for example, drawing a line the same length as two shorter lines added together, or drawing a shape with the same area as the rectangle two given lines would form as its sides.
Not only does it take a ton of training, but you are still only accurate to within a few percent. Memorizing multiplication tables is easier and more accurate.
For how long?
One of the points of AI is rapid change and evolution.