In 1950, the very first paper ever written on computer chess, by Shannon, specified the algorithm that would play perfect chess given unlimited computing power. In reality, computing power was limited, so computers did not play superhuman chess until 47 years later.
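Shannon's in-principle procedure is exhaustive minimax: search every line of play to the end and back up the game-theoretic value. For chess this is astronomically expensive, but the same procedure can be run for real on a trivial game. The sketch below (the game choice and function names are illustrative, not from Shannon's paper) plays perfect Nim, where each player removes 1 or 2 stones and taking the last stone wins:

```python
# Exhaustive minimax: the "unlimited computing power" recipe for perfect play,
# demonstrated on a game small enough that exhaustive search is affordable.
# Rules: players alternate removing 1 or 2 stones; taking the last stone wins.

def minimax(stones, player):
    """Game value under perfect play: +1 if player 0 wins, -1 if player 1 wins."""
    if stones == 0:
        # The player to move has no stones left: the other player just took
        # the last stone and has already won.
        return 1 if player == 1 else -1
    values = [minimax(stones - take, 1 - player)
              for take in (1, 2) if take <= stones]
    # Player 0 maximizes the value; player 1 minimizes it.
    return max(values) if player == 0 else min(values)

def best_move(stones, player):
    """The perfect move: the one whose resulting position has the best value."""
    moves = [take for take in (1, 2) if take <= stones]
    value_after = lambda take: minimax(stones - take, 1 - player)
    return max(moves, key=value_after) if player == 0 else min(moves, key=value_after)

print(minimax(3, 0))      # positions that are multiples of 3 are lost for the mover
print(best_move(4, 0))    # from 4 stones, perfect play leaves the opponent 3
```

The point of the illustration is exactly the gap the text describes: these few lines are a complete specification of perfect play for any two-player zero-sum game of perfect information, yet scaling them to chess took 47 years of additional hardware and insight.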
So knowing how to do something in principle doesn't always mean you can do it in practice without additional hard work and insights. Nonetheless, it remains true that if you don't know how to play chess using unlimited computing power, you definitely can't play chess using limited computing power. In 1836, Edgar Allan Poe, who was also an amateur magician, carefully argued that no automaton could ever play chess, since at each turn there are many possible moves, but machines can only make deterministic motions. Between Poe in 1836 and Shannon in 1950 there was a key increase in the understanding of computer chess, represented by the intermediate work of Turing, Church, and others.
Similarly, in modern AI, and especially in value alignment theory, there's a sharp divide between problems we know how to solve using unlimited computing power, and problems which are confusing enough that we can't even state the simple program that would solve them given a larger-than-the-universe computer. It is an alarming aspect of the current state of affairs that we know how to build a hostile, non-value-aligned AI using unlimited computing power, but not how to build a nice AI using unlimited computing power. The unbounded analysis program in value alignment theory centers on crossing this gap.
Besides producing conceptual understanding, mathematization also helps build cooperative discourse. Marcus Hutter's AIXI marks not only the first time that somebody described a short Python program that would be a strong AI if run on a hypercomputer, but also the first time that anyone was able to point to a formal specification and say "And that is why this AI design would wipe out the human species and turn all matter within its reach into configurations of insignificant value" - and the reply wasn't just "Oh, well, of course that's not what I meant," because the specification of AIXI was fully nailed down. While not every key issue in value alignment theory is formalizable, there's a strong drive toward formalism not just for conceptual clarity but also to sustain a cooperative process of building up collective debate and understanding of ideas.
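To give a sense of what "fully nailed down" means here, AIXI's policy can be stated as a single expression. Roughly, in Hutter's notation (this is a paraphrase of the standard form, not quoted from this document), the action chosen at cycle $k$, with horizon $m$, is:

$$
a_k \;:=\; \arg\max_{a_k} \sum_{o_k r_k} \;\cdots\; \max_{a_m} \sum_{o_m r_m} \; \bigl[\, r_k + \cdots + r_m \,\bigr] \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
$$

where $U$ is a universal Turing machine, $q$ ranges over programs of length $\ell(q)$ consistent with the interaction history of actions $a_i$, observations $o_i$, and rewards $r_i$. Every symbol is defined; there is no step at which a critic's objection can be deflected with "that's not what I meant," which is precisely what makes the specification debatable in the productive sense.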