They also use a variant of this as an in-universe reason why genetic engineering of humans isn't done: people modified for increased intelligence tend to turn out evil, the way Khan did. They haven't solved the alignment problem for biologically enhanced intelligence either.
Is it still as reassuring when you consider that, despite knowing this, they still routinely run their computers at a level of capability just a few inches short of a treacherous turn? ;)
There's an early-season TNG episode in which a race called the Bynars upgrade the Enterprise-D's computer, and as a result Picard and Riker are amazed at how much more real-seeming the characters the holodeck now creates are. So making an AGI that can pass the Turing test does seem to be a rare and difficult thing in the TNG era, although a ship's computer apparently does have hardware capable of running one. So part of Dr. Soong's achievement also seems to be making a non-evil AGI that runs on hardware that fits in a human-sized body instead of a ship-sized one.
In the Star Trek universe, we are told that it’s really hard to make genuine artificial intelligence, and that Data is so special because he’s a rare example of someone having managed to create one.
But this doesn’t seem to be the best hypothesis for explaining the evidence that we’ve actually seen.
There seems to be a pattern here: when an AI is built to carry out a relatively restricted role, things work fine. But once it is given broad autonomy and allowed to do open-ended learning, there’s a very high chance that it gets out of control. The Federation witnessed this for the first time with the M-5 computer in “The Ultimate Computer.” Since then, they have ensured that their AI systems are either restricted to narrow tasks or only run briefly in emergencies, to avoid things getting out of hand. Of course, this doesn’t change the fact that more intelligence is generally useful, so e.g. starship computers are equipped with powerful general-intelligence capabilities, which do sometimes get out of hand.
Dr. Soong’s achievement with Data was not building a general intelligence, but building a general intelligence that didn’t go crazy. (And before Data, he failed at that task once, with Lore.)
The Federation’s issue with AI is not that they haven’t solved artificial general intelligence; it’s that they haven’t reliably solved the AI alignment problem.