Would the first AI want more AIs around? Wouldn't it compete more with other AIs than with humans for resources? Or do you assume that humans, having made an AI smarter than an individual human, would work to network AIs into something even smarter?
Either way, the scaling issue is interesting. I would expect the gain from networking AIs to differ from the gain from networking humans, but I'm not sure which would work better. Differences among individual humans are a potential source of conflict, but can also make the whole greater than the sum of the parts. I wouldn't expect complementarity among a bunch of identical AIs. Generating useful differences would be an interesting problem.
If there is more to be gained by adding an additional AI than there is by scaling up the individual AI, then the best strategy for the AI is to create more AIs with the same utility function.
Edited to add: Unless, perhaps, the AI had an explicit dislike of creating others, in which case it would be a matter of which effect was stronger.
See also: Challenging the Difficult and Tips and Tricks for Answering Hard Questions.
From Michael Nielsen's Reinventing Discovery:
This episode is a microcosm of how intellectual progress happens.
Humanity's intellectual history is not the story of a Few Great Men who had a burst of insight, cried "Eureka!" and jumped 10 paces ahead of everyone else. More often, an intellectual breakthrough is the story of dozens of people building on the ideas of others before them, making wrong turns, proposing and discarding ideas, combining insights from multiple subfields, slamming into brick walls and getting back up again. Very slowly, the space around the solution is crowded in by dozens of investigators until finally one of them hits the payload.
The problem you're trying to solve may look impossible. It may look like a wrong question, and you don't know what the right question to ask is. The problem may have stymied investigators for decades, or centuries.
If so, take heart: we've been in your situation many times before. Almost every problem we've ever solved was once phrased as a wrong question, and looked impossible. Remember the persistence required for science: what "thousands of disinterested moral lives of men lie buried in its mere foundations; what patience and postponement... are wrought into its very stones and mortar."
"Genius is 1 percent inspiration, 99 percent perspiration," said Thomas Edison, and he should've known: It took him hundreds of tweaks to get his incandescent light bulb to work well, and he was already building on the work of 22 earlier inventors of incandescent lights.
Pick any piece of progress you think of as a "sudden breakthrough," read a history book about just that one breakthrough, and you will find that it was the result of messy progress like the Polymath Project, but slower: multiple investigators, wrong turns, ideas proposed and combined and discarded, the space around the final breakthrough slowly encroached upon from many angles.
Consider what I said earlier about the problem of Friendly AI:
So: Are you facing an impossible problem? Don't let that stop you, if the problem is important enough. Hack away at the edges. Look to similar problems in other fields for insight. Poke here and there and everywhere, and put extra pressure where the problem seems to give a little. Ask for help. Try different tools. Don't give up; keep hacking away at the edges.
One day you may hit the payload.