I'm slightly confused by the AI's capabilities, so this may be irrelevant, but I'll try.
The AI isn't superintelligent.
But it can corrupt/hack any non-heroic robots/drones/factories/people it is exposed to, to the point where it could seriously fuck up Earth, WMD-style.
And when it targeted the spaceship, it DIDN'T do this; it just targeted the spaceship (it didn't hop back to Earth and then try to take over those robots, drones, and factories).
So logically, it has some kind of targeting that made it destroy only the ship and not Earth.
Understanding how that targeting works, and whether it can be determined safely, would be crucial to making any suggestions about the AI. Here are several possibilities, with parallels to comparable story elements:
1: If the targeting is hard-coded into the executable, then the AI might simply attempt to go back and derelictify a spaceship which already has no people on it. So it parallels a spent artillery shell.
2: Or it might have hostile targeting, where, for instance, if the Heroes are at war with the Creators of the AI, they run Vile AI.exe and say "Target your creators," and the AI says "Of course" - but then targets the people who said that. So it parallels an enemy soldier.
3: Or it might have spatial targeting, where it can only target people in a defined area: if the Heroes are at war with the Creators of the AI, they run Vile AI.exe and say "Target your creators," and the AI says "Of course; please enter the coordinates of my creators." So it parallels an aimable bomb.
4: Or it might have smart targeting, where it can identify its targets on its own: if the Heroes are at war with the Creators of the AI, they run Vile AI.exe and say "Target your creators," the AI says "Of course," figures out where its creators are, and attacks them. So it parallels a brutal mercenary.
5: Or it might not have any kind of targeting at all, and the creators just got extremely lucky that it more or less did what they wanted, in which case running it might result in just about anything. So it parallels a damaged nuclear bomb, which may merely leak radiation, may set off only its conventional explosives without a nuclear yield, or may fully detonate and destroy everything in a wide area.
6: Or even attempting to determine its targeting is simply too dangerous, in which case you might as well assume case 5, since that's probably the worst case.
1: Destroy. 2: Destroy. 3: Possibly keep, depending on the Heroes' sense of ethics. 4: Possibly keep, depending on the Heroes' sense of ethics. 5: Destroy. 6: Destroy.
A good analysis, bringing up a few points I hadn't explicitly considered. (Which is, after all, why I started this thread, even though I expected a karma hit for it.) I had been thinking of the AI's focus on that one particular ship as primarily a consequence of limited interplanetary bandwidth, but I'll probably end up adopting your 2, 3, or 4.
As a relatively minor aside: at this point in the plot, Our Heroes don't really have any idea who the AI's creators actually are. Even limiting the candidates to those with means, motive, and opportunity still leaves a fa...
One plot-thread in my pet SF setting, 'New Attica', has ended up with Our Heroes in possession of the data, software, and suchlike which comprise a non-sapient, but conversation-capable, AI. There are bunches of those floating around the solar system, programmed for various tasks; what makes this one special is that it's evil with a capital ugh: it has captured people inside VR, put them through violent and degrading scenarios to drive them to despair, and tried keeping them in there, for extended periods, until they died of exhaustion.
Through a few clever strategies, Our Heroes recognized they weren't in reality, engineered their escape, and shut down the AI, with no permanent physical harm done to them (though the same can't be said for the late crew of the previous ship it was on). And now they get to debate amongst themselves: what should they do with the thing? What use or purpose could they put it to that would provide a greater benefit than the risk of it slipping free of whatever fetters they place upon it?
This is a somewhat different take from Eliezer's now-classic 'boxed AI' problem: this AI isn't superintelligent, and it has already demonstrated some aspects of itself by performing highly antisocial activities. However, there are enough similarities that, perhaps, thinking about one might shed some light on the other.
So: Anyone want to create some further verses for something sung to the tune of 'Drunken Sailor'?
What shall we do with an evil AI?
What shall we do with an evil AI?
What shall we do with an evil AI?
Ear-lie in the future.
Weigh-hay and upgrade ourselves,
Weigh-hay and upgrade ourselves,
Weigh-hay and upgrade ourselves,
Ear-lie in the future.