They certainly would have missed "Don't enslave people", because they were still enslaving people themselves - which would make it impossible to update the Roman AI to reflect moral progress a few centuries down the line.
Historical note: the Romans had laws against enslaving the free-born, and they also allowed manumission (the freeing of slaves).
Many people think you can solve the Friendly AI problem just by writing certain failsafe rules into the superintelligent machine's programming, like Asimov's Three Laws of Robotics. I thought the rebuttal to this was in "Basic AI Drives" or one of Yudkowsky's major articles, but after skimming them I haven't found it. Where are the arguments against this suggestion?