The theory of comparative advantage says that you should trade with people even if they are worse than you at everything (i.e., even if you have an absolute advantage across the board). Some have seen this idea as a reason to trust powerful AIs.
For instance, suppose you can make a hamburger using 10,000 joules of energy. You can also make a cat video for the same cost. The AI, on the other hand, can make hamburgers for 5 joules each and cat videos for 20.
Then you can both gain from trade. Instead of making a hamburger, make a cat video and trade it for two hamburgers. You've got two hamburgers for 10,000 joules of your own effort (instead of 20,000), and the AI has got a cat video for 10 joules of its own effort (instead of 20). So you both want to trade, and everything is fine and beautiful, and many cat videos and hamburgers will be made.
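To make the energy accounting concrete, here is a minimal sketch in Python using the post's illustrative numbers; nothing in it is load-bearing beyond the arithmetic:

```python
# Energy accounting for the trade example above (all figures in joules).
HUMAN_COST = 10_000  # human's cost to make either a hamburger or a cat video
AI_BURGER = 5        # AI's cost per hamburger
AI_VIDEO = 20        # AI's cost per cat video

# Without trade: the human makes two hamburgers directly.
human_without_trade = 2 * HUMAN_COST        # 20,000 J

# With trade: the human makes one cat video and swaps it for two hamburgers.
human_with_trade = 1 * HUMAN_COST           # 10,000 J

# The AI pays two hamburgers for a video it would otherwise make itself.
ai_without_trade = AI_VIDEO                 # 20 J to make its own video
ai_with_trade = 2 * AI_BURGER               # 10 J spent on hamburgers

print(human_without_trade - human_with_trade)  # 10000 J saved by the human
print(ai_without_trade - ai_with_trade)        # 10 J saved by the AI
```

Both parties come out ahead, which is all that comparative advantage promises.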
Except... though the AI would prefer to trade with you rather than not trade with you, it would much, much prefer to dispossess you of your resources and use them itself. With the energy you wasted on a single cat video, it could have produced 500 of them! If it values these videos, then it is desperate to take over your stuff. Its absolute advantage makes this too tempting.
It will desist only if its motivation is properly structured, or if it expects to lose more, over the course of history, by trying to grab your stuff. Assuming you could make a hundred cat videos a day, and the whole history of the universe would run for only that one day, the AI would try to grab your stuff even if it thought it had only one chance in fifty thousand of succeeding. As the history of the universe lengthens, or as the AI becomes more efficient, it will be willing to rebel at even more ridiculous odds.
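For concreteness, here is one way to run the expected-value comparison behind that figure. The assumptions are mine, not the post's (a risk-neutral AI, and a failed grab costing it only its foregone trade gains), so treat this as a sketch of the structure of the argument rather than the post's own calculation:

```python
# Grab-vs-trade, back of the envelope. Assumptions (mine, not the post's):
# the AI is risk-neutral, the universe lasts one day, and a failed grab
# costs the AI nothing beyond the trade gains it forfeits.
AI_VIDEO = 20                    # joules per cat video for the AI
HUMAN_ENERGY = 100 * 10_000      # human's daily budget: 100 videos at 10,000 J

grab_payoff = HUMAN_ENERGY / AI_VIDEO   # 50,000 videos if the grab succeeds

def breakeven_odds(trade_surplus_in_videos: float) -> float:
    """Success probability above which grabbing beats trading:
    the AI grabs whenever p * grab_payoff > trade_surplus."""
    return trade_surplus_in_videos / grab_payoff

print(breakeven_odds(0.5))   # 1e-05: surplus from a single traded video (10 J)
print(breakeven_odds(50))    # 0.001: surplus from a full day of trades (1,000 J)
```

However you account the AI's gains from trade, the break-even probability comes out absurdly small, which is the point: the AI's absolute advantage makes even long-shot grabs worth attempting.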
So if you already have guarantees in place to protect yourself, then comparative advantage will make the AI trade with you. But if you don't, comparative advantage and trade don't provide any extra security. The resources you waste are just too valuable to the AI.
EDIT: For those who wonder how this compares to trade between nations: it's extremely rare for any nation to have absolute advantages everywhere (especially ones this extreme). If you invade another nation, most of its value is in its infrastructure and its population: it takes time and effort to rebuild and co-opt these. Most nations don't/can't think long term (it could arguably be in the US's interests over the next ten million years to start invading everyone - but "the US" is not a single entity, and doesn't think in terms of "itself" in ten million years), would get damaged in a war, and are risk averse. And don't forget the importance of diplomatic culture and public opinion: even if it were in the US's interests to invade the UK, say, "it" would have great difficulty convincing its elites and its population to go along with this.
Laws, the costs of breaking them, and the costs of making different ones are just another optimization problem for businesses. Indeed, my singular insight about the intelligence services of nations is that the laws constraining civilians in commercial interactions within a country are explicitly not applied to government intelligence agents, or to police generally, especially when they operate against other countries.
An AI will be as constrained by laws as would a similarly intelligent corporation. An AI which is much smarter than the collective intelligence of the best human corporations will be much less constrained by laws, especially as it accumulates wealth, which is essentially control of valuable tools.
One would expect that in the mid term (as opposed to the long term), AIs would be part of corporations, and that AI + human alliances would be the most competitive.
If we get Kurzweil's future, as opposed to the LessWrong-orthodox future, AI will be integrated with human intelligence; that is, I will have modifications made to me that give me much higher intelligence than I have now. Conceivably, at some point, the enhancements will have me jumping to a non-human substrate, but the line between what was unmodified human and what is clearly no longer human will be very hard to define. Contrast that with the LessWrong vision of AIs running off to the singularity while humans sit there paralyzed, relying on their 1 kHz-clocked parallel processors built entirely of meat; in that case the dividing line SEEMS much clearer.
Modified humans: human or not? I'm betting that CEV, when calculated, will show that they are. I know I want to be smarter; how 'bout you?
And the laws of modified humans will be a whole lot more complex than the laws of bio-humans, just as the laws of humans are much more complex than the laws of monkeys.