The theory of comparative advantage says that you should trade with people even if they are worse than you at everything (i.e. even if you have an absolute advantage in every good). Some have seen this idea as a reason to trust powerful AIs.
For instance, suppose you can make a hamburger by using 10 000 joules of energy. You can also make a cat video for the same cost. The AI, on the other hand, can make hamburgers for 5 joules each and cat videos for 20.
Then both of you can gain from trade. Instead of making a hamburger, make a cat video and trade it for two hamburgers. You've got two hamburgers for 10 000 joules of your own effort (instead of 20 000), and the AI has got a cat video for 10 joules of its own effort (instead of 20). So you both want to trade, and everything is fine and beautiful and many cat videos and hamburgers will be made.
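The arithmetic above can be checked with a minimal sketch, using only the joule costs from the example:

```python
# Energy cost in joules to produce one unit of each good
# (numbers taken from the example above).
HUMAN = {"hamburger": 10_000, "cat_video": 10_000}
AI = {"hamburger": 5, "cat_video": 20}

# Without trade: the human spends 20,000 J to get two hamburgers.
no_trade_cost = 2 * HUMAN["hamburger"]

# With trade: the human makes one cat video (10,000 J) and swaps it
# for two hamburgers; the AI spends 2 x 5 J making those hamburgers
# and receives a video that would have cost it 20 J to make itself.
human_trade_cost = HUMAN["cat_video"]
ai_trade_cost = 2 * AI["hamburger"]
ai_direct_cost = AI["cat_video"]

print(no_trade_cost - human_trade_cost)  # human saves 10,000 J
print(ai_direct_cost - ai_trade_cost)    # AI saves 10 J
```

Both parties come out ahead in energy terms, which is all that comparative advantage promises.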
Except... though the AI would prefer to trade with you rather than not trade with you, it would much, much prefer to dispossess you of your resources and use them itself. With the energy you wasted on a single cat video, it could have produced 500 of them! If it values these videos, then it is desperate to take over your stuff. Its absolute advantage makes this too tempting.
It would desist only if its motivation is properly structured, or if it expects to lose more, over the course of history, by trying to grab your stuff than it stands to gain. Assuming you could make a hundred cat videos a day, and the whole history of the universe would only run for that one day, the AI would try to grab your stuff even if it thought it had only one chance in fifty thousand of succeeding. As the history of the universe lengthens, or the AI becomes more efficient, it would be willing to rebel at even more ridiculous odds.
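The one-in-fifty-thousand figure can be reconstructed roughly as follows; the inputs come from the example, but the break-even payoff model here is my own assumption (the post doesn't spell one out):

```python
# Inputs from the example; the break-even model is an assumption.
human_videos_per_day = 100
human_cost_per_video = 10_000  # joules the human spends per video
ai_cost_per_video = 20         # joules the AI spends per video

# Energy the human consumes over the one-day "history of the universe":
daily_energy = human_videos_per_day * human_cost_per_video  # 1,000,000 J

# Cat videos the AI could make with that energy if it seized it:
videos_if_grab = daily_energy // ai_cost_per_video

# Assumption: if a failed grab costs the AI on the order of one
# video's worth of resources, the gamble has positive expected value
# whenever p * videos_if_grab > 1, i.e. p > 1 / videos_if_grab.
break_even_odds = 1 / videos_if_grab

print(videos_if_grab)   # 50000
print(break_even_odds)  # 2e-05, i.e. one chance in fifty thousand
```

A longer history multiplies `daily_energy`, and a more efficient AI shrinks `ai_cost_per_video`; either change pushes the break-even odds still lower.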
So if you already have guarantees in place to protect yourself, then comparative advantage will make the AI trade with you. But if you don't, comparative advantage and trade don't provide any extra security. The resources you waste are just too valuable to the AI.
EDIT: For those who wonder how this compares to trade between nations: it's extremely rare for any nation to have absolute advantages everywhere (especially this extreme). If you invade another nation, most of their value is in their infrastructure and their population: it takes time and effort to rebuild and co-opt these. Most nations don't/can't think long term (it could arguably be in US interests over the next ten million years to start invading everyone - but "the US" is not a single entity, and doesn't think in terms of "itself" in ten million years), would get damaged in a war, and are risk averse. And don't forget the importance of diplomatic culture and public opinion: even if it was in the US's interests to invade the UK, say, "it" would have great difficulty convincing its elites and its population to go along with this.
I guess that depends on what level of AI we’re talking about. I mean, it’s true in a literal sense, but starting from a certain point they might approximate magic very well.
Insert analogy with humans and dogs here. Or a better example for this situation: think of a poker game: it’s got “laws”, both “man-made” (the rules) and “natural” (probability). Even if all the other players are champions, if one player can instantly compute exactly all the probabilities involved, see clearly every external physiological stress marker on the other players (while showing none), has an excellent understanding of human nature, knows all previous games of all players, and is smart enough to integrate all that in real time, that player will basically always win, without “breaking the laws”.
I’m not convinced. If the AI were subject to the same factors a large company is subject to today, we wouldn’t need AIs. Note that a large company is basically a composite agent composed of people plus the programs people can write. That is, the class of inventive problems it can solve is limited to those that fit a human brain, even if it can work on more than one in parallel. Also, communication bandwidth between thinking nodes (i.e., the humans) is even worse than that inside a brain, and those nodes all have interests of their own that can differ sharply from those of the company itself.
Basically, saying that an AGI is limited by the same factors as a large company is a bit like saying that a human is limited by the same factors as a powerful pack of chimps. And yet, if they manage to survive an initial period of preparation, a human can pretty much "conquer" any pack of chimps they want. (E.g., capture, kill, cut trees and build a house with a moat.)
If you think about it, in a way, chimps (or Hominoidea in general) already had their singularity, and they have no idea what’s going on whenever we’re involved.
You are proposing that AIs are magic genies. Take your poker example. While a computer program can certainly quickly calculate all the probabilities involved, and can probably develop a reasonable strategy for bluffing, that's as far as our knowledge goes.
We do not know if it is even possible to see clearly all external physiological stress markers on the other players, or to have an excellent understanding of human nature. How is a computer going to do this? Humans can't. Humans can't predict the behavior of dogs or chimpanzees, and they're operating on a lev...