Most concerns about AI tend to boil down to:
- Loss of control to AI systems - What if AI were smarter than us and took over the world?
- Concentration of power - What if AI gave too much power to someone bad?
I'm surprised I haven't heard consideration of a third, more basic risk.
Will this technology be good?
Suppose you're in the Southern United States in 1793 and you believe that an important moral question—perhaps the most important moral question—is the labor ethics of how cotton is processed. I suspect I don't need to go into detail about why you might think this.
Proceeding directly from this belief, an obviously good project suggests itself: what if a machine could process cotton instead of people? Then at least the people who processed cotton wouldn't have to anymore.[1]
Imagine you work hard and, through grit and luck, it works!
What is your confidence interval for how much better this makes life?
I hope your interval included negative numbers. For some reason, I never hear negative numbers for how much better life would be if AI could do lots of jobs for us.
This is in fact exactly what happened. Eli Whitney tried to reduce enslaved labor by creating a replacement machine, but the effect ran backward: cheap processing made cotton growing far more profitable, which expanded the demand for enslaved labor in the fields.[2]
Isn't this "centralization of power"?
No:
- The gains from the gin were no more centralized than the previous production surplus.
- The problem wasn't the gin's effect on processing. It was the indirect effect on cotton growing, and cotton growing didn't become any more centralized.
- The harm would hold regardless of centralization. The problem wasn't centralization; it was more cotton being grown.
What can AI do that is bad?
I don't have clear answers. But the cotton gin should be enough of a cautionary tale about economics and unintended consequences.
A start is "tricking people". The central motive of all economic activity is to modify human behavior, usually with a step where someone gives you money. Training a net costs money. How will you make that money back? The net will help you change people's behavior so that they give you their money. Not every way to change someone's behavior is good.
Another angle is "social change". If there's an invention that turns dirty water into clean water more cheaply, it can change the math of economic activity, and that can bubble up into indirect social changes. AI is more direct. Its main successes have been text and images: abstract goods whose sole purpose is to feed directly into people's brains. It can change how people make decisions, directly—and in fact already does, in ads.
You've probably already thought hard about applications, and whether they'll be good or bad. But a meta point is: AI applications tend to go straight through people's brains more than most innovations in, say, physics. And inventions that change people's minds are the scariest, the most volatile, and the most likely to have unexpected effects.
I'm not sure there's a single analogous bad thing, like unemployment. The bigger point is: it's scary, AI in particular is volatile, and it's very unclear whether technologies turn out to be good in retrospect, for many reasons other than "they centralize power".
A more direct analogy might be: suppose AI does what people hope it does. What happens next? It's unfair to say about the cotton gin, "Imagine the manual labor were replaced with a machine," and stop there. Specifically, prices will move and people will respond to those price changes. More generally, the environment will change, and people will adapt their own behavior to those changes.
That's not to say there are no general principles to draw. For example, any technology that makes it cheaper to turn dirty water into clean water will, first order, cause there to be more clean water. Second order, it will probably cause more areas of land to be settled, and more land settled generally means more people and economic activity. We can't foresee all the complex indirect consequences, but this seems like a reasonable rule of thumb.