I'd like to have a clearer picture of the domains in which AI systems have already been deployed - particularly those in which they are having the largest impacts on the world, and what those impacts are.
Some reasons why this might be useful:
- AI systems don't (currently) get deployed at large scale overnight (due to things like caution, bureaucracy, and manufacturing bottlenecks), and AI systems that are highly useful in particular domains don't (currently) get developed overnight. So, all else being equal, the domains in which AI systems are being deployed today seem more likely to be the first domains in which more powerful versions of those systems begin to have large-scale/transformative impacts. Knowing which domains these are seems useful, because coping with the transformative impacts of these particular future powerful AI systems seems like a more urgent problem than coping with the impacts of future powerful AI systems in general (this is the main reason I'm interested).
- It could help ground concerns about future AI systems more, which could encourage:
- AI researchers to care more about the societal impacts of their work, and
- society at large to take AI risk more seriously.
- (I think having answers to this question is likely to be useful in slow takeoff worlds, but much less useful/interesting in fast takeoff worlds.)
I'm curious to hear people's thoughts - either ideas about specific impacts, or ways I could try to investigate this despite the apparent difficulty of finding out how AI systems are actually being used for commercial, political and military applications.
My current best guesses:
- selecting content on social media (due to the huge scale)
- surveillance in China, incl. of Uighurs (due to the obvious harm and large scale)
Other contenders:
- driverless cars (in the near future)
- aiding and making decisions about e.g. employment, loans, and criminal sentencing (but how widely are AI systems actually deployed here, and are they really having a big impact?)
- delivering personalised advertising
- medical diagnosis
- algorithmic trading
Domains I'm uncertain about (but in which Allan Dafoe suggested AI systems are increasingly being deployed):
- by authoritarian governments to shape online discourse (are they really using AI systems, and if so, what kind of algorithms?)
- for autonomous weapons systems (really? are we talking about DL systems, or more like plain old control systems for e.g. missile guidance?)
- for cyber tools and autonomous cyber capabilities?
- in education?
Thanks for your reply! This is interesting, though I'm a little confused by some parts of it.
Is the following a good summary of your main point? A key feature of your model of AI development/deployment is that there will be many shared components of AI systems, perhaps owned by 1-3 companies, that get licensed out to people who want to use them. This is because many problems you want to solve with AI systems can be decomposed into the same kinds of subproblems, so you can reuse components that solve those subproblems many times, and there's extra incentive to do this because designing those components is really hard. One implication is that progress will be faster than in a world where components are separately designed by different companies, because there is more training data per component, so components will be able to generalise more quickly.
I guess I'm confused about whether there is so much overlap between subproblems that this is how things will go.
Hmm, it seems like this is a subproblem that only a smallish proportion of companies will want to solve (e.g. companies providing police surveillance software, contact tracing software, etc.) - really, not that many economically relevant tasks involve facial recognition. But maybe I'm missing something?
Hmm, just because the abstract form of your algorithm is the same as everyone else's doesn't mean you can reuse the same algorithm... In some sense, it's trivial that the abstract form of all algorithms is the same: [inputs] -> [outputs]. But that doesn't mean the same algorithm can be reused to solve all problems.
The fact that companies exist seems like good evidence that economically relevant problems can be decomposed into subproblems that individual agents with human-level intelligence can solve. But I'm pretty uncertain whether economically relevant problems can be decomposed into subproblems that narrower systems can solve. Maybe there's an argument in your answer that I'm missing?