Here's the most succinct and high-information thing I can contribute.
Right now, each of the AI systems you describe, if it uses deep learning at all, uses a hand-rolled solution.
You may notice that the general problems these AI systems are trying to solve all take very similar forms: some [measurements] -> [some desired eventual outcome or desired classification]. You then need to subdivide the problem into separate submodules, and for many problems those submodules are going to be the same as the ones everyone else uses to solve the problem.
For example, you are going to want to classify and segment the images from a video feed into a state space of [identity, locations]. So does everyone else.
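To make the shared-submodule point concrete, here is a minimal sketch of the [measurements] -> [identity, locations] pipeline shape described above. All the names and types (`Detection`, `perceive`, the stub segmenter/classifier) are hypothetical illustrations, not any particular vendor's API; the point is that the interfaces between submodules look the same across systems even when the models behind them differ.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

# Hypothetical state-space element for the [identity, locations] output.
@dataclass
class Detection:
    identity: str               # what the object is classified as
    location: Tuple[int, int]   # (x, y) position in the frame

# Each submodule maps one representation to the next. These signatures are
# the reusable part: everyone's segmenter and classifier slot in the same way.
Segmenter = Callable[[bytes], List[bytes]]   # raw frame -> object crops
Classifier = Callable[[bytes], Detection]    # crop -> [identity, location]

def perceive(frame: bytes, segment: Segmenter, classify: Classifier) -> List[Detection]:
    """Shared pipeline shape: measurements -> state space of [identity, locations]."""
    return [classify(crop) for crop in segment(frame)]

# Stubbed usage: a segmenter yielding one crop, a classifier labeling it.
detections = perceive(
    b"frame-bytes",
    segment=lambda frame: [b"crop"],
    classify=lambda crop: Detection(identity="trash", location=(0, 0)),
)
```

Swapping in a better classifier upgrades every system built on the same interface, which is the reuse argument in miniature.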
Similarly, at a broader level, even if some of your algorithms have a different state space, the form of your algorithm is the same as everyone else's.
And when you talk about your higher-level graph - especially for realtime control - your system architecture is actually going to be identical to everyone else's realtime system. You have a clock, you have deadlines, you have a directed graph, you have safety requirements. This code in particular is really expensive and difficult to get right - exactly the kind of thing you want to share with everyone else.
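The realtime skeleton above (clock, deadlines, directed graph, safety fallback) can be sketched in a few lines. This is a toy illustration under assumed names (`run_cycle`, `on_miss`), not a real RTOS scheduler - a production version would use priority-aware preemption and worst-case execution-time analysis - but it shows why the structure is identical across systems:

```python
import time
from typing import Callable, Dict, List

Task = Callable[[], None]

def run_cycle(graph: Dict[str, List[str]], tasks: Dict[str, Task],
              deadline_s: float, on_miss: Task) -> bool:
    """Run one clock tick: execute tasks in dependency order under a deadline.

    graph maps each task name to the names it depends on; on_miss is the
    safety fallback invoked if the deadline slips mid-cycle.
    """
    start = time.monotonic()            # the clock
    visited = set()
    order: List[str] = []

    def visit(node: str) -> None:       # topological sort of the directed graph
        if node in visited:
            return
        visited.add(node)
        for dep in graph.get(node, []):
            visit(dep)
        order.append(node)

    for node in tasks:
        visit(node)

    for node in order:
        if time.monotonic() - start > deadline_s:
            on_miss()                   # safety requirement: fail to a safe state
            return False
        tasks[node]()
    return True
```

Everything here - the tick loop, the dependency ordering, the deadline check, the safe-state fallback - is domain-independent, which is why it is such a natural candidate for platforming.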
So the next major step forward is platforming. There will be some convergence to a few common platforms (and probably a round of platform wars that ultimately ends with 1-3 winners, like every other format and tech war in the past). The platforms will handle:
a. Training and development of common components
b. Payment and cross-licensing agreements
c. Model selection and design
d. Compiling models to target-specific bytecode
e. Systems code for realtime system graphs
f. RTOS, driver components for realtime systems
g. (c&d) will have to be shared in common across a variety of neural network compute platforms. There are about 100 of them now; Google's "TPUs" are one of the earlier ones.
h. Probably housekeeping like DRM, updates, etc. will end up getting platformed as well.
All this reuse means that larger and larger parts of AI systems will be shared with every other AI system. Moreover, common elements - ones solving the same problem - will automatically get better over time as the shared parts get updated. This is how you get to a really smart factory robot that doesn't get fooled by a piece of gum someone dropped: it classifies it as [trash] because it's sharing that part of the system with other robotic systems.
There is no economic justification to individually make that robot able to ID unexpected pieces of debris, but if it's licensing a set of shared components that have this feature baked in, it will have that as well.
As a side note, this is why talk of a possible coming "AI winter" is bullshit. We may not reach AI sentience for many more decades, but there is still enormous room for forward progress.
Thanks for your reply! This is interesting, though I'm a little confused by some parts of it.
Is the following a good summary of your main point? A main feature of your model of AI development/deployment is that there will be many shared components of AI systems, perhaps owned by 1-3 companies, that get licensed out to people who want to use them. This is because many problems you want to solve with AI systems can be decomposed into the same kinds of subproblems, so you can reuse components that solve those subproblems many times, and there's extra incentiv...
Personal AI assistants seem to have one of the largest impacts (or at least "presence") mainly due to the number of users. The impact per person seems small - making life slightly more convenient and productive, maybe. Not sure if there is actually much impact on productivity. I wonder if there is any research on this. I haven't looked into it at all.
Relatedly, chatbots are certainly used a lot, but I'm uncertain about their current impacts beyond personal entertainment and wellbeing (and uncertain about the direction of the impact on wellbeing).
"What 2026 looks like" has a few relevant facts on the current impacts, and interesting speculation about the future impacts of personal assistants and chatbots. E.g. facts:
"in China in 2021 the market for chatbots is $420M/year, and there are 10M active users. This article claims the global market is around $2B/year in 2021 and is projected to grow around 30%/year."
I don't feel surprised by those stats, but I also hadn't really considered how big the market is.
The State of AI report is by far the best resource I've come across so far. Reading it led me to significantly update my models about how much ML systems are already being deployed. I was particularly surprised by military applications, e.g.
Also, AI-based facial recognition is in active use by governments in 50% of the world
I'd like to have a clearer picture of the domains in which AI systems have already been deployed - particularly those in which they are having the largest impacts on the world, and what those impacts are.
Some reasons why this might be useful:
I'm curious for people's thoughts, either for ideas about specific impacts, or for ways I can try to investigate this despite apparent difficulties in finding out how AI systems are actually being used for commercial, political and military applications.
My current best guesses:
Other contenders:
Domains I'm uncertain about (but in which Allan Dafoe suggested AI systems are increasingly being deployed):