For me, the biggest and most tangible risks from AI come from systems (agent or tool AIs) that are connected to the real world and can influence human incentives, especially those that operate at a global level.
If a Tool AI is connected to the real world and able to influence humans through strong incentives, wouldn't that be a very high risk even if it never becomes full AGI?
There are real-life examples of this happening right now, most notoriously AIs used for trading financial assets. These AIs optimize for profit by playing with one of the strongest human incentives: money. Worldwide changes in financial incentives can trigger bankruptcies, bank runs, and currency and economic collapses.
I am not claiming a single AI will do this (although it might, if given enough resources), but we are far from understanding what happens when multiple competing AIs try to outperform one another on the financial markets while each captures as much profit as it can.
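To make the multi-agent concern concrete, here is a minimal, purely illustrative sketch in Python. It is not a model of any real trading system: the momentum rule, the agent count, and every parameter are made up. The point is only that many simple profit-chasing agents reacting to the same price signal feed back on each other and amplify swings far beyond the underlying noise.

```python
import random

random.seed(0)

N_AGENTS = 50      # number of competing trading agents (arbitrary)
STEPS = 200        # simulated time steps (arbitrary)
price = 100.0
history = [price]

# Each agent follows a naive momentum rule: buy if the price just rose,
# sell if it just fell, scaled by a randomly drawn "aggressiveness".
aggressiveness = [random.uniform(0.5, 2.0) for _ in range(N_AGENTS)]

for t in range(STEPS):
    trend = history[-1] - history[-2] if len(history) > 1 else 0.0
    # Net demand: every agent independently chases the same signal,
    # so their individually reasonable strategies reinforce each other.
    demand = sum(a * (1 if trend > 0 else -1 if trend < 0 else 0)
                 for a in aggressiveness)
    noise = random.gauss(0, 0.5)  # small exogenous price noise
    price = max(1.0, price + 0.01 * demand + noise)
    history.append(price)

print(f"start={history[0]:.2f} end={history[-1]:.2f} "
      f"max={max(history):.2f} min={min(history):.2f}")
```

Run it a few times with different seeds: even with tiny random noise, the shared signal produces large, self-reinforcing trends, a crude stand-in for the herding dynamics described above.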
A second, less dangerous kind are the AIs that optimize for engagement (e.g. the Facebook/YouTube/Twitter/Instagram/TikTok feeds). The risk here is that maximizing engagement means capturing the maximum amount of human attention, which is a zero-sum game: attention captured by a feed is attention taken away from other real-life activities (like studying, researching, bonding, or helping others). Add to these AIs the capability to create content (GPT-3/DALL-E) and you might end up with a tool for enslaving human attention.
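A toy sketch of the ranking logic makes the zero-sum point explicit. Everything here is invented for illustration: the item names are hypothetical, and "predicted_watch_seconds" stands in for whatever engagement signal a real platform actually optimizes.

```python
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    predicted_watch_seconds: float  # stand-in for any engagement signal

# Hypothetical candidate items a feed could show next.
candidates = [
    Item("calm documentary clip", 40.0),
    Item("outrage-bait thread", 90.0),
    Item("friend's photo", 25.0),
    Item("autoplay cliffhanger", 120.0),
]

# Greedy engagement maximization: rank purely by predicted attention
# captured. Every second won here is a second taken from some other
# activity, which is the zero-sum dynamic described above.
feed = sorted(candidates, key=lambda it: it.predicted_watch_seconds,
              reverse=True)

for rank, item in enumerate(feed, 1):
    print(f"{rank}. {item.title} ({item.predicted_watch_seconds:.0f}s)")
```

Notice that nothing in the objective cares what the content is or does to the user; the cliffhanger and the outrage bait win simply because they hold attention longest.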
These things are happening right now, under our noses, and we are largely unaware of the damage they may already be causing and could cause in the future.