Boeing MCAS (https://en.wikipedia.org/wiki/Maneuvering_Characteristics_Augmentation_System) is blamed for more than 100 deaths. How much "AI" would a similar system need to include for a similar tragedy to count as "an event precipitated by AI"?
Great point - I'm not sure whether that system contained aspects similar enough to AI to resolve such a question. This source doesn't think it counts as AI (though it doesn't provide much of an argument for this), and I can't find any reference to machine learning or AI on the MCAS page. That said, one could clearly use AI tools to develop an automated control system like this, and I don't feel well positioned to judge whether it should count.
Such scenarios are at best smoke, not fire alarms.
When I observe that there’s no fire alarm for AGI, I’m not saying that there’s no possible equivalent of smoke appearing from under a door.
What I’m saying rather is that the smoke under the door is always going to be arguable; it is not going to be a clear and undeniable and absolute sign of fire; and so there is never going to be a fire alarm producing common knowledge that action is now due and socially acceptable. ...
There is never going to be a time before the end when you can look around nervously, and see that it is now clearly common knowledge that you can talk about AGI being imminent, and take action and exit the building in an orderly fashion, without fear of looking stupid or frightened.
The article convincingly makes the weaker claim that there's no guarantee of a fire alarm, and provides several cases which support this. I don't buy the claim (which the article also tries to make) that there is no possible fire alarm, and such a claim seems impossible to prove anyway.
In any case, whether it's smoke or a fire alarm, that doesn't really address the specific question I'm asking.
AI systems find ways to completely manipulate some class of humans, e.g. by making them addicted. Arguably, this is already happening at a wider scale but to a lesser degree – people becoming "addicted" to algorithmically generated feeds.
Maybe the question could be concretized as the average amount of time people spend on their devices?
That seems like a different question, one which is only partially entangled with AI, since more screen time need not be caused by AI, and the harms are harder to evaluate (even the sign of the value of "more screen time" is probably disputed).
Some high-profile failures I think we won't get are related to convergent goals, such as acquiring computing power, deceiving humans into not editing you, etc. We'll probably get examples of this sort of thing in small-scale experiments that specialists might hear about, but if an AI that's deceptive for instrumental reasons causes $1bn in damages, I think it will be rather too late to learn our lesson.
If one or more AI systems go wrong in the near term and cause harm to humans in a way that is consistent with, or supportive of, alignment being a big deal, what might that look like?
I'm asking because I'm curious about potential fire-alarm scenarios (including things which just help make AI risks salient to the wider public), and am also looking to operationalise a forecasting question, which is currently drafted as
to allow a clear and sensible resolution.