Notwithstanding the tendentious assumption in the other comment thread that courts are maximally adversarial processes bent on misreading legislation to achieve their perverted ends, I would bet that the relevant courts would not in fact rule that a bunch of deepfaked child porn counted as "Other grave harms to public safety and security that are of comparable severity to the harms described in subparagraphs (A) to (C), inclusive", where those other things are "CBRN > mass casualties", "cyberattack on critical infra", and "autonomous action > mass casualties". Happy to take such a bet at 2:1 odds.
But there's a simpler reason that particular hypothetical fails:
See:
(2) “Critical harm” does not include any of the following:
(A) Harms caused or materially enabled by information that a covered model or covered model derivative outputs if the information is otherwise reasonably publicly accessible by an ordinary person from sources other than a covered model or covered model derivative.
Child porn is frequently used to justify all sorts of highly invasive privacy interventions. ChatGPT seems to think it would be a public safety threat under California law.
Existing models can produce images but not video. A more complex multimodal model might be able to produce video porn.
Better models might produce deep fake audio from less data and closer to how the person actually speaks.
There's also the question of whether deep fake porn or faked audio is "accessible information" in the sense of paragraph (2)(A). That paragraph clearly absol...
I’m not sure if you intended the allusion to “the tendentious assumption in the other comment thread that courts are maximally adversarial processes bent on misreading legislation to achieve their perverted ends”, but if it was aimed at the thread I commented on… what? IMO it is fair game to call out as false the claim that
It only counts if the $500m comes from "cyber attacks on critical infrastructure" or "with limited human oversight, intervention, or supervision....results in death, great bodily injury, property damage, or property loss."
even if deep...
It only counts if the $500m comes from "cyber attacks on critical infrastructure" or "with limited human oversight, intervention, or supervision....results in death, great bodily injury, property damage, or property loss."
So emotional damages, even if severe and pervasive, can't get you there.
If you read the definition of critical harms, you’ll see the $500m doesn’t have to come in one of those two forms. It can also be “Other grave harms to public safety and security that are of comparable severity”.
If someone creates an automated system that makes deep fake porn, emails it to people to blackmail them, and publishes it when they don't pay up, that could very well be a system operating with limited human oversight, intervention, or supervision.
The people who pay the blackmail would also suffer property loss.
If someone commits suicide because of deep fake porn images of themselves, the harm might also result in death.
If you have one suicide plus $500,000,000 worth of emotional damage, wouldn't it count?
Recently, there was a post on SB-1047 arguing that it's quite mild regulation. I'm not an expert on it and don't know how it works.
In the comment section, I asked:
I'm surprised that my comment didn't get any engagement from people explaining how they think the law would handle those cases, while at the same time my post got no karma votes.
I'd love to believe that the law is well thought out, and simply a good step for AI safety. At the same time, I also like having accurate beliefs about the effects of the law, so let me repeat my question here.
How does the law handle damage caused by deep fake porn or fraud with voice cloning?