Recently, there was a post on SB-1047 arguing that it's quite mild regulation. I'm not an expert on it and don't know the details of how it works.
In the comment section I was asking:
Why wouldn't deepfake porn, or voice-cloning technology used to commit fraud, be powerful enough to materially contribute to critical harm?
There are cases of fraud that could cause $500,000,000 in damages.
Given how juries decide on damages, a model used to create child porn depicting thousands of children could be argued to cause $500,000,000 in damages as well, especially when coupled with attempts to extort the children.
I'm surprised that my comment got no engagement explaining how people think the law will handle those cases, while at the same time my post received no karma votes.
I'd love to believe that the law is well thought out and simply a good step for AI safety. At the same time, I also like having accurate beliefs about the effects of the law, so let me repeat my question here:
How does the law handle damage caused by deepfake porn or voice-cloning fraud?
I’m not sure if you intended the allusion to “the tendentious assumption in the other comment thread that courts are maximally adversarial processes bent on misreading legislation to achieve their perverted ends”, but if it was aimed at the thread I commented on… what? IMO it is fair game to call out as false the claim that
even if deepfake harms wouldn’t fall under this condition. Local validity matters.
I agree with you that deepfake harms are unlikely to be direct triggers for the bill’s provisions, for reasons similar to the ones you mentioned.