The FDA has a risk model and works to prevent the risks in that model. When a whistleblower came to them to report a risk that wasn't really in their risk model, as happened in the Ranbaxy saga, the FDA was awful at reacting.
The opioid epidemic is another example where the FDA was bad at reacting to a risk outside of what it had initially considered.
Given that we don't have a good understanding of AI risk, building an organization like the FDA, one that's highly restrictive around certain anticipated risks but bureaucratic and bad at tackling others, would be a mistake.
I like the idea of an "FDA but not awful" for AI alignment. However, the FDA is pretty severely captured and inefficient, and performs pretty badly compared to a randomly selected WHO-listed authority. That may not be the right comparison set, since you'd ideally want to filter to only those in countries with significant drug development industries, but I've heard the FDA is bad even by that standard. This is mentioned in the article, but only briefly at the end.
I am confused. It seems like this report does not acknowledge that, by most reasonable perspectives, the FDA should be considered a pretty major failure, responsible for enormous harms to economic productivity and innovation.
Like, I think it's reasonable to disagree with that take and to think the FDA is good, but somehow completely ignoring the question of FDA efficacy seems kinda crazy.
The report is focused on preventing harms from technology to the people using or affected by it.
It uses the FDA's premarket approval mandate and other processes as examples of what could be used for AI.
Restrictions on economic productivity and innovation are a fair point of discussion. I have my own views on this: generally, I think the market neglects the negative asymmetry around new scalable products being able to do massive harm. I'm glad the FDA exists to counteract that.
As one example, though, the FDA's slow response in ramping up COVID vaccines during the pandemic is questionable. I get the sense there are a lot of problems with bureaucracy, and also industry capture, at the FDA.
The report does not focus on that, though.
From the report:
The FDA model offers a powerful lesson in optimizing regulatory design for information production, rather than just product safety. This is urgently needed for AI given the lack of clarity on market participants and the structural opacity in AI development and deployment.
→ The FDA has catalyzed and organized an entire field of expertise that has enhanced our understanding of pharmaceuticals, creating and disseminating expertise across stakeholders far beyond understanding incidents in isolation. AI is markedly opaque in contrast: mapping the ecosystem of companies and actors involved in AI development (and thus subject to any accountability or safety interventions) is a challenging task absent regulatory intervention.
→ This information production function is particularly important for AI, a domain where the difficulty (even impossibility) of interpretability and explainability remains a pressing challenge for the field and where key players in the market are incentivized against transparency. Over time, the FDA's interventions have expanded the public's understanding of how drugs work by ensuring firms invest in research and documentation to comply with a mandate to do so. Prior to the existence of the agency, much of the pharmaceutical industry was largely opaque, in ways that bear similarities to the AI market.
→ Many specific aspects of information exchange in the FDA model offer lessons for thinking about AI regulation. For example, in the context of pharmaceuticals, there is a focus on multi-stakeholder communication that requires ongoing information exchange between staff, expert panels, patients, and drug developers. Drug developers are mandated to submit troves of internal documentation, which the FDA reformats for the public.
→ The FDA-managed database of adverse incidents, clinical trials and guidance documentation also offers key insights for AI incident reporting (an active field of research). It may motivate shifts in the AI development process, encouraging beneficial infrastructures for increasing transparency of deployment and clearer documentation.