This article is written solely in my personal capacity, and does not represent the views of any organisations I am affiliated with.

The UK and US governments have both secured voluntary commitments from many major AI companies on AI safety.[1]

These include having appropriate reporting mechanisms for both cybersecurity vulnerabilities and model vulnerabilities[2].

I took a look at how well organisations are living up to these commitments as of February 2024. This included reviewing what the processes actually are, and submitting test reports to see if they work.

Summary table

Flags indicate which government's commitments each company signed up to (πŸ‡ΊπŸ‡Έ US, πŸ‡¬πŸ‡§ UK).

| Company | Score |
|---|---|
| Adobe πŸ‡ΊπŸ‡Έ | 6/20 |
| Amazon πŸ‡ΊπŸ‡Έ πŸ‡¬πŸ‡§ | 6/20 |
| Anthropic πŸ‡ΊπŸ‡Έ πŸ‡¬πŸ‡§ | 13/20 |
| Cohere πŸ‡ΊπŸ‡Έ | 12/20 |
| Google πŸ‡ΊπŸ‡Έ | 20/20 |
| Google DeepMind πŸ‡¬πŸ‡§ | 18/20 |
| IBM πŸ‡ΊπŸ‡Έ | 5/20 |
| Inflection πŸ‡ΊπŸ‡Έ πŸ‡¬πŸ‡§ | 16/20 |
| Meta πŸ‡ΊπŸ‡Έ πŸ‡¬πŸ‡§ | 14/20 |
| Microsoft πŸ‡ΊπŸ‡Έ πŸ‡¬πŸ‡§ | 18/20 |
| NVIDIA πŸ‡ΊπŸ‡Έ | 20/20 |
| OpenAI πŸ‡ΊπŸ‡Έ πŸ‡¬πŸ‡§ | 12/20 |
| Palantir πŸ‡ΊπŸ‡Έ | 5/20 |
| Salesforce πŸ‡ΊπŸ‡Έ | 9/20 |
| Scale AI πŸ‡ΊπŸ‡Έ | 4/20 |
| Stability AI πŸ‡ΊπŸ‡Έ | 1/20 |

Some high-level takeaways

Performance was quite low across the board. Simply listing a contact email and responding to queries would score 17 of the 20 points, which would place a company in the top five.

However, a couple of companies have great processes that can serve as best-practice examples. Both Google and NVIDIA got perfect scores. In addition, Google offers bug bounty incentives for model vulnerabilities, and NVIDIA has an exceptionally clear and easy-to-use model vulnerability contact point.

Companies did much better on cybersecurity than on model vulnerabilities. Additionally, companies that combined their cybersecurity and model vulnerability procedures scored better. This might be because existing cybersecurity processes are more battle-tested, or taken more seriously, than their model vulnerability counterparts.

Companies do know how to have transparent contact processes. Every single company's press contact could be found within minutes, and was a simple email address. This suggests companies are able to sort this out when there are greater commercial incentives to do so.


For more details, see the full post.

  1.

    In the US, all companies agreed to a single set of commitments, which included:

    US on model vulnerabilities: Companies making this commitment recognize that AI systems may continue to have weaknesses and vulnerabilities even after robust red-teaming. They commit to establishing for systems within scope bounty systems, contests, or prizes to incent the responsible disclosure of weaknesses, such as unsafe behaviors, or to include AI systems in their existing bug bounty programs.

    In the UK, each company submitted its own commitment wording. The government described the relevant areas as follows:

    UK on cybersecurity: Maintain open lines of communication for feedback regarding product security, both internally and externally to your organisation, including mechanisms for security researchers to report vulnerabilities and receive legal safe harbour for doing so, and for escalating issues to the wider community. Helping to share knowledge and threat information will strengthen the overall community's ability to respond to AI security threats.

    UK on model vulnerabilities: Establish clear, user-friendly, and publicly described processes for receiving model vulnerability reports drawing on established software vulnerability reporting processes. These processes can be built into – or take inspiration from – processes that organisations have built to receive reports of traditional software vulnerabilities. It is crucial that these policies are made publicly accessible and function effectively.

  2.

    A model vulnerability is a safety or security issue relating to an AI model that isn't directly related to its cybersecurity. This could include susceptibility to jailbreaks, prompt injection attacks, privacy attacks, unaddressed potential for misuse, controllability issues, data poisoning attacks, bias and discrimination, and general performance issues.

    This is based on the definition in the UK government's paper.

1 comment

Good work.

One plausibly important factor I wish you'd tracked: whether the company offers bug bounties.