OpenAI just released a public announcement detailing how they caught and disrupted several cases of ongoing misuse of their models by state-affiliated threat actors, including some known to be affiliated with North Korea, Iran, China, and Russia.

This is notable because it provides tangible evidence of several misuse risks that many people in AI Safety had flagged in the past (such as the use of LLMs to aid in developing spear-phishing campaigns), and it associates them with malicious state-affiliated groups.

The specific findings:

Based on collaboration and information sharing with Microsoft, we disrupted five state-affiliated malicious actors: two China-affiliated threat actors known as Charcoal Typhoon and Salmon Typhoon; the Iran-affiliated threat actor known as Crimson Sandstorm; the North Korea-affiliated actor known as Emerald Sleet; and the Russia-affiliated actor known as Forest Blizzard. The identified OpenAI accounts associated with these actors were terminated.

These actors generally sought to use OpenAI services for querying open-source information, translating, finding coding errors, and running basic coding tasks. 

Specifically: 

  • Charcoal Typhoon used our services to research various companies and cybersecurity tools, debug code and generate scripts, and create content likely for use in phishing campaigns.
  • Salmon Typhoon used our services to translate technical papers, retrieve publicly available information on multiple intelligence agencies and regional threat actors, assist with coding, and research common ways processes could be hidden on a system.
  • Crimson Sandstorm used our services for scripting support related to app and web development, generating content likely for spear-phishing campaigns, and researching common ways malware could evade detection.
  • Emerald Sleet used our services to identify experts and organizations focused on defense issues in the Asia-Pacific region, understand publicly available vulnerabilities, help with basic scripting tasks, and draft content that could be used in phishing campaigns.
  • Forest Blizzard used our services primarily for open-source research into satellite communication protocols and radar imaging technology, as well as for support with scripting tasks.
Comments (2):

Soooo... they caught and disrupted use by "state-affiliated threat actors" associated with a bunch of countries at odds with the US, but not any of the US' allies?

What an interesting coincidence.

Spoofing and false-flag attacks are the name of the game here. We don't actually know that the election bots in 2016 were Russian, only that American agencies selected Russia as the target of the big public accusation. Authoritarian regimes regularly blame Western intelligence agencies for all sorts of domestic problems in order to legitimize their rule and deflect attention from what is actually an embarrassing internal conflict; it wouldn't be surprising if it often went both ways.

Notably, Microsoft contributed substantially, even though Microsoft is itself a state-affiliated threat actor. Microsoft could have been behind all five of these, and I doubt OpenAI would have had any chance of finding out on its own.