OpenAI found 'state-affiliated malicious actors' using ChatGPT for cyberattacks

OpenAI and Microsoft said they shut down state-affiliated accounts using their AI tools to carry out cyberattacks

OpenAI CEO Sam Altman testifies before a Senate Judiciary Privacy, Technology & the Law Subcommittee hearing titled ‘Oversight of A.I.: Rules for Artificial Intelligence’ on Capitol Hill in Washington, U.S., May 16, 2023.
Photo: Elizabeth Frantz (Reuters)

OpenAI and Microsoft said Wednesday that they found and shut down OpenAI accounts belonging to “five state-affiliated malicious actors” using AI tools, including ChatGPT, to carry out cyberattacks.

The shut-down accounts were associated with the Chinese-affiliated Charcoal Typhoon (CHROMIUM) and Salmon Typhoon (SODIUM), the Iran-affiliated Crimson Sandstorm (CURIUM), the North Korea-affiliated Emerald Sleet (THALLIUM), and the Russia-affiliated Forest Blizzard (STRONTIUM), according to OpenAI and Microsoft Threat Intelligence.


“These actors generally sought to use OpenAI services for querying open-source information, translating, finding coding errors, and running basic coding tasks,” OpenAI said in a statement. OpenAI said its “findings show our models offer only limited, incremental capabilities for malicious cybersecurity tasks.”


Forest Blizzard, a Russian military intelligence actor, used large language models (LLMs) to research “various satellite and radar technologies that may pertain to conventional military operations in Ukraine,” Microsoft said, and to support tasks like manipulating files “to potentially automate or optimize technical operations.”


Both Charcoal Typhoon and Salmon Typhoon, the latter of which has “a history of targeting US defense contractors, government agencies, and entities within the cryptographic technology sector,” used LLMs to run queries on global intelligence agencies and various companies, to generate code and identify coding errors, and to translate.

Crimson Sandstorm, Emerald Sleet, and both Chinese-affiliated actors used OpenAI’s tools to generate content for phishing campaigns, OpenAI said.


“Cybercrime groups, nation-state threat actors, and other adversaries are exploring and testing different AI technologies as they emerge, in an attempt to understand potential value to their operations and the security controls they may need to circumvent,” Microsoft said.

Although neither company’s research found “significant attacks” by actors using their closely monitored tools, OpenAI and Microsoft laid out additional approaches to mitigating the growing risk of threat actors using AI to carry out similar tasks.


Both companies said they would continue monitoring and disrupting activity associated with threat actors, work with industry partners to share information about known malicious use of AI, and keep the public and stakeholders informed about how their AI tools are being misused.