HackGPT

Malicious cyber-attackers have got their hands on AI. What happens next?

Hi, Quartz members,

No one has explained where the names came from, but they’re splendid: Charcoal Typhoon, Salmon Typhoon, Crimson Sandstorm, Emerald Sleet, and Forest Blizzard.

Though they sound like exotic weather phenomena, they are in fact state-affiliated malicious users of OpenAI’s services. The technical term for them, in OpenAI’s statement from earlier this week, is “threat actors.” All their accounts have now been disabled—but not before cybersecurity experts got a measure of who they were and what kind of damage they sought to do. (More on that below.)

The nature of these threat actors isn’t unfamiliar. You may remember them from various prequels: Russian cyber-interference in the 2016 US election, for example, or the Iran-based ransomware attacks in 2021. But their abilities have now been greatly enhanced by the kind of large language models (LLMs) and generative artificial intelligence tools that companies like OpenAI build.

Already, cybersecurity experts have identified some of the ways in which hackers and malicious actors stand to gain from LLMs. The volumes of data that an AI model can scan are exponentially larger than anything a human or a second-order software program can handle. “AI algorithms can be trained to generate polymorphic malware that constantly mutates its code, making it challenging for antivirus software to detect and block,” Amani Ibrahim, a cybersecurity expert, wrote on LinkedIn last September. “[A]dversarial attacks can bypass security measures, such as intrusion detection systems or malware scanners, by generating malicious inputs that are indistinguishable from legitimate ones.”

It’s as if she were peeking into the future, because that is exactly what Salmon Typhoon and the four other entities were doing in their misuse of OpenAI’s services.


ALL THE MISCHIEF

What were these meteorological threat actors attempting to achieve?

🌲 Forest Blizzard is a Russian military intelligence actor that has targeted defense, government, nonprofit, and IT organizations over the course of the Ukraine war. It used LLMs to manipulate files and streamline its technical operations, but also to find out more about satellite capabilities and radar technologies.

🐠 Salmon Typhoon, a Chinese threat actor, deploys malware that gives it remote access to compromised systems. It used LLMs to “translate technical papers, retrieve publicly available information on multiple intelligence agencies and regional threat actors, assist with coding, and research common ways processes could be hidden on a system,” OpenAI said.

◼️ Charcoal Typhoon, another Chinese actor, used OpenAI services for research and code debugging, and to create content to be used in phishing campaigns.

🌧️ Emerald Sleet is a North Korean entity that sends out phishing emails to prominent experts on North Korea, with a view to compromising or gathering intelligence from them. It used OpenAI LLMs to identify such experts, as well as to draft content for its phishing expeditions.

🟥 Crimson Sandstorm, an Iranian entity connected with the Islamic Revolutionary Guard Corps, also generated content for phishing emails. Additionally, it used LLMs to research ways in which malware might go undetected.


ONE BIG NUMBER

300: The number of unique threat actors that Microsoft Threat Intelligence tracks. This includes 160 nation-state actors and 50 ransomware groups.


THE GUN GOES OFF

Perhaps one day we’ll look back upon this week’s revelations about the typhoons, sandstorm, sleet, and blizzard as something like the starting pistol: the beginning of an AI arms race. As cyber-attackers co-opt complex AI models, companies like OpenAI will have to build more complex models still to come out on top—in turn prompting cyber-attackers to co-opt those... You get the idea.

This is, of course, excellent business for the companies themselves. Since they will also be building solutions to these cybersecurity problems, they will suddenly be in the business of supplying the disease as well as the cure, so to speak. And this is even before LLMs have been used by malicious actors in any truly original way. Right now, as Joseph Thacker, an AI engineer, told the cybersecurity news website Dark Reading, hackers are merely using AI to speed up their processes and expand their scale.

“If a threat actor found a novel use case, it could still be in stealth and not detected by these companies yet, so it’s not impossible,” Thacker said. “I have seen fully autonomous AI agents that can ‘hack’ and find real vulnerabilities, so if any bad actors have developed something similar, that would be dangerous.”

As of now, OpenAI said in its statement, its GPT-4 model offers only “limited, incremental capabilities for malicious cybersecurity tasks beyond what is already achievable with publicly available, non-AI powered tools.” It left unsaid the obvious point: there will be a GPT-5, then a GPT-6, and beyond. Eventually, without a doubt, someone will find the truly novel use case for AI-driven cyberattacks that Thacker described.


ONE 💻 THING

What AI taketh away with one hand, it giveth with the other. Chinese hackers have been trying to burrow into American transportation and infrastructure networks in stealthy ways—and AI has helped US intelligence investigators track these attacks. These digital incursions, one official said in January, would otherwise have been too difficult for a human to spot. Their particular modus operandi is to pass as “ordinary” traffic on the target networks, but they leave patterns all the same. And if there’s one thing AI tools do well, it’s spotting patterns; it’s at the heart of how LLMs learn and what they’re trained to do. But such AI-led cybersecurity tools are expensive, and experts are concerned that some companies and governments won’t be able to afford them and others will—creating a new “AI poverty line” in the process.
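To make the pattern-spotting idea concrete, here is a minimal sketch in Python—a hypothetical illustration, not any agency’s actual tooling—of the technique at its simplest: flagging the hours whose network traffic strays too far from a statistical baseline. Production systems learn far subtler baselines than a mean and a standard deviation, but the underlying principle is the same.

import numpy as np

def flag_anomalies(hourly_gb, threshold=2.0):
    # Flag hours whose traffic deviates more than `threshold`
    # standard deviations from the mean of the observed window.
    hourly_gb = np.asarray(hourly_gb, dtype=float)
    z_scores = np.abs(hourly_gb - hourly_gb.mean()) / hourly_gb.std()
    return np.where(z_scores > threshold)[0]

# Hypothetical traffic log: mostly "ordinary" hours, one spike.
traffic = [1.0, 1.1, 0.9, 1.2, 1.0, 8.5, 1.1]  # GB per hour
print(flag_anomalies(traffic))  # -> [5], the anomalous hour

An intruder trying to pass as “ordinary” traffic is, in effect, trying to keep those deviations below the threshold—and the defender’s AI is trying to learn a baseline rich enough that they can’t.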


Thanks for reading! And don’t hesitate to reach out with comments, questions, or topics you want to know more about.

Have a weekend free of malice,

— Samanth Subramanian, Weekend Brief editor