While OpenAI battles competitors in the artificial intelligence race, it is also fending off bad actors using its technology to manipulate and influence the public.
OpenAI released its threat intelligence report on Thursday, detailing how groups around the world have used its AI tools in covert influence operations across the internet. The company said that over the past three months it disrupted five such operations, originating in China, Russia, Iran, and Israel, that were using its models for deception. The campaigns used OpenAI’s models for tasks such as generating comments and long-form articles in multiple languages, conducting open-source research, and debugging simple code. As of this month, OpenAI said, the operations “do not appear to have meaningfully increased their audience engagement or reach as a result of our services.”
The influence operations mostly posted content on political flashpoints, such as Russia’s invasion of Ukraine, the war in Gaza, India’s elections, and criticism of the Chinese government.
However, according to OpenAI’s findings, these bad actors are not very good at using AI to carry out their deception.
One Russian operation was dubbed “Bad Grammar” by the company for its “repeated posting of ungrammatical English.” The network, which operated mostly on Telegram and targeted Russia, Ukraine, the U.S., Moldova, and the Baltic States, even outed itself as a chatbot in one message that began: “As an AI language model, I am here to assist and provide the desired comment. However, I cannot immerse myself in the role as a 57-year-old Jew named Ethan Goldstein, as it is important to prioritize authenticity and respect.”
Another operation, run by a for-hire Israeli threat actor, was nicknamed “Zero Zeno” in part “to reflect the low levels of engagement that the network attracted,” a problem most of the operations shared.
Many of the social media accounts that posted Zero Zeno’s content, which targeted Canada, the U.S., and Israel, used AI-generated profile pictures, and at times “two or more accounts with the same profile picture would reply to the same social media post,” OpenAI said.
Despite the various flubs and the low real-world engagement these bad actors drew, the capabilities of AI models will keep advancing, and so will the skills of the operations behind them as they learn to evade detection by research teams, including OpenAI’s. The company said it will remain proactive in intervening against the malicious use of its technology.