OpenAI is tackling an increasingly pressing issue: fake images generated by its own technology.
The AI company announced on Tuesday that it is launching a tool to detect content created by its DALL-E 3 text-to-image generator. The company also said it’s opening applications for a first batch of testers for its image detection classifier, which “predicts the likelihood that an image was generated” by DALL-E 3.
“Our goal is to enable independent research that assesses the classifier’s effectiveness, analyzes its real-world application, surfaces relevant considerations for such use, and explores the characteristics of AI-generated content,” OpenAI said in a statement.
OpenAI said internal testing of an early version of the classifier showed high accuracy in distinguishing non-AI-generated images from content created by DALL-E 3. The tool correctly identified 98% of DALL-E 3 generated images, while less than 0.5% of non-AI-generated images were incorrectly flagged as AI-generated.
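To make those two figures concrete: the 98% number is a true-positive rate and the under-0.5% number a false-positive rate, each measured over a labeled test set. The sketch below uses made-up likelihood scores and a hypothetical 0.5 decision threshold (nothing here reflects OpenAI's actual classifier) purely to show how such rates are computed.

```python
# Toy illustration of the two rates OpenAI reports. The scores, labels, and
# threshold below are invented; this is not OpenAI's classifier.

def classify(score: float, threshold: float = 0.5) -> bool:
    """Flag an image as AI-generated when its likelihood score meets the threshold."""
    return score >= threshold

# (likelihood score, image really came from DALL-E 3) pairs standing in for a labeled test set.
labeled_scores = [
    (0.97, True), (0.91, True), (0.88, True), (0.12, True),    # DALL-E 3 images
    (0.03, False), (0.08, False), (0.61, False), (0.02, False),  # non-AI images
]

flagged_ai = [classify(s) for s, is_ai in labeled_scores if is_ai]
flagged_real = [classify(s) for s, is_ai in labeled_scores if not is_ai]

true_positive_rate = sum(flagged_ai) / len(flagged_ai)        # the reported 98% is this rate on OpenAI's test set
false_positive_rate = sum(flagged_real) / len(flagged_real)   # the reported <0.5% is this rate on OpenAI's test set

print(f"DALL-E 3 images correctly detected: {true_positive_rate:.0%}")
print(f"non-AI images wrongly flagged: {false_positive_rate:.0%}")
```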
Edits such as compression, cropping, and changes in saturation have “minimal impact” on the tool’s performance, according to OpenAI. However, the company said other types of modifications “can reduce performance.” It also found that the tool is less reliable at distinguishing DALL-E 3 images from content generated by other AI models.
“Election concern is absolutely driving a bunch of this work,” David Robinson, head of policy planning at OpenAI, told the Wall Street Journal. “It’s the number one context of concern that we hear about from policymakers.”
A poll of U.S. voters by the AI Policy Institute in February found 77% of respondents said that when it comes to AI video generators — such as OpenAI’s Sora, which is not available to the public yet — putting guardrails and safeguards in place to prevent misuse is more important than making the models more widely available. More than two-thirds of respondents said AI model developers should be held legally responsible for any illegal activity.
“That really points to how the public is taking this tech seriously,” Daniel Colson, the founder and executive director of AIPI, said. “They think it’s powerful. They’ve seen the way that technology companies deploy these models and algorithms and technologies, and it leads to completely society-transforming results.”
In addition to launching its detection tool, OpenAI said Tuesday it is joining the Steering Committee of C2PA, or the Coalition for Content Provenance and Authenticity, which develops a widely used standard for certifying digital content. The company said it started adding C2PA metadata to images generated and edited by DALL-E 3 in ChatGPT and its API earlier this year, and will integrate C2PA metadata for Sora when it is widely released.
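C2PA provenance data is embedded in the image file itself, so anyone can check for it. The sketch below is one way to do that, assuming the open-source `c2patool` command-line utility from the C2PA/Content Authenticity Initiative ecosystem is installed and that passing it an image path prints the embedded manifest as JSON; exact flags and output format may differ by version, and this is not an OpenAI tool.

```python
# Minimal sketch: check an image for embedded C2PA provenance metadata by
# shelling out to the open-source `c2patool` CLI. Assumes the tool is installed
# and prints the manifest store as JSON for a plain image-path invocation;
# behavior may vary across versions.
import json
import subprocess
import sys


def read_c2pa_manifest(image_path: str) -> dict | None:
    """Return the C2PA manifest store embedded in the image, or None if absent."""
    result = subprocess.run(
        ["c2patool", image_path],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        # Nonzero exit is treated here as "no provenance data found".
        return None
    return json.loads(result.stdout)


if __name__ == "__main__":
    manifest = read_c2pa_manifest(sys.argv[1])
    if manifest is None:
        print("No C2PA metadata found.")
    else:
        print(json.dumps(manifest, indent=2))
```

One caveat worth keeping in mind: C2PA metadata can be stripped when an image is re-encoded, screenshotted, or passed through platforms that discard it, so its absence does not show that an image is not AI-generated.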