![a sign for Meta hangs on the side of a building](https://i.kinja-img.com/image/upload/c_fit,q_60,w_645/ca2b7740b565624c2e5422a7fad15ef4.jpg)
Meta will start labeling AI-generated images on Facebook, Instagram, and Threads in the coming months, citing “a number of important elections” coming up this year around the world.
More than 50 nations, home to half of the global population, will hold elections in 2024. Ahead of those votes, all eyes are on how Meta will tackle interference and disinformation across its platforms.
Nick Clegg, Meta’s president of global affairs, said in a statement that the tech giant is working with its partners on tools that can detect AI-generated content through “invisible markers” on images, such as watermarks and metadata. AI companies, including OpenAI and Midjourney, are starting to add such metadata to content generated with their tools, and Meta already labels photorealistic content created with its own AI feature across its platforms.
To combat the possibility that users will remove these invisible markers, Clegg said Meta is also developing techniques to automatically identify AI-generated content, and is searching for ways to make it more difficult for people to remove or alter the markers.
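For illustration only (this is not Meta’s detection pipeline, which Clegg did not describe in technical detail): the sketch below shows how format-level image metadata of the kind he mentions can be read. It assumes Python with the Pillow library, a placeholder file path, and a hypothetical list of marker strings.

```python
from PIL import Image
from PIL.ExifTags import TAGS

# Illustrative marker strings to search for; not an official list from
# Meta, OpenAI, or Midjourney.
AI_HINTS = ("c2pa", "credential", "generated", "openai", "midjourney", "dall-e")

def find_ai_markers(path: str) -> list[str]:
    """Return metadata entries suggesting an image may be AI-generated."""
    hits = []
    with Image.open(path) as img:
        # Format-level metadata (e.g., PNG text chunks, JPEG app segments).
        for key, value in img.info.items():
            entry = f"{key}={value}"
            if any(h in entry.lower() for h in AI_HINTS):
                hits.append(entry)
        # EXIF tags, where present (common in JPEG/TIFF).
        for tag_id, value in img.getexif().items():
            entry = f"{TAGS.get(tag_id, tag_id)}={value}"
            if any(h in entry.lower() for h in AI_HINTS):
                hits.append(entry)
    return hits

if __name__ == "__main__":
    # "example.png" is a placeholder path, not a file from this article.
    for entry in find_ai_markers("example.png"):
        print(entry)
```

Because this kind of metadata lives in ordinary file fields, it can be stripped by simply re-encoding the image, which is why Clegg says Meta is also working on markers that are harder to remove or alter.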
Clegg pointed out the labeling system’s limits for AI-generated audio and video from other companies, which currently don’t embed invisible markers. He added that Meta will allow users to disclose when their audio or video has been digitally created or altered, and will penalize those who don’t.
“What we’re setting out today are the steps we think are appropriate for content shared on our platforms right now,” Clegg said in the statement. “But we’ll continue to watch and learn, and we’ll keep our approach under review as we do.”
On Monday, Meta’s Oversight Board, which operates independently of the company, ruled that Meta’s decision to leave up an edited image of President Joe Biden “inappropriately touching his adult granddaughter’s chest” did not violate its Manipulated Media policy.
But the board criticized the policy, which only applies to AI-generated videos and “content showing people saying things they did not say,” calling it “incoherent” and “inappropriately focused on how content has been created, rather than on which specific harms it aims to prevent,” such as harm to elections.