Meta says it will begin flagging AI-generated images across several of its platforms ahead of the 2024 presidential election.
Nick Clegg, Meta's president of global affairs, announced Tuesday that images generated by AI tools and published on Facebook, Instagram, and Threads will be labeled as such in all languages the platforms support.
"As the difference between human and synthetic content gets blurred, people want to know where the boundary lies," Clegg wrote in a blog post. "People are often coming across AI-generated content for the first time and our users have told us they appreciate transparency around this new technology.
"So it's important that we help people know when photorealistic content they're seeing has been created using AI. We do that by applying 'Imagined with AI' labels to photorealistic images created using our Meta AI feature, but we want to be able to do this with content created with other companies' tools too."
The company said it is currently developing this capability and will start applying the labels "in the coming months."
"We're taking this approach through the next year, during which a number of important elections are taking place around the world," the blog post continued. "During this time, we expect to learn much more about how people are creating and sharing AI content, what sort of transparency people find most valuable, and how these technologies evolve."
When people create photorealistic images using the company's own AI image generator, Meta AI, Clegg said "we do several things to make sure people know AI is involved," including using visible markers on the image and "invisible watermarks and metadata embedded within image files."
Meta said Tuesday it will expand the labeling of AI-generated images to those created with other companies' tools.
Though Clegg's blog post focused on images, he added the social media giant is "working with industry partners on common technical standards for identifying AI content, including video and audio."
The company's invisible markers for Meta AI images are aligned with the Partnership on AI's best practices, Clegg said.
Because companies have not yet started including such signals in AI tools that generate audio and video, Meta said it cannot yet detect and label that content as AI-generated.
Meta will therefore require users to "disclose when they share AI-generated video or audio," Clegg said.
"We'll require people to use this disclosure and label tool when they post organic content with a photorealistic video or realistic-sounding audio that was digitally created or altered, and we may apply penalties if they fail to do so," he said.
The social media giant is also working to "develop classifiers that can help … automatically detect AI-generated content, even if the content lacks invisible markers" and "looking for ways to make it more difficult to remove or alter invisible watermarks."
Nicole Wells, a Newsmax general assignment reporter, covers news, politics, and culture. She is a National Newspaper Association award-winning journalist.
© 2024 Newsmax. All rights reserved.