Last updated on April 3rd, 2024 at 11:41 am

Nick Clegg, Meta's President of Global Affairs, says users seek clarity on boundaries amid a rise in AI-generated content

Meta is striving to identify and label AI-generated images on Facebook, Instagram, and Threads as part of its effort to expose “people and organizations that deliberately seek to deceive others.”

While photorealistic images generated using Meta’s AI imaging tool are already flagged as AI, Nick Clegg, the company’s President of Global Affairs, revealed in a blog post on Tuesday that Meta plans to start labeling AI-generated images created on competing platforms.

Meta’s AI-generated images already include metadata and invisible watermarks that can signal to other organizations that the image was created by AI. The company is also developing tools to detect these markers when they are applied by other companies’ AI image generators, including those from Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock, according to Clegg.
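Clegg's post does not specify the marker format, but one widely used signal of this kind is the IPTC DigitalSourceType metadata value `trainedAlgorithmicMedia`, which declares an image AI-generated. As a rough illustration only (real detectors parse the XMP/EXIF metadata properly and use separate watermark decoders), a check for that declared marker might be sketched as:

```python
# Sketch: scan an image file's raw bytes for the IPTC DigitalSourceType
# value that declares media AI-generated. This is a simplification --
# production tools parse the embedded XMP packet rather than scanning
# bytes, and invisible watermarks require dedicated detectors.

AI_MARKER = b"trainedAlgorithmicMedia"  # IPTC digital source type for AI media

def declares_ai_generated(image_bytes: bytes) -> bool:
    """Return True if the embedded metadata declares the image AI-generated."""
    return AI_MARKER in image_bytes

# Illustration with a synthetic metadata fragment (not a real file):
fake_xmp = (b"<DigitalSourceType>"
            b"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
            b"</DigitalSourceType>")
print(declares_ai_generated(fake_xmp))           # True
print(declares_ai_generated(b"ordinary photo"))  # False
```

Note that such declared metadata is easy to strip, which is why the article later mentions Meta's separate work on detecting AI content even when markers are absent or removed.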

“As the line between human and synthetic content becomes less clear, people are seeking clarity about where that boundary lies,” Clegg explained. “Many individuals are encountering AI-generated content for the first time, and our users have expressed a desire for transparency regarding this new technology. Therefore, it’s crucial that we help users identify when photorealistic content they encounter has been produced using AI.”

Clegg mentioned that the capability is currently being developed, and the labels will be implemented in all languages in the coming months.

“We plan to implement this approach over the next year, a period that includes several significant elections worldwide,” Clegg stated.

He clarified that this labeling is currently limited to images, and AI tools that produce audio and video do not currently have these markers. However, the company will allow individuals to disclose and add labels to such content when it is posted online.

Additionally, Clegg stated that the company will introduce a more prominent label for “digitally created or altered” images, video, or audio that pose a significant risk of materially deceiving the public on important matters.

Meta is also exploring the development of technology to automatically detect AI-generated content, even in cases where the content lacks invisible markers or where these markers have been removed.

“This effort is crucial as the landscape is expected to become more adversarial in the years to come,” Clegg emphasized.

“Individuals and groups that seek to deceive others using AI-generated content will attempt to circumvent detection safeguards. Both within our industry and in society at large, we must continue to explore ways to stay ahead of these challenges.”

AI deepfakes have already emerged in the US presidential election cycle. There have been robocalls featuring what is believed to be an AI-generated deepfake of US President Joe Biden’s voice, discouraging voters from participating in the Democratic primary in New Hampshire.

Last week, Nine News in Australia drew criticism for altering an image of Victorian Animal Justice Party MP Georgie Purcell to expose her midriff and change her chest in a graphic broadcast during the evening news. The network attributed the alterations to “automation” in Adobe’s Photoshop software, which includes AI image tools.