
Meta will apply prominent “high-risk” labels to digitally altered content that could materially deceive the public.

Meta, the parent company of Facebook and Instagram, has announced significant policy changes regarding digitally created and altered media. The changes come ahead of elections that will test Meta’s ability to detect deceptive content generated by AI technologies.

The company will begin labeling AI-generated videos, images, and audio as “Made with AI” starting in May on both Facebook and Instagram. This expands on a policy that previously focused only on a limited range of manipulated videos, as stated by Monika Bickert, the vice president of content policy, in a blog post.

Bickert said Meta would also introduce distinct, more visible labels for digitally altered media that presents a “particularly high risk of materially deceiving the public on a matter of importance,” regardless of whether it was created with AI or other methods. Meta will start applying these more prominent “high-risk” labels immediately, according to a spokesperson.

This approach marks a shift in how the company handles manipulated content: rather than simply removing specific posts, Meta will leave the content up while giving viewers information about how it was made.

Meta had previously announced a plan to identify images created using other companies’ generative AI tools by embedding invisible markers in the files, but did not specify a start date at the time.

A spokesperson stated that the labeling strategy would be implemented for content posted on Facebook, Instagram, and Threads. Different rules apply to its other services, such as WhatsApp and Quest virtual-reality headsets.

The changes are being implemented months ahead of the US presidential election in November, a period that technology researchers warn could see significant impacts from generative AI technologies. Political campaigns in countries like Indonesia have already begun using AI tools, pushing the boundaries of guidelines set by providers such as Meta and the leading generative AI firm, OpenAI.

In February, Meta’s oversight board criticized the company’s current rules on manipulated media as “incoherent” after reviewing a video from last year that altered real footage of Joe Biden to wrongly suggest inappropriate behavior by the US president.

The video remained online because Meta’s existing policy on “manipulated media” only prohibits misleadingly altered videos produced by artificial intelligence or those that make people appear to say words they never actually said.

The board suggested that the policy should also cover non-AI content, which can be just as misleading as AI-generated content, as well as audio-only content and videos depicting actions that were not actually performed.