Meta Announces Plans to Label AI-Generated Content Starting in May

Meta’s new "Made with AI" labels will identify content created or altered with AI, including video, audio, and images © SEBASTIEN BOZON / AFP
Meta, the parent company of Facebook and Instagram, announced on Friday its plan to implement labelling for AI-generated media starting in May.

This move aims to address concerns surrounding deepfakes and provide transparency to users and governments.

As part of this strategy, Meta will no longer remove manipulated images and audio that do not violate its rules. Instead, it will focus on labelling and providing context to such content while safeguarding freedom of speech.

The decision follows criticism from Meta’s oversight board, which urged the company to revamp its approach to manipulated media due to advancements in AI technology, leading to highly convincing deepfakes.

This change comes amid worries about the misuse of AI-powered tools for spreading disinformation, especially during crucial election periods globally, including in the United States.

Meta’s upcoming “Made with AI” labels will identify content, including videos, audio clips, and images, that has been created or altered using AI technology. Content deemed highly misleading will receive more prominent labels to alert the public.

Monika Bickert, Meta’s Vice President of Content Policy, emphasised the importance of transparency and context in addressing such content, aligning with the oversight board’s recommendations.

These labelling efforts align with an industry-wide agreement reached in February among major tech companies and AI developers to combat manipulated content designed to deceive voters.

Meta plans to roll out the AI-generated content labelling in two phases, starting in May 2024. Meanwhile, the removal of manipulated media under the old policy will stop in July unless it violates other Community Standards like hate speech or voter interference.

The prevalence of convincing AI deepfakes has heightened concerns, prompting action to mitigate their potential impact on misinformation and public perception.

The oversight board’s recommendations stemmed from Meta’s handling of a manipulated video of US President Joe Biden, alongside incidents such as robocall impersonations, underlining the urgent need for improved content moderation strategies.

In light of these developments, Meta’s proactive steps towards transparency and accountability signal a broader industry shift towards responsible AI use and content management.

Sola Adeniji
News Reporter, Freelancer, and content creator
