Global Village Space

Friday, May 17, 2024

Meta to introduce ‘Made with AI’ labels on its platforms

Meta had previously announced plans to employ invisible markers to detect images created using third-party generative AI tools.

In a bid to confront the rising tide of digitally created and altered media, Meta, the parent company of Facebook, has unveiled significant policy changes. These adjustments come ahead of pivotal US elections, where the company’s capacity to combat misleading content generated by emerging artificial intelligence technologies will be put to the test.

New Labeling Initiative

Meta’s Vice President of Content Policy, Monika Bickert, announced a new initiative to apply “Made with AI” labels to video, image, and audio content posted on its platforms. The initiative expands a prior policy that targeted only a narrow subset of manipulated media.


Targeting Deception

Signaling its commitment to combating deception, Meta will also apply separate, more prominent labels to digitally altered media that poses a “particularly high risk” of misleading the public. Notably, these labels will apply regardless of whether the content was generated with AI or by conventional means.

Shift in Strategy

Meta’s revised approach represents a strategic pivot in its handling of manipulated content. Instead of solely focusing on content removal, the company will now prioritize transparency by retaining such content while providing viewers with critical information regarding its origins.

Technological Solutions

Meta has previously said it will use invisible markers to detect images created with third-party generative AI tools. Although no start date for that effort was disclosed, it reflects the company’s multifaceted approach to combating deceptive media.

Implementation and Scope

The new labeling strategy will encompass content posted across Meta’s Facebook, Instagram, and Threads services. Immediate application of the “high-risk” labels underscores Meta’s urgency in addressing the proliferation of misleading content.

Election Implications

These policy changes arrive against the backdrop of impending US elections, where the deployment of generative AI technologies by political campaigns poses unprecedented challenges. Tech researchers have raised concerns about the potential transformation of the electoral landscape due to these advancements.


Meta’s existing rules on manipulated media had drawn criticism from its own Oversight Board, which deemed them “incoherent.” A case in point was a manipulated video of US President Joe Biden that was allowed to remain on the platform even though it altered real footage to convey a false impression.