Platform Update: May 2024

Social media giants have implemented improved AI disclosure measures to enhance transparency in content shared online.

Meta, LinkedIn, and TikTok are the latest social media platforms to implement enhanced AI disclosure requirements, including in-stream labels for AI-generated content.

These labels alert users that the images they’re viewing are AI-generated rather than real, easing confusion about the authenticity of content that can sometimes be disturbing or unsettling. Particularly concerning are cases where convincing yet misleading AI-generated content could sway public opinion and stoke public fear, a pressing issue as important elections approach worldwide.

The ultimate aim of these disclosure requirements is to give users a clearer understanding of how generative AI is used, and misused, across the web, and in turn to curb misinformation, correct misconceptions, and prevent potential harm.

Automated, immediate detection is equally important, so that these labels can be attached before content gains viral traction.

As such, the introduction of these disclosure requirements marks an important step forward, and it’s reassuring to see these platforms taking measures to improve AI literacy and head off potentially harmful outcomes.

Stay tuned for more updates across the social space in June.