Meta, the company behind Facebook, Instagram, and Threads, is collaborating with industry partners to establish technical standards for identifying AI-generated content, including images, videos, and audio.
In the coming months, images posted on these platforms will be labelled if they carry industry-standard indicators of being AI-generated.
Meta AI, the company's generative tool, lets users create images from simple text prompts.
Photorealistic images from Meta AI are labelled as “Imagined with AI” to inform users about their origin.
Transparency is crucial as people encounter AI-generated content for the first time.
Meta employs visible markers, invisible watermarks, and embedded metadata in Meta AI images to signify AI involvement.
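To make the metadata approach concrete, here is a minimal, illustrative sketch of embedding a provenance label in a PNG file's `tEXt` chunk and reading it back, using only the Python standard library. The `ai_generated` key is hypothetical; real provenance systems use standardized fields such as IPTC metadata or C2PA manifests, not this toy scheme.

```python
import struct
import zlib

def chunk(ctype: bytes, data: bytes) -> bytes:
    # PNG chunk layout: 4-byte length, 4-byte type, data, CRC over type+data.
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

# Build a minimal 1x1 grayscale PNG entirely in memory.
ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0)  # 1x1, 8-bit, grayscale
idat = zlib.compress(b"\x00\xff")                    # filter byte + one pixel
# Hypothetical provenance label stored as a tEXt chunk: keyword NUL value.
text = b"ai_generated\x00true"
png = (b"\x89PNG\r\n\x1a\n"
       + chunk(b"IHDR", ihdr)
       + chunk(b"tEXt", text)
       + chunk(b"IDAT", idat)
       + chunk(b"IEND", b""))

def read_text_chunks(data: bytes) -> dict:
    """Walk the chunk list and collect all tEXt key/value pairs."""
    pos, out = 8, {}  # skip the 8-byte PNG signature
    while pos < len(data):
        (length,) = struct.unpack(">I", data[pos:pos + 4])
        ctype = data[pos + 4:pos + 8]
        body = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            key, _, val = body.partition(b"\x00")
            out[key.decode()] = val.decode()
        pos += 12 + length  # length + type + data + CRC
    return out

print(read_text_chunks(png))  # {'ai_generated': 'true'}
```

A limitation the article's approach must contend with: metadata like this is easy to strip, which is why Meta pairs it with visible markers and invisible watermarks.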
Meta is working with industry partners such as Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock to develop common standards.
While strides have been made in identifying AI-generated content, reliably detecting every instance remains a challenge.
Meta has also introduced Stable Signature, an invisible watermarking technology integrated directly into the image generation process.
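The general idea of an invisible watermark can be sketched with a toy least-significant-bit scheme: a bit string is hidden in pixel values with changes too small to see. To be clear, this is only an illustration of the concept; the actual Stable Signature method works differently, fine-tuning the image generator's decoder so the mark is woven into generation itself rather than stamped on afterwards.

```python
def embed(pixels: list[int], bits: str) -> list[int]:
    """Write each watermark bit into the least significant bit of one pixel."""
    out = pixels[:]
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | int(b)  # changes the value by at most 1
    return out

def extract(pixels: list[int], n: int) -> str:
    """Read n watermark bits back out of the pixel LSBs."""
    return "".join(str(p & 1) for p in pixels[:n])

image = [200, 13, 77, 145, 90, 34, 250, 8]  # stand-in for real pixel data
mark = "1011"                               # hypothetical watermark payload
stamped = embed(image, mark)
print(extract(stamped, len(mark)))  # "1011"
```

A simple LSB mark like this is destroyed by cropping or recompression; watermarks built into the generation process, as with Stable Signature, are designed to survive such edits.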
Meta’s Community Standards apply to all content, regardless of its AI origin.
AI-generated content is subject to fact-checking by independent partners, and debunked content is labelled for user awareness.
Meta acknowledges the responsibility that comes with AI development and aims for transparency and accountability.
The company emphasizes the continuous learning process and collaboration with industry forums like Partnership on AI.