Google launches a new tool to identify images generated by AI within the framework of the DeepMind project

SynthID: empowering responsible management of AI content in a world of advanced generative technologies

A new tool, developed in collaboration with Google Cloud, is now available to address the growing challenge of identifying synthetic images generated by artificial intelligence systems. As AI-generated images that closely resemble real photographs become increasingly common, SynthID has been released in beta. The technology embeds a digital watermark directly into the pixels of an image, making it imperceptible to the human eye but detectable for identification purposes.

SynthID is initially being rolled out to a select group of Vertex AI customers using Imagen, a cutting-edge text-to-image model that transforms text prompts into highly realistic images.
As generative AI technologies continue to advance, the line between AI-generated and human-created content is blurring. While generative AI has immense creative potential, it also introduces new risks, including the spread of misinformation. Being able to identify AI-generated content lets people distinguish between genuine and synthetic media, helping to curb the proliferation of false information.
Google Cloud is at the forefront of responsible AI development, being the first cloud provider to offer a tool that not only makes it easy to create AI-generated images responsibly, but also enables them to be securely identified. SynthID builds on Google’s commitment to responsible AI and was developed through a collaboration between Google DeepMind and Google Research.
While SynthID may not be foolproof against extreme image manipulations, it represents an important step in enabling individuals and organizations to handle AI-generated content responsibly. Furthermore, this technology has the potential to expand its application to other AI modalities beyond images, such as audio, video and text.
Improved identification and security of AI-generated content
Unlike traditional watermarks, which can be easily removed or defeated, SynthID's approach preserves image quality and remains detectable even after common image modifications such as filtering, color adjustments, and lossy compression. It achieves this by using two deep learning models, one for watermarking and one for identification, trained together on a diverse set of images.
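To make the idea concrete, here is a toy sketch of a pixel-level watermark that survives mild distortion. This is not SynthID's actual algorithm (which is unpublished and uses trained neural networks); the function names, the spread-spectrum technique, and all parameters are illustrative assumptions chosen only to show the general pattern of an embedder and a matching detector.

```python
import numpy as np

# Toy spread-spectrum watermark: a key-derived pseudorandom signal is
# hidden in the pixels, and a detector holding the same key can still
# find it by correlation after mild image modifications.

def embed(image, key, strength=8.0):
    """Add a pseudorandom pattern (derived from `key`) to the pixels."""
    pattern = np.random.default_rng(key).standard_normal(image.shape)
    return image + strength * pattern

def detect(image, key):
    """Correlate the image with the key's pattern; a high score
    suggests the watermark is present."""
    pattern = np.random.default_rng(key).standard_normal(image.shape)
    return float(np.mean(image * pattern))

rng = np.random.default_rng(42)
image = rng.uniform(0, 255, size=(128, 128))            # stand-in for a real image
marked = embed(image, key=7)
noisy = marked + rng.normal(0, 2.0, size=marked.shape)  # mild distortion

score_marked = detect(noisy, key=7)  # large positive score
score_clean = detect(image, key=7)   # score near zero
```

A real system replaces the fixed pseudorandom pattern with learned models so the mark is both imperceptible and robust to stronger edits; the toy above only demonstrates why correlation-based detection can tolerate noise the human eye would also ignore.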
SynthID offers Vertex AI customers a combined watermarking and identification capability. It reports results at three confidence levels, helping users interpret whether an image carries the watermark. Importantly, SynthID complements existing metadata-based image identification approaches, remaining a reliable means of identification even when metadata is missing or corrupted.
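The three-level reporting described above amounts to mapping a raw detector score onto interpretable bands. The thresholds, labels, and function name below are hypothetical, since SynthID's actual cut-offs are not public; the sketch only shows the pattern.

```python
def confidence_band(score: float) -> str:
    """Map a raw watermark-detector score in [0, 1] to one of three
    human-readable confidence levels (thresholds are illustrative)."""
    if score >= 0.9:
        return "watermark detected"
    if score >= 0.5:
        return "watermark possibly present"
    return "watermark not detected"

print(confidence_band(0.95))  # watermark detected
print(confidence_band(0.60))  # watermark possibly present
print(confidence_band(0.10))  # watermark not detected
```

Reporting bands rather than a raw score keeps the tool usable by non-experts while still acknowledging that detection after heavy image edits is probabilistic, not certain.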
In the pursuit of responsible AI-generated content, Google is committed to developing safe, reliable and adaptable solutions at every stage. SynthID is just one part of this commitment and will continue to evolve based on user feedback and emerging requirements. The potential for SynthID to be integrated into more Google products and extended to third parties in the near future is very promising, as it will allow people and organizations to responsibly interact with AI-generated content.