Images generated by artificial intelligence tools are becoming harder to distinguish from those created by humans. AI-generated images can spread misinformation at massive scale, enabling irresponsible uses of AI. To address this, Google unveiled SynthID, a new tool that can differentiate AI-generated images from human-created ones.
The tool, created by the DeepMind team, adds an imperceptible digital watermark to AI-generated images — like a signature. The same tool can later detect this watermark to identify which images were created by AI, even after modifications such as adding filters, compression, or color changes.
SynthID combines two deep learning models in one tool: one embeds the watermark into the original content so that it remains invisible to the naked eye, and the other identifies watermarked images.
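SynthID's internals are not public, so the following is only a loose illustration of the embed/detect split described above, using a naive least-significant-bit (LSB) watermark on a toy grayscale image. Unlike SynthID's learned watermark, an LSB mark would not survive filters or compression; the sketch just shows the two-role structure (an embedder and a detector sharing a known signature).

```python
# Toy illustration only — not SynthID's actual method.
# "Image" here is a flat list of 0-255 grayscale pixel values.

WATERMARK_BITS = [1, 0, 1, 1, 0, 1, 0, 0]  # hypothetical signature

def embed(pixels, bits=WATERMARK_BITS):
    """Write the signature into the LSBs of the first len(bits) pixels.

    Each pixel changes by at most 1, so the mark is imperceptible.
    """
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b
    return out

def detect(pixels, bits=WATERMARK_BITS):
    """Return True if the LSB pattern matches the known signature."""
    return [p & 1 for p in pixels[:len(bits)]] == bits

image = [120, 55, 200, 13, 89, 250, 34, 77, 161]
marked = embed(image)

print(detect(marked))  # True: watermark present
print(detect(image))   # False: original LSBs do not match the signature
```

The key design point mirrored here is that embedding and detection are separate operations that share a secret: the detector never needs the original, unwatermarked image.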
Currently, SynthID cannot detect all AI-generated images; it is limited to those created with Google's text-to-image tool, Imagen. But it is a promising sign for responsible AI, especially if other companies adopt SynthID in their generative AI tools.
The tool is gradually rolling out to Vertex AI customers using Imagen and is available only on that platform for now. However, Google DeepMind hopes to bring it to other Google products and to third parties soon.