With this tool, users can embed an imperceptible digital watermark into their AI-generated images and identify whether Imagen was used to generate the image, or even part of it.
Being able to identify AI-generated content is critical to promoting trust in information. While not a silver bullet for addressing the problem of misinformation, SynthID is an early and promising technical solution to this pressing AI safety issue.
This technology was developed by Google DeepMind and refined in partnership with Google Research. SynthID could be expanded for use across other AI models and we plan to integrate it into more products in the near future, empowering people and organisations to work responsibly with AI-generated content.
SynthID uses two deep learning models — one for watermarking and another for identifying — which were trained together on a diverse set of images.
The combined model is optimised on a range of objectives, including correctly identifying watermarked content and improving imperceptibility by visually aligning the watermark to the original content.
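SynthID's actual architecture and training objectives have not been published, but the joint optimisation described above can be illustrated with a minimal sketch. The sketch below assumes a small convolutional encoder that adds a subtle pixel-level perturbation and a convolutional detector that classifies images as watermarked or not, trained together on a detection loss plus an imperceptibility penalty; every name here (WatermarkEncoder, WatermarkDetector, lambda_visual) and every architectural choice is an assumption for illustration only.

```python
# Hypothetical sketch of jointly training a watermarking model and a detector.
# This is not SynthID's implementation; architectures, losses, and weights are assumed.
import torch
import torch.nn as nn

class WatermarkEncoder(nn.Module):
    """Predicts a small pixel-level perturbation that carries the watermark."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, image):
        return image + 0.01 * self.net(image)  # keep the change visually negligible

class WatermarkDetector(nn.Module):
    """Scores how likely an image is to carry the watermark."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1),
        )

    def forward(self, image):
        return self.net(image)  # raw logit

encoder, detector = WatermarkEncoder(), WatermarkDetector()
optimiser = torch.optim.Adam(
    list(encoder.parameters()) + list(detector.parameters()), lr=1e-4
)
bce = nn.BCEWithLogitsLoss()
lambda_visual = 10.0  # assumed weight balancing detectability against imperceptibility

for _ in range(100):                      # stand-in for a real data loader
    clean = torch.rand(8, 3, 64, 64)      # batch of (synthetic) training images
    marked = encoder(clean)
    logits = detector(torch.cat([marked, clean]))
    labels = torch.cat([torch.ones(8, 1), torch.zeros(8, 1)])
    detection_loss = bce(logits, labels)          # correctly identify watermarked content
    visual_loss = (marked - clean).abs().mean()   # keep watermarked pixels close to the original
    loss = detection_loss + lambda_visual * visual_loss
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
```

In this toy setup the two objectives pull against each other: the detection loss pushes the encoder to leave a recoverable signal, while the visual term keeps that signal aligned with, and hidden in, the original content.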
SynthID uses an embedded watermarking technology that adds a digital watermark directly into the pixels of AI-generated images, making it imperceptible to the human eye.
We designed SynthID so it doesn’t compromise image quality, and allows the watermark to remain detectable even after modifications like adding filters, changing colours, and saving with lossy compression schemes such as JPEG.
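One way to probe this kind of robustness is to round-trip a watermarked image through a lossy save and check whether the detector's score survives. The snippet below continues the hypothetical sketch above (it reuses the assumed encoder and detector, which are not SynthID's) and simulates a JPEG save with Pillow.

```python
# Illustrative robustness check, not SynthID's API: re-encode a watermarked
# image as JPEG and see what the hypothetical detector reports afterwards.
import io
import torch
from PIL import Image
from torchvision.transforms.functional import to_pil_image, to_tensor

def jpeg_round_trip(image_tensor, quality=75):
    """Simulate a lossy save: tensor -> JPEG bytes -> tensor."""
    buffer = io.BytesIO()
    to_pil_image(image_tensor.clamp(0, 1)).save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    return to_tensor(Image.open(buffer))

marked = encoder(torch.rand(1, 3, 64, 64))            # watermarked image from the sketch above
recompressed = jpeg_round_trip(marked[0]).unsqueeze(0)
score = torch.sigmoid(detector(recompressed))
print(f"Detector score after JPEG save: {score.item():.2f}")
```

In practice, transformations like compression, filtering, and colour changes would also be applied during training so the detector learns to recognise the watermark through them.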
SynthID can scan the image for its digital watermark and help users assess whether the content was generated by Imagen.
The tool provides three confidence levels for interpreting the results of watermark identification. If a digital watermark is detected, part of the image is likely generated by Imagen.
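The exact thresholds behind those three confidence levels are not published; the sketch below simply shows the shape of such a mapping, turning an assumed detection score in [0, 1] into one of three human-readable verdicts. The cut-off values are illustrative, not SynthID's.

```python
# Hypothetical mapping from a detection score to three confidence levels;
# the real thresholds and wording used by the SynthID tool are assumptions here.
def interpret_watermark_score(score: float) -> str:
    """Translate a detection score in [0, 1] into a human-readable verdict."""
    if score >= 0.9:
        return "Digital watermark detected: part of the image is likely generated by Imagen"
    if score >= 0.5:
        return "Digital watermark possibly detected: the image may be generated by Imagen"
    return "Digital watermark not detected: the image is unlikely to be generated by Imagen"

for s in (0.97, 0.60, 0.10):
    print(f"score={s:.2f} -> {interpret_watermark_score(s)}")
```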
Note: The model used to produce synthetic images on this page may be different from the model used on Imagen and Vertex AI.