New Google tool flags AI-made images, text, and video

The tool, announced at the I/O 2025 developer conference, is built on watermarking technology developed by DeepMind.
Google has introduced a new tool, SynthID Detector, to identify AI-generated content across text, images, audio and video, in a move aimed at improving transparency and trust in artificial intelligence.
Known as SynthID, the system embeds a digital watermark directly into the content itself: invisible to the human eye, but detectable by machines.
For images, SynthID modifies pixels in a way that does not affect their appearance. For text, it adjusts token patterns to create a digital signature.
The watermark remains detectable even after resizing, cropping or paraphrasing, making it more reliable than previous methods.
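Google has not published the details of the image scheme, but the general idea — a keyed, imperceptible perturbation that a detector can later correlate against — can be illustrated with a classic spread-spectrum sketch. Everything below (the function names, the key, the ±1 pattern) is an illustrative assumption, not SynthID's actual algorithm:

```python
import random

def keyed_pattern(key: int, n: int) -> list[int]:
    # Deterministic +/-1 pattern derived from a secret key.
    rng = random.Random(key)
    return [rng.choice((-1, 1)) for _ in range(n)]

def embed(pixels: list[int], key: int, strength: int = 8) -> list[int]:
    # Nudge each pixel by a few intensity levels -- far below what the eye notices.
    pat = keyed_pattern(key, len(pixels))
    return [min(255, max(0, p + strength * s)) for p, s in zip(pixels, pat)]

def detect(pixels: list[int], key: int) -> float:
    # Correlate the image against the keyed pattern after removing the mean.
    # Unmarked images score near 0; marked ones score near `strength`.
    pat = keyed_pattern(key, len(pixels))
    mean = sum(pixels) / len(pixels)
    return sum((p - mean) * s for p, s in zip(pixels, pat)) / len(pixels)
```

Because the signal is spread across every pixel rather than stored in any one place, a correlation-style detector can still find it after moderate edits — the same property the SynthID announcement emphasizes.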
Users access SynthID Detector through a browser-based interface: they upload content and receive a probability score indicating whether the material carries a SynthID watermark and is therefore likely AI-generated.
The tool is currently available to a limited number of partners and researchers; Google plans to expand access later this year to journalists, educators and content moderators.
During the launch, DeepMind engineers demonstrated the tool’s ability to identify AI-generated images even after heavy editing, highlighting its durability compared to traditional detection methods.
In addition, Google has open-sourced the SynthID technology, allowing third-party developers and other AI companies to adopt the same watermarking system. This could lead to the creation of a common industry standard for detecting AI-generated content.
"This isn’t just a Google problem, it’s a global one," said DeepMind CEO Demis Hassabis. "We want to empower the broader ecosystem to build responsibly, and that means giving them tools to mark and trace content at the source."
Despite its progress, Google admits SynthID Detector is not a complete solution. Some AI models may avoid embedding watermarks, and others might try to remove or hide them.
To address these gaps, Google is exploring complementary options, including cryptographic watermarking, blockchain verification and global regulatory cooperation.