
Google says Gemini can now identify AI images, but there’s a big problem

Google’s invisible AI watermarking is only as useful as the tools that can read it. The company is continuing its week of Gemini 3 news with the announcement that its AI content detector, SynthID Detector, is graduating from private beta so that everyone can use it.

The news coincides with the release of Nano Banana Pro, the latest version of Google’s hugely popular AI image editor. The new Pro model offers many improvements, including the ability to render legible text and generate images at up to 4K resolution. This is great for creators who use AI, but it also means AI-generated content will be harder than ever to identify.

Deepfakes existed long before generative artificial intelligence. But with AI tools like those developed by Google and OpenAI, anyone can create convincing fake content faster and more cheaply than ever. This has led to a huge influx of AI content online, ranging from low-effort AI slop to realistic deepfakes. OpenAI’s viral AI video app, Sora, was another prominent tool that showed how easily these AI tools can be abused. This is not a new problem, but generative AI has dramatically escalated the deepfake crisis.

This is why SynthID was created. Google introduced SynthID in 2023, and all of its AI models released since then have added these invisible watermarks to AI content. Google also adds a small visible watermark, but that doesn’t help much if you’re quickly scrolling through social media rather than carefully scanning each post. To keep the deepfake crisis (which the company helped cause) from getting worse, Google is launching a new AI content identification tool.

SynthID Detector does exactly what the name suggests: it analyzes images and detects invisible SynthID watermarks. So, in theory, you could upload an image to Gemini and ask the chatbot whether it was created with AI. But there’s one big problem: Gemini can only confirm whether an image was created with Google’s AI, not another company’s. Since there are many AI image and video models available, Gemini likely can’t tell you whether an image was generated by a tool that isn’t Google’s.

Currently, you can only check images, but Google said in a blog post that it plans to expand the feature to video and audio. Although limited, tools like this are still a step in the right direction. There are countless AI detection tools, but none are perfect: generative media models improve rapidly, sometimes too rapidly for detection tools to keep up. That’s why it’s so important to label any AI content you share online and to remain skeptical of suspicious images or videos in your feeds.

For more, check out everything announced with Gemini 3 and what’s new in Nano Banana Pro.