Google uses an invisible “watermark” to identify content generated by its artificial intelligence tools, making it easier to distinguish texts generated by Gemini, for example, from texts written by humans.
The tool, called SynthID, also identifies AI-generated videos and images, and can help prevent the use of AI content for harmful purposes, such as spreading misinformation.
At the end of October, Google DeepMind – the company’s division responsible for artificial intelligence research – released an open-source version of SynthID so that other generative AI developers can apply watermarks to their own models.
In an article published in Nature on October 23, the company demonstrated how SynthID outperformed other techniques for identifying AI-generated content. The researchers note, however, that watermarking works best on longer, less factual responses, such as composing an email or a piece of text.
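The article does not describe how the text watermark actually works, but the general idea behind statistical text watermarking can be illustrated with a toy sketch: a secret key deterministically biases which token a model picks among near-equally-likely candidates, and a detector later checks whether the text’s keyed scores are improbably high. Everything below – the names `KEY`, `generate`, `detect`, the scoring scheme, and the thresholds – is an illustrative assumption, not Google’s actual SynthID algorithm.

```python
import hashlib
import random

KEY = b"secret-watermark-key"  # hypothetical key shared by generator and detector

def score(key: bytes, prev: str, tok: str) -> float:
    """Keyed pseudo-random score in [0, 1) for a candidate token."""
    h = hashlib.sha256(key + prev.encode() + b"|" + tok.encode()).digest()
    return int.from_bytes(h[:8], "big") / 2**64

def generate(vocab, length, key, seed=0):
    """Toy 'model': among a few random candidate tokens, prefer the one
    with the highest keyed score — that preference is the watermark."""
    rng = random.Random(seed)
    out = ["<s>"]
    for _ in range(length):
        candidates = rng.sample(vocab, 4)
        out.append(max(candidates, key=lambda t: score(key, out[-1], t)))
    return out[1:]

def detect(tokens, key, threshold=0.65):
    """Watermarked text has an unusually high mean keyed score
    (about 0.8 here vs. about 0.5 for unwatermarked text)."""
    scores = [score(key, p, t) for p, t in zip(["<s>"] + tokens, tokens)]
    return sum(scores) / len(scores) > threshold
```

The sketch also shows why, as the researchers note, detection works better on longer text: the mean score of watermarked output concentrates well above that of plain text only once enough tokens have accumulated.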
This content was originally published as “Google provides a ‘watermark’ to identify AI-generated content” on the CNN Brasil website.
Source: CNN Brasil

Charles Grill is a tech-savvy writer with over 3 years of experience in the field. He writes on a variety of technology-related topics and has a strong focus on the latest advancements in the industry. He is connected with several online news websites and is currently contributing to a technology-focused platform.