Artificial intelligence can produce all kinds of content: images, text and video. How can we tell whether a creation was made by a human? The question is not so much who produced a video or image, but whether it depicts a real situation or is the output of an algorithm.
It is crucial, for instance, to establish the authenticity of an image that, if faked, could be used as a tool of disinformation. The traceability of an image or video matters for detecting deepfakes (hypertrucages, in French).
A watermark to identify AI-generated creations?
The technology's early flaws are behind us. We can no longer count on telltale signs such as hands with six fingers or digital humans with unconvincing legs.
Today's productions are far more convincing. In October 2024, Google DeepMind teams published an article in the scientific journal Nature proposing a solution called SynthID.
The idea is to embed a mark, invisible to the human eye, in everything produced by an artificial intelligence model: text, animations or images. If questions arise about a piece of content's origin, this digital watermark can be retrieved.
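To make the idea concrete, here is a minimal sketch of the general principle behind statistical text watermarking, assuming a simple "green list" scheme. It is an illustration of the concept only, not DeepMind's actual SynthID algorithm, and the key and vocabulary are placeholders.

```python
import hashlib
import random

# Minimal sketch of a "green list" text watermark (illustrative, not SynthID):
# a secret key and the previous token deterministically mark half of the
# vocabulary as "green", and generation slightly favours green tokens.
# The bias is invisible to a reader but measurable by anyone holding the key.

SECRET_KEY = "demo-key"                    # placeholder watermarking key
VOCAB = [f"tok{i}" for i in range(1000)]   # toy vocabulary

def green_set(prev_token: str, fraction: float = 0.5) -> set:
    """Derive the 'green' tokens from the key and the previous token."""
    seed = int(hashlib.sha256((SECRET_KEY + prev_token).encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * fraction)))

def green_fraction(tokens: list) -> float:
    """Share of tokens falling in their green set: about 0.5 for ordinary text,
    noticeably higher for watermarked text."""
    pairs = list(zip(tokens, tokens[1:]))
    hits = sum(tok in green_set(prev) for prev, tok in pairs)
    return hits / max(len(pairs), 1)
```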
A marker that remains circumventable
Just as there will always be ways to remove or neutralize the anti-theft tags attached to clothing in stores, a watermark can be attacked. In the case of text, it must withstand deliberate alterations such as word substitution, paraphrasing or translation. Tests show that the marker performs better on longer, more creative text, and is less effective when the content is highly factual or very concise.
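The length effect can be made concrete with a back-of-the-envelope statistical test. In a generic green-list detector (again a simplified illustration, not SynthID's actual detector), unwatermarked text lands in the green set about half the time, and the same observed bias becomes far more significant as the number of tokens grows.

```python
from math import sqrt

# Why longer text is easier to verify: if each token of unwatermarked text
# falls in the "green" set with probability 0.5, a detector can use a simple
# z-test. The same 60% bias is weak evidence in a short snippet but
# overwhelming evidence in a long article.

def z_score(green_hits: int, total_tokens: int, p: float = 0.5) -> float:
    """z-score of observing `green_hits` green tokens out of `total_tokens`."""
    expected = p * total_tokens
    std = sqrt(total_tokens * p * (1 - p))
    return (green_hits - expected) / std

print(z_score(30, 50))     # ~1.4: weak evidence in a 50-token snippet
print(z_score(600, 1000))  # ~6.3: strong evidence in a 1000-token article
```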
Clearly, we should not rely on technology alone to identify problematic content. It is our responsibility to look critically at the videos, images and texts we encounter, particularly when they carry an emotional or political charge. To judge whether these productions are real and relevant, we must not abandon our own human intelligence.
Protective measures to be incorporated right from the design stage of artificial intelligence models
The first security concern relates to attacks targeting large language models (LLMs), the machine-learning systems that interpret queries and generate content. In October 2024, the Laboratoire d'Innovation Numérique de la CNIL (LINC), the innovation laboratory of the Commission Nationale de l'Informatique et des Libertés, published two reports outlining the main risks in this area.
One risk is regurgitation: when queried, an AI may reveal personal information picked up during training. An AI could also be manipulated into producing a message that invites users to share confidential information. This is why datasets should be anonymized before they are used to train AI models.
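As a minimal illustration of what pre-training anonymization can look like (the patterns below are deliberately naive examples, not the tooling recommended in the CNIL reports):

```python
import re

# Simplified sketch of pre-training anonymization: scrub obvious identifiers
# (emails, phone numbers) from raw records before they enter a training corpus.
# Real pipelines use dedicated PII-detection tools and human review.

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def anonymize(record: str) -> str:
    record = EMAIL_RE.sub("[EMAIL]", record)
    record = PHONE_RE.sub("[PHONE]", record)
    return record

corpus = ["Contact Jane at jane.doe@example.com or +33 6 12 34 56 78."]
print([anonymize(r) for r in corpus])
# ['Contact Jane at [EMAIL] or [PHONE].']
```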
It is also important to anticipate toxic outputs: an AI can be pushed to produce violent or hateful content, or to help develop malicious software.
Instilling a culture of security
The main security recommendations are laid out in a document published by the Agence nationale de la sécurité des systèmes d'information (ANSSI) just before summer 2024.
It is a reminder that every step in developing an artificial intelligence model should be documented: the upstream quality of the data, the people authorized to modify data libraries and algorithmic rules, and where the machines and data are hosted, particularly in the cloud.
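One simple way to keep that kind of documentation is a structured record stored alongside each model. The field names below are illustrative assumptions, not a schema prescribed by the ANSSI document.

```python
from dataclasses import dataclass

# Illustrative structure for documenting a model's lineage: data provenance,
# quality checks, authorized maintainers and hosting. Field names are
# examples only, not a required format.

@dataclass
class ModelRecord:
    model_name: str
    dataset_sources: list        # where the training data came from
    data_quality_checks: list    # upstream validation performed
    authorized_editors: list     # who may change data or rules
    hosting: str                 # e.g. on-premises or a named cloud region

record = ModelRecord(
    model_name="support-assistant-v1",
    dataset_sources=["internal tickets 2022-2024 (anonymized)"],
    data_quality_checks=["deduplication", "PII scrub", "language filter"],
    authorized_editors=["ml-platform-team"],
    hosting="eu-cloud-region-1",
)
```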
These principles need to be understood not only by technicians but also by the wider public, who will increasingly be handling these AI tools as AI takes a growing place in both our professional and personal lives.