The use of artificial intelligence by security services, including the police, gendarmerie and customs, is a growing trend. However, the choices these agencies make are often not publicized.
Europol, the European Union Agency for Law Enforcement Cooperation, is a name we should not forget. Its creation was provided for by the Maastricht Treaty of 1992. It is not a European FBI but rather a platform for sharing information between member states and with foreign counterparts, such as those in the USA, Canada, or Australia.
What are the main areas of application for AI in public safety?
There are two main types. The first is the capability to analyse large amounts of data in order to identify relationships between individuals or groups across a variety of documents: text, figures, pictures, and so on.
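As a toy illustration of this kind of link analysis (a minimal sketch with entirely invented names and documents, not a real investigative tool), one could count how often entities are mentioned together across documents:

```python
from itertools import combinations
from collections import Counter

# Invented sample "documents", each reduced to the set of entities it mentions.
documents = [
    {"Alice", "Bob", "Acme Corp"},
    {"Alice", "Bob"},
    {"Bob", "Acme Corp"},
    {"Carol"},
]

# Count how often each pair of entities appears in the same document.
pair_counts = Counter()
for doc in documents:
    for pair in combinations(sorted(doc), 2):
        pair_counts[pair] += 1

# Pairs that co-occur repeatedly suggest a relationship worth a closer look.
strong_links = [pair for pair, n in pair_counts.items() if n >= 2]
print(strong_links)
```

Real systems extract the entities themselves from raw text, images and figures, and use far more sophisticated graph analysis, but the underlying idea of surfacing recurring connections is the same.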
It is possible, for instance, to spot unusual behavior among the hundreds of thousands of transactions recorded in bank documentation. This makes it easier to detect fraud or suspected fraud, to identify fraud patterns, and to reconstruct financial transactions.
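A minimal sketch of this idea, assuming purely invented transaction amounts: flag any amount that deviates strongly from the typical value. Real fraud-detection systems use far richer features and models, but the principle of isolating statistical outliers is similar.

```python
from statistics import mean, stdev

# Invented sample of transaction amounts in euros.
amounts = [120.0, 95.5, 110.0, 102.3, 98.7, 105.0, 9_800.0, 101.2]

mu = mean(amounts)
sigma = stdev(amounts)

# Flag transactions more than 2 standard deviations from the mean.
flagged = [a for a in amounts if abs(a - mu) > 2 * sigma]
print(flagged)
```

On this sample, only the single very large transaction is flagged for review.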
AI can also be used to gather useful data from many different sources, provided they are freely available, such as social networks. This is known as OSINT (Open Source Intelligence) and, more specifically for social networks, SOCMINT (Social Media Intelligence). Information can also be extracted from digital recordings, such as audio or video. Algorithms can be used for transcription and translation, as well as to identify suspicious individuals or behaviors, for example a crowd that suddenly converges or disperses.
AI is also useful for developing scenarios to train personnel.
Some uses of artificial intelligence already considered problematic
As with any artificial intelligence, we need to be aware of the risk of bias. If the data used to train an AI is biased, for example against people because of their gender or age, the results will be distorted. An algorithm may also produce hallucinations, proposing false answers that appear real.
Danger can also come from certain uses of AI, for example automated mass surveillance using biometric video surveillance systems that recognize faces in real time. Algorithms must also remain explainable: transparency is required so that AI decisions can be understood and challenged.
European regulation anticipates the possible excesses of AI
The AI Act is the regulation the European Union created to govern the use of AI throughout Europe, including its use by security forces. The AI Act was published in the Official Journal of the European Union on July 12, 2024 and entered into force on August 1, 2024. Its provisions apply progressively from February 2025 to August 2027.
First, the AI systems that present an “unacceptable risk”
What is the problem? These are practices that would be contrary to fundamental rights and the values of the European Union. They include:
- Social rating, as used in China to score daily actions: you can lose points, for instance, for jaywalking or for making offensive comments.
- Use of real-time remote biometric identification in public spaces by security services.
- Predictive policing measures that target individuals.
- Recognition of emotions in educational and workplace settings.
The Commission nationale de l’informatique et des libertés (CNIL), the French independent administrative authority responsible for the protection of personal data, has published an in-depth analysis of the AI Act from its perspective as a body independent of the government.