The PRACTICIES project’s first toolbox webinar presented the “extremist content analysis tool” developed for the project by Gradiant, a Galicia (Spain)-based company specialising in communication technology. The presentation was given by Adrián Abalde and Joaquín Lago from Gradiant.
> Cyberspace: an essential channel for the spread of extremist content
Cyberspace has become an essential communication channel for the spread of extremist content. The application developed by Gradiant for the PRACTICIES project enables law enforcement agencies (LEAs) and other security and intelligence agencies to crawl through vast amounts of online data in order to detect suspicious or violent content. Note that, for security and ethical reasons, this software is strictly reserved for these agencies and is not available to the general public.
> Technologies to identify signs of radicalisation in social media
In order to identify signs of radicalisation in social media, Gradiant provides:
- text mining and natural language processing (NLP) technologies that structure human discourse so that a computer can process it; these technologies provide a series of radicalisation indicators;
- computer vision technologies that automatically analyse images and detect signs and behaviours that signal possible radical situations.
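To make the text-mining side concrete, here is a minimal illustrative sketch of a keyword-based radicalisation indicator. It is not Gradiant's actual NLP pipeline: the lexicon, the indicator names, and the matching strategy are all invented for the example.

```python
# Toy keyword-based indicator sketch. The lexicon and indicator names
# below are invented assumptions, not the project's real resources.
import re

LEXICON = {
    "aggressive_speech": {"kill", "destroy", "attack"},
    "suspicious_language": {"martyr", "crusade", "infidel"},
}

def radicalisation_indicators(text: str) -> dict:
    """Return, per indicator, the lexicon terms found in the text."""
    tokens = set(re.findall(r"[a-z']+", text.lower()))
    hits = {}
    for indicator, terms in LEXICON.items():
        found = sorted(terms & tokens)
        if found:
            hits[indicator] = found
    return hits
```

A real system would go well beyond exact keyword matching (lemmatisation, context, multilingual models), but the shape of the output, a set of named indicators with supporting evidence, matches the description above.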
> How does it work?
Let’s imagine a police officer in charge of monitoring the web for suspicious content. She finds a suspicious website or forum. All she has to do is copy the domain name and submit it to the application, which analyses three different aspects: texts, images, and user behaviour.
The tool consists of two modules: the capture module, developed by the University of Piraeus Research Centre (GR), and the Radical Content Analysis Module, developed by Gradiant, which analyses the information the capture module obtains.
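The two-module flow can be sketched as follows. All interfaces, names, and the data model here are assumptions made for illustration; the capture stand-in does not actually crawl anything.

```python
# Hypothetical sketch of the capture-then-analyse flow described above.
# Function names, the Page model, and the watchlist are invented.
from dataclasses import dataclass

@dataclass
class Page:
    url: str
    text: str

def capture_module(domain: str) -> list:
    """Stand-in for the crawler: a real one would fetch pages from the domain."""
    return [Page(url=f"https://{domain}/post/1", text="sample forum post")]

def radical_content_analysis(pages: list) -> list:
    """Stand-in analysis: flags pages whose text contains a watch-listed term."""
    WATCHLIST = {"attack", "kill"}
    results = []
    for page in pages:
        suspicious = any(term in page.text.lower() for term in WATCHLIST)
        results.append(
            {"url": page.url, "status": "suspicious" if suspicious else "harmless"}
        )
    return results

def analyse_domain(domain: str) -> list:
    """End-to-end: capture the domain's pages, then analyse them."""
    return radical_content_analysis(capture_module(domain))
```

The point is the separation of concerns: capture gathers raw pages, analysis turns them into per-item verdicts, mirroring the two modules named above.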
> Suspicious text and image contents are flagged
The police officer has copied a domain name into the application. Within a few seconds, the app returns a list of contents that are either flagged as suspicious (an amber warning sign) or given the green light (a green, ticked box) as harmless. She can also filter the findings by three different criteria: task, domain, or status. For each item found, there is an image analysis and/or a text analysis.
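A toy model of that result list and its three filters might look like this; the field names and the `Finding` record are illustrative assumptions, not the tool's real schema.

```python
# Illustrative model of flagged findings and the task/domain/status filters.
from dataclasses import dataclass

@dataclass
class Finding:
    task: str
    domain: str
    status: str  # "suspicious" (amber warning) or "harmless" (green tick)

def filter_findings(findings, task=None, domain=None, status=None):
    """Keep only the findings matching every criterion that was given."""
    return [
        f for f in findings
        if (task is None or f.task == task)
        and (domain is None or f.domain == domain)
        and (status is None or f.status == status)
    ]
```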
> Identifying “relevant terms” or images and suspicious behaviours
In the text analysis category, the app flags, for example, “aggressive speech” or “suspicious language” and lists all the “relevant terms” identified (e.g. “Muslims”, “terrorism”, “hate”, “kill”…). The app also identifies the authors of the aggressive or suspicious posts, making it possible to go through all their previous posts.
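The combination of term flagging and author tracking described above can be sketched as a grouping step: flag each post, then index the flagged posts by author so an analyst can review that author's history. The post structure and term list are assumptions for the example.

```python
# Sketch: flag posts containing "relevant terms" and group them by author.
# RELEVANT_TERMS and the post dict shape are invented for illustration.
RELEVANT_TERMS = {"terrorism", "hate", "kill"}

def flag_posts(posts: list) -> dict:
    """Each post is {'author': ..., 'text': ...}; returns flagged posts by author."""
    by_author = {}
    for post in posts:
        terms = sorted(t for t in RELEVANT_TERMS if t in post["text"].lower())
        if terms:
            by_author.setdefault(post["author"], []).append(
                {"text": post["text"], "relevant_terms": terms}
            )
    return by_author
```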
This analysis is based on three databases developed in collaboration with the University of Toulouse-Jean Jaurès, coordinator of the PRACTICIES project, and the consulting firm Bouzar Expertise, which is specialised in Islam and radicalisation. These databases concern respectively suspicious language, aggressive speech, and radical discourse.
Beyond identifying suspicious words, the app also analyses images, looking for violent content (e.g. street clashes or a burning car).
Both the text and image analysis can detect not only religious radical content but also any kind of violent or extremist content, for example from football hooligans or right-wing extremists.
Another aspect of the tool is that it detects suspicious behaviours among users, such as increased activity in a specific period, or a great number of posts on one topic in particular. The tool can analyse past behaviours going back several years.
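One simple way to operationalise "increased activity in a specific period" is to compare each period's post count against the user's own average. This heuristic, its monthly granularity, and the spike threshold are invented for illustration and are not the tool's actual method.

```python
# Illustrative behavioural heuristic: flag months whose post count exceeds
# `factor` times the user's per-month average. The threshold is an assumption.
from collections import Counter

def activity_spikes(post_dates: list, factor: float = 2.0) -> list:
    """post_dates: 'YYYY-MM' strings, one entry per post."""
    counts = Counter(post_dates)
    avg = sum(counts.values()) / len(counts)
    return sorted(month for month, n in counts.items() if n > factor * avg)
```

Because the input is just a list of timestamps, the same check works over archives spanning several years, matching the tool's ability to analyse past behaviour.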
The tool developed by Gradiant was tested through the PRACTICIES project with law enforcement agencies, and the feedback was very positive: they found it useful for their daily activities.
>>> More information on the PRACTICIES project
>>> More information about the PRACTICIES webinars