Web conference: How to ensure a fair and transparent use of AI technologies?

  • 5 May, 14:00 – 15:00 CET

In previous sessions of our web conference series, we discussed predictive policing and facial recognition. On the one hand, these artificial intelligence-based technologies offer an array of opportunities in the domain of urban security. Facial recognition software can support the search for missing people and the identification and tracking of criminals. Crime prediction software accelerates the processing and analysis of large amounts of data and can help guide security authorities in their daily operations.

On the other hand, we have to weigh the ethical, legal and social implications of the use of these surveillance technologies. Research conducted in the Cutting Crime Impact (CCI) project found that, in terms of prevention, the results of predictive policing software are a matter of debate. Some argue that the approach could lead to decision-making processes that are free from human bias, as long as data selection and quality are rigorous; otherwise, existing biases can be reinforced¹. The question of representative databases is also key for facial recognition: studies have found that error rates vary depending on gender and skin colour². Other concerns relate to data protection, the right to privacy, and the rights to free movement and association. In addition, cities have to think in terms of costs and benefits and evaluate whether such tools have a real impact on the prevention of crime.

While a number of European cities are experimenting with predictive policing and facial recognition, other cities and regions use different applications of AI-based technologies. The city of Amsterdam uses an algorithm to recognize keywords in the complaints that residents submit to the municipality. In addition, the city is testing the use of cameras to monitor compliance with physical distancing rules in public spaces.

Whether for surveillance or for other safety and prevention purposes, how can cities ensure a fair and transparent use of artificial intelligence-based technologies? In this session we explore this question by reviewing the risks associated with these technologies and presenting existing safeguards and resources. We will discuss, among others, the following questions:

  • What are the opportunities and risks of using AI-based technologies in smaller cities?
  • Could such technologies support decision-making by providing better insight into existing problems?
  • Are these technologies real tools for prevention at the local level?
  • Beyond predictive policing and facial recognition, what other use cases of AI-based technologies exist?
  • How can we put safeguards in place to ensure a fair and transparent use of these technologies?

Speakers:

  • Jana Degrott is a councillor at the City of Steinsel (LU), co-founder of “We Belong” – a podcast and platform for people of color to share personal experiences – and a youth delegate to the Council of Europe.
  • Linda Van de Fliert works at the Chief Technology Office (CTO) of the City of Amsterdam (NL). The CTO collaborates with other departments to make innovation happen in the city, working on themes such as e-health, the circular economy, and smart mobility.

>>> Register <<<


>>> More information on the web conference series

Contact:
>> Pauline Lesch, programme manager, lesch@efus.eu
>> Pilar de la Torre, programme manager, delatorre@efus.eu

—————————————————————————————————–

1. Cutting Crime Impact factsheets on the state of the art of predictive policing and its ethical, legal and social implications.

2. Tom Simonite, "The Best Algorithms Struggle to Recognize Black Faces Equally", Wired, 2019.

