Artificial Intelligence (AI) and the AI Act

Artificial intelligence (AI), data and criminal justice

Law enforcement and criminal justice authorities are increasingly using artificial intelligence (AI) and automated decision-making (ADM) systems. These systems are often used to profile people, ‘predict’ their actions, and assess the risk that they will engage in certain behaviour in the future, such as committing a crime. This can have devastating consequences for the people involved, who are profiled as criminals or treated as a risk even though they haven’t actually committed a crime.

Predictive policing is no longer confined to the realms of science fiction. These systems are being used by law enforcement around the world, and the predictions, profiles and risk assessments they produce from data analysis, algorithms and AI often lead to real criminal justice outcomes. These can include constant surveillance, repeated stops and searches, questioning, fines and arrests. These systems can also heavily influence sentencing, prosecution and probation decisions.

Predictive Policing

Law enforcement agencies are increasingly using artificial intelligence, algorithms and big data to profile people and ‘predict’ whether they are likely to commit a crime. Predictive policing has been proven time and time again to reinforce discrimination and undermine fundamental rights, including the right to a fair trial and the presumption of innocence. This results in Black people, Roma, and other minoritised ethnic people being overpoliced and disproportionately detained and imprisoned across Europe.

For example, Delia, a predictive system in Italy, uses ethnicity data to profile and ‘predict’ future criminality. In the Netherlands, the Top 600 list attempts to ‘predict’ which young people will commit high-impact crime. One in three of the ‘Top 600’ – many of whom have reported being followed and harassed by police – are of Moroccan descent.

Only an outright ban can stop this injustice. We have been campaigning for a prohibition in the EU’s Artificial Intelligence Act (AI Act). This ban must cover predictive policing of both individuals and places, as both methods are equally harmful.

Our AI campaign in Europe

The EU is in the process of regulating the use of AI through the AI Act, but the proposed legislation does not go far enough. As more and more countries turn to AI and ADM in criminal justice, it is crucial that the EU becomes a leading standard-setter.

The EU must ban the use of predictive, profiling and risk assessment AI and ADM systems in law enforcement and criminal justice. Strict safeguards must be introduced for all other uses.

Thanks to our campaign, EU political leaders are starting to recognise the harms caused by these systems and support our call for them to be banned. The two committees in charge of the AI Act on behalf of the European Parliament have come out in favour of a prohibition of predictive policing against individuals – but not areas and locations. Read our response to their report.

In our report, Automating Injustice, we document and analyse how this technology is already being used across Europe and expose its harmful consequences.

Council of Europe

The Council of Europe is working on a new legal framework for the use of AI, with a view to producing a legally binding convention in future.

The committee working on the framework has our Automating Injustice report. We will continue to follow the development of the framework and advocate to ensure that it protects people and their fundamental rights.

Will predictive systems profile you as a criminal?

Police forces and criminal justice authorities across Europe are using data, algorithms and artificial intelligence (AI) to ‘predict’ if certain people are at ‘risk’ of committing crime in the future. We have created a tool to show how unjust and discriminatory these outcomes are. Find out if you could be profiled as being at risk of committing a crime. Take the quiz now.

Learn more

Why we need to ban the use of AI to profile people

Read more about our work on AI and criminal justice in Reuters, New Scientist, EU Observer, Computer Weekly, Live Mint and TechCrunch.

What are the problems with AI?

There are fundamental flaws in how AI and automated systems are being implemented in criminal justice:

Discrimination and bias: AI and automated systems in criminal justice are designed, created and operated in a way that makes them predisposed to produce biased outcomes. This can stem from their purpose, such as targeting a certain type of crime or a specific area. It can also be a result of their use of biased data, which reflects structural inequalities in society and institutional biases in criminal justice and policing. As a result, these systems can reproduce and exacerbate discrimination based on race, ethnicity, nationality, socio-economic status and other grounds; a minimal sketch after this list illustrates how this feedback loop sustains itself. These systemic, institutional and societal biases are so ingrained that it is questionable whether any AI or ADM system would produce unbiased outcomes.

Infringement of the presumption of innocence: Profiling people and taking action before a crime has been committed undermines the right to be presumed innocent until proven guilty in criminal proceedings. Often, these profiles and decisions are based not just on an individual’s behaviour but on factors far beyond their control. This may include the actions of people they are in contact with or even demographic information, such as data about the neighbourhood they live in.

Lack of transparency and routes for redress: Any system that influences criminal justice decisions should be open to public scrutiny. However, technological barriers and deliberate, commercially motivated efforts to conceal how the systems work make it difficult to understand how such decisions are made. People are often unaware that they have been subject to an automated decision. Clear routes for challenging decisions – or the systems themselves – and for obtaining redress are also severely lacking.

These issues can seriously impact people’s lives, threaten equality and infringe fundamental rights, including the right to a fair trial.
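
To make the data-bias mechanism described above concrete, here is a minimal, hypothetical simulation in Python. All figures are invented for illustration and do not describe any real police force or system: two districts have an identical underlying offence rate, but one starts out over-policed, and a naive ‘predictive’ allocation trained on recorded crime keeps sending patrols back to it.

```python
import random

random.seed(0)

# Hypothetical toy model: two districts with an IDENTICAL underlying
# offence rate. All numbers are invented for illustration.
TRUE_OFFENCE_RATE = 0.05   # 5% of observed residents offend per period
POPULATION = 10_000        # residents per district
TOTAL_PATROLS = 1_000      # fixed patrol budget shared between districts

# Historical bias: district A starts out over-policed.
patrol_share = {"A": 0.70, "B": 0.30}

for period in range(5):
    recorded = {}
    for district, share in patrol_share.items():
        patrols = int(TOTAL_PATROLS * share)
        # Recorded crime depends on how hard police look, not only on
        # offending: each patrol can observe a fixed slice of residents.
        observed = min(POPULATION, patrols * 10)
        recorded[district] = sum(
            random.random() < TRUE_OFFENCE_RATE for _ in range(observed)
        )

    # A naive 'predictive' system allocates next period's patrols in
    # proportion to recorded crime, feeding the skewed data back in.
    total = sum(recorded.values())
    patrol_share = {d: n / total for d, n in recorded.items()}
    shares = {d: round(s, 2) for d, s in patrol_share.items()}
    print(f"period {period}: recorded={recorded}, next patrol share={shares}")
```

Even though the true offence rates are identical, the over-policed district records roughly twice as many offences every period, so the data perpetually ‘confirms’ it as a hotspot: the records measure police attention as much as offending.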

What should states do?

States should implement regulation that ensures AI and ADM systems in criminal justice do not cause fundamental harms. Here is what states can do:

Profiling: Prohibit the use of predictive, profiling and risk assessment AI and ADM systems in law enforcement and criminal justice. Only an outright ban can protect people from the fundamental harms these systems cause. For all other uses of AI and ADM in criminal justice, states should implement a set of strict legal safeguards:

Bias testing: Independent testing for bias must be mandatory at all stages, including the design and deployment phases. To make such testing possible, criminal justice data collection must be improved, including data disaggregated by race, ethnicity and nationality. A minimal sketch of what such a test could look like follows this list.

Transparency: It must be made clear how a system works, how it is operated, and how it arrived at a decision. Everyone affected by these systems and their outputs, such as suspects and defendants, as well as the general public, must be able to understand how they work.

Evidence of decisions: Human decision-makers in criminal justice must provide reasons for their decisions and evidence of how decisions were influenced by AI and ADM systems.

Accountability: A person must be told whenever an AI or ADM system has or may have impacted a criminal justice decision related to them. There must also be clear procedures for people to challenge AI and ADM decisions, or the systems themselves, and routes for redress.
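
As one illustration of what the bias testing safeguard could look like in practice, here is a minimal, hypothetical sketch. The group labels, the records and the 0.2 disparity threshold are all invented for illustration; it computes, from disaggregated audit data, the rate at which each group is wrongly flagged as ‘high risk’.

```python
from collections import defaultdict

# Hypothetical audit records, one per person assessed by a risk tool.
# `group` is the disaggregated category the safeguard calls for,
# `flagged` is the tool's 'high risk' output, `offended` is the outcome
# later observed. All data below is invented.
records = [
    {"group": "group_a", "flagged": True,  "offended": False},
    {"group": "group_a", "flagged": True,  "offended": True},
    {"group": "group_a", "flagged": False, "offended": False},
    {"group": "group_b", "flagged": False, "offended": False},
    {"group": "group_b", "flagged": True,  "offended": True},
    {"group": "group_b", "flagged": False, "offended": True},
]

def false_positive_rates(records):
    """Per group: share of people who did NOT offend but were flagged."""
    flagged = defaultdict(int)  # non-offenders flagged as high risk
    total = defaultdict(int)    # all non-offenders
    for r in records:
        if not r["offended"]:
            total[r["group"]] += 1
            if r["flagged"]:
                flagged[r["group"]] += 1
    return {g: flagged[g] / total[g] for g in total}

rates = false_positive_rates(records)
print(rates)  # with the toy data above: {'group_a': 0.5, 'group_b': 0.0}

# Simple disparity check; the threshold is a policy choice, not a given.
if max(rates.values()) - min(rates.values()) > 0.2:
    print("Disparity detected: the tool fails this bias test.")
```

An independent auditor would run checks like this on real, disaggregated data at the design stage and then continuously after deployment, which is why the improved data collection described above is a precondition for meaningful testing.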
