Algorithmic racialisation and possible implications for police investigations and prosecutions


"No one is born hating others because of the colour of their skin, their background or religion. People learn to hate and, if they can learn to hate, they can also learn to love".1

Algorithms are discriminating against us; we need to assert our rights so that, when algorithms are used in the justice system, our biases are set aside.

Algorithms are now part of human decision-making processes. They are neither good nor bad; they exist and are applied to many areas of daily life, such as consumption, the contracting of services, insurance, government procedures, labour relations and judicial decisions.

We humans are the ones who create them, use them and feed them information shaped by our opinions and our biases. Once generated, they can continue the learning process based on human behaviour; it is the closest thing to a predictive dictionary, and this process is known as 'machine learning'.
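A minimal sketch of how this inheritance of bias can work, assuming an entirely made-up dataset and scikit-learn as the learning library (the 'group' attribute, the merit score and all the numbers are illustrative assumptions, not data from any real system): if the historical decisions used for training were skewed against one group, the trained model simply reproduces that skew.

    # Illustrative only: a model trained on biased historical decisions
    # reproduces the bias it was shown (all data here is synthetic).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 1000
    group = rng.integers(0, 2, n)          # protected attribute (0 or 1)
    merit = rng.normal(0, 1, n)            # the factor that *should* decide

    # Biased historical labels: group 1 was approved less often at equal merit.
    approved = (merit - 0.8 * group + rng.normal(0, 0.5, n)) > 0

    model = LogisticRegression().fit(np.column_stack([group, merit]), approved)

    # At identical merit, the learned model now predicts a lower probability
    # of approval for group 1 than for group 0.
    print(model.predict_proba([[0, 0.0], [1, 0.0]])[:, 1])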

One thing must be clear: technology must be anthropocentric and based on human rights; it must be at the service of humankind and not the other way around.

As a result of the pandemic, the process of modernising justice has accelerated. This modernisation, which I agree is necessary and unavoidable for the realisation of human rights, has gone hand in hand with the incorporation of technology.

In different parts of the world, citizens involved in judicial proceedings deserve to have those proceedings carried out within a reasonable time and with all the guarantees of fair litigation. In addition, the principles of effective judicial protection must be respected, and society has the right to access the information produced by the courts, which should be public given the obligations of the operators of the judicial system. We believe that technology has undoubtedly contributed to this whole process.

In terms of dispute resolution, the use of AI in the justice system in Argentina is incipient. Building on projects such as PROMETEA (2017), currently in use in the Autonomous City of Buenos Aires, new ways of applying AI to justice are expected to emerge in the coming years, broadening its spectrum.

PROMETEA is an AI system combined with machine learning that resolves simple cases, issuing uncomplicated rulings or resolutions that do not terminate any proceedings, and it is therefore used in the Public Prosecutor's Office (MPF) of that city. For the moment, there is no danger of bias in this case, given the way the tool is used, but what would happen if the use of AI in the justice system were extended in an attempt to emulate the decisions of a judge?

As an example in this regard, we can take the case of Brazil, where a judge in Minas Gerais ruled on 280 cases in just one second; or the Rey project, a method that uses AI to present court cases on the judge's desk and that can also indicate to the judge how to decide, based on his or her previous decisions in similar cases.

On the other hand, we cannot ignore current cases of algorithmic discrimination around the world, for example in decisions on whether to grant car insurance, a job or credit. There are cases of unlawful arrests based on algorithmic errors, and cases where benefits were refused at the sentence-enforcement stage, where differential treatment was given and complained about. In these cases, therefore, we are talking not about a potential risk of violating the constitutional and conventional rights to equal treatment and the prohibition of discrimination, but about a tangible violation.

We must also consider that facial recognition systems used as investigative techniques, in security cameras and drones deployed for police investigation, may be biased at this same level. My aim is to make that analysis, to anticipate a discussion that will inevitably arrive in the not so distant future, and then to begin relating human rights to the use of algorithms.

These rights can be extracted from global case law, taking into account the C.A.D.A. case (2015) of the Commission on Access to Administrative Documents in France, State v. Loomis (2016) before a court in Wisconsin, USA, the SyRI case (2020) in The Hague, and the Deliveroo case (2020) before the Ordinary Court of Bologna. With respect to algorithms, we have the right of access to the algorithm, which implies knowing the source code in plain language and understanding the procedures it uses to make decisions, going back as far as the moment of its development, with the possibility of summoning the programmer to give explanations in the case.
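As a purely illustrative sketch of what 'understanding the procedures it uses to make decisions' can look like in practice (the model, the features and the data below are assumptions, not a description of any system named above): a transparent model such as a small decision tree can have its entire decision procedure printed as plain, human-readable rules.

    # Illustrative only: a transparent model whose decision procedure can be
    # rendered as readable rules (hypothetical features and synthetic data).
    from sklearn.tree import DecisionTreeClassifier, export_text

    X = [[25, 0], [40, 1], [30, 1], [55, 0], [35, 0], [60, 1]]  # [age, prior_record]
    y = [0, 1, 1, 0, 0, 1]                                      # made-up outcomes

    tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

    # The full decision procedure, readable without access to any source code.
    print(export_text(tree, feature_names=["age", "prior_record"]))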

On the other hand, we have the right not to be subjected to automated decisions: it is the human being who must make the decisions, with the material gathered by the algorithm serving only as support in that decision-making process.

Another significant right that emerges from these cases is the right to equality and algorithmic non-discrimination: given that an algorithm can discriminate, defending ourselves against this requires auditing it, so that its decision-making procedures can be known.
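One minimal element of such an audit could look like the sketch below, assuming the auditor can see the system's outcomes broken down by protected group (the sample decisions, the group labels and the 80% threshold are illustrative assumptions; the threshold echoes the 'four-fifths' rule of thumb used in disparate-impact analysis).

    # Hypothetical audit step: compare favourable-outcome rates across groups
    # (disparate impact ratio). All data below is made up for illustration.
    from collections import defaultdict

    decisions = [  # (protected_group, favourable_outcome)
        ("A", True), ("A", True), ("A", False), ("A", True),
        ("B", True), ("B", False), ("B", False), ("B", False),
    ]

    counts = defaultdict(lambda: [0, 0])   # group -> [favourable, total]
    for group, favourable in decisions:
        counts[group][0] += int(favourable)
        counts[group][1] += 1

    rates = {g: fav / total for g, (fav, total) in counts.items()}
    ratio = min(rates.values()) / max(rates.values())

    print(rates)                            # favourable-outcome rate per group
    print(f"disparate impact ratio: {ratio:.2f}")
    if ratio < 0.8:                         # four-fifths rule of thumb
        print("possible disparate impact; warrants closer review")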

We are already on the road to a society built on data-driven business models. Yuval Harari speaks of the 'science of dataism', Martin Hilbert of the 'dictatorship of information'; whichever term is used, it is information that governs, and will govern, our model of society for years to come.

Each person can unknowingly provide a total of approximately 5,000 data points, which means that today's tech giants, and even governments, know more about us than we know about ourselves; protecting us is and will continue to be a challenge.

Mariel Alejandra Suárez/ Lawyer, criminal judge and university lecturer/ Collaborator in the Criminology Area of Sec2Crime.

References:
  1. Nelson Mandela, from his autobiography Long Walk to Freedom ("El largo camino hacia la libertad"), 1994.