Media contact

Diane Nazaroff
UNSW Media & Content
+61 (2) 9385 2481, +61 (0)424 479 199

In 2013, a man in the US state of Wisconsin was arrested for attempting to flee a police officer while driving a car that had been used in a recent shooting.

While none of his crimes mandated prison time, the judge said the man had a high risk of recidivism and sentenced him to six years in jail.

The judge had considered a report from a controversial computer program called COMPAS, a risk assessment tool developed by a private company.

The case illustrates why lawyers need to collaborate with computer scientists on the use of artificial intelligence (AI) in law, according to Mireille Hildebrandt, a Research Professor at Vrije Universiteit in Brussels.

“We need constructive distrust, rather than naïve trust in ‘legal tech’,” says Professor Hildebrandt, of Vrije's Faculty of Law and Criminology.

“This certainly involves reconsidering the use of potentially skewed discriminatory patterns, as with the COMPAS software that informs courts in the US when taking decisions on parole or sentencing.”

The Research Professor on Interfacing Law and Technology will discuss the impact of AI on law in her inaugural lecture for The Allens Hub for Technology, Law & Innovation on December 13.

COMPAS discrimination

She says researchers have found evidence that the COMPAS program discriminates against black offenders.

“Though researchers agree that, based on the data, black offenders are more likely to commit future crimes than white offenders, this skews the outcome for black offenders who do not recidivise,” Professor Hildebrandt says.

“They are attributed a higher reoffending rate than white offenders who never recidivise.”

“COMPAS has given rise to a new kind of discussion about bias in sentencing, and once lawyers begin to engage in informed discussion about ‘fairness’ in ‘legal tech’ they may actually inspire more precise understandings of how fairness can be improved in the broader context of legal decision-making,” she says.

Professor Hildebrandt’s research interests concern the implications of automated decision-making, machine learning and mindless artificial agency for law and the Rule of Law in constitutional democracies.

Recently nominated as ‘one of 100 Brilliant Women in AI Ethics to follow in 2019 and beyond’ by Lighthouse3, she says AI tools “cannot be ‘made’ ethical or responsible by tweaking their code a bit”.

“Instead, we should focus on training lawyers in understanding the assumptions of ‘AI’, especially its dependence on mathematical mappings of legal decision-making, as this has all kinds of implications that are easily overlooked.”

New hermeneutics

Professor Hildebrandt says lawyers should develop ‘a new hermeneutics’, or a new art of interpretation, that includes a better understanding of what data-driven regulation or predictive technologies can and can’t do.

“This may, for instance, mean that lawyers sit down with data scientists to define ‘fairness’ in computational terms, to avoid discriminatory application of technical decision support,” she says.
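One computational definition of fairness that such a conversation might settle on is comparing false positive rates across groups: the share of people in each group who did not reoffend but were still labelled high risk, which is the kind of disparity reported for COMPAS. The sketch below uses entirely hypothetical data and a hypothetical threshold; it is an illustration of the idea, not the COMPAS method.

```python
# Sketch: fairness as parity of false positive rates across groups.
# All records and the 0.1 threshold below are hypothetical.

def false_positive_rate(records):
    """Share of people who did NOT reoffend but were labelled high risk."""
    non_recidivists = [r for r in records if not r["reoffended"]]
    flagged = [r for r in non_recidivists if r["predicted_high_risk"]]
    return len(flagged) / len(non_recidivists)

# Hypothetical risk-tool outputs for two demographic groups.
group_a = [
    {"predicted_high_risk": True,  "reoffended": False},
    {"predicted_high_risk": True,  "reoffended": True},
    {"predicted_high_risk": True,  "reoffended": False},
    {"predicted_high_risk": False, "reoffended": False},
]
group_b = [
    {"predicted_high_risk": False, "reoffended": False},
    {"predicted_high_risk": True,  "reoffended": True},
    {"predicted_high_risk": False, "reoffended": False},
    {"predicted_high_risk": True,  "reoffended": False},
]

fpr_a = false_positive_rate(group_a)  # 2 of 3 non-recidivists flagged
fpr_b = false_positive_rate(group_b)  # 1 of 3 non-recidivists flagged

# One possible criterion: the gap between group FPRs must stay
# below an agreed threshold (here, an arbitrary 0.1).
gap = abs(fpr_a - fpr_b)
print(f"FPR gap: {gap:.2f}, fair under this criterion: {gap < 0.1}")
```

Even this toy example shows why lawyers and data scientists need to talk: choosing which rates to equalise, and what gap counts as acceptable, is a normative decision, not a purely technical one.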

She proposes lawyers should ask three questions before introducing new technologies that will redefine their profession as well as the legal protection they offer: What problem does this technology solve, what problems are not solved, and what problems does it create?

“This requires research, domain expertise and talking to the people who may be affected: regulators, lawyers, but also and especially the ‘users’ of the legal system: citizens, consumers, suspects and defendants, the industry.”

Professor Hildebrandt also holds the Chair of Smart Environments, Data Protection and the Rule of Law at the Science Faculty, at the Institute for Computing and Information Sciences at Radboud University Nijmegen in the Netherlands.

She teaches law to computer scientists and will soon appoint a team of computer scientists and lawyers on a €2.5 million grant from the European Research Council for research into legal tech.

She says computer-based predictions of legal judgments could help lawyers and those in need of legal advice decide whether to bring a case to court.

Argumentation mining

AI in the form of ‘argumentation mining’ could also help legal clerks quickly ingest relevant case law, statutes and even doctrine with regard to a specific case, while identifying potentially successful lines of argumentation.
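At the core of such tools is a retrieval step: ranking a body of texts by relevance to the case at hand. The toy sketch below does this with bag-of-words cosine similarity; real argumentation-mining systems use far richer language models, and the case summaries here are invented.

```python
# Toy sketch of the retrieval step behind legal-tech tools:
# rank (invented) case summaries by word overlap with a query.
from collections import Counter
import math

def cosine_similarity(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Hypothetical one-line case summaries.
cases = {
    "Case A": "negligence duty of care breach damages",
    "Case B": "contract breach remedy damages specific performance",
    "Case C": "sentencing parole risk assessment recidivism",
}

query = "risk assessment in sentencing and parole decisions"
q = Counter(query.lower().split())

# Rank cases by similarity to the query, most relevant first.
ranked = sorted(
    cases,
    key=lambda name: cosine_similarity(q, Counter(cases[name].lower().split())),
    reverse=True,
)
print(ranked[0])  # Case C matches the query best
```

The lawyerly skill Hildebrandt worries about losing is exactly what this sketch cannot do: judging whether the retrieved case actually supports the argument, rather than merely sharing its vocabulary.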

“A concern could be that we engage ‘distant reading’ (reading texts via software) before being well versed in ‘close reading’, losing important lawyerly skills that define the mind of a good lawyer,” she says.

“Another concern is that legislatures may want to anticipate the algorithmic implementation of their statutes, writing them in a way easily translated into computer code.

“This may render such statutes less flexible, and thereby both over-inclusive and under-inclusive, or simply unfair and unreasonable.”

AI could also improve compliance by pre-empting people’s behaviour and reconfiguring their ‘choice architecture’, nudging or forcing them into compliance.

“Sometimes that may be a good thing, as long as this is a decision by a democratic legislature, and as long as such choice architectures are sufficiently transparent and contestable,” she says.

Crime-mapping is another example of AI entering the administration of justice through policing, but “this may displace the allocation of policing efforts to what the ‘tech’ believes to be the correct focus”.

“Crime-mapping depends on data, which may be skewed – and the most relevant data may actually be missing.

“Blind trust in such systems may undermine effective policing (as officers may remain stuck in what the data allows them to see); it may also demotivate street-level policing, as officers may be forced to always check the databases and the algorithms instead of training their own intuition.”

Professor Hildebrandt says AI, if done well, may contribute to proper compliance with data protection law.

“Or it may undermine the objectives of the law, by turning it into a set of check-boxes, where the real impact is circumvented by way of cleverly designed pseudo-compliance.”

Find out more about The Magic of Data Driven Regulation – An evening with Mireille Hildebrandt.