The ability to predict when, where and by whom crime will be committed sounds a lot like the movie Minority Report. But for law enforcement agencies around the world, the concept represents the evolution of policing practice, with many already implementing software to help forecast future criminal activity. 

While the technology might not be accurate enough to charge someone for a future crime, predictive policing does “take things to the next level”, says UNSW Law Professor Lyria Bennett Moses, Director of the Allens Hub for Technology, Law and Innovation. 

“The main distinction that makes predictive policing stand out is that it isn’t intuition anymore. It’s got deliberate modelling that pushes historic data into the future and attempts to predict it, and makes it look, in a sense, quantifiable … saying this is what’s likely to happen.” 

The software can range from Excel spreadsheets and social media analysis to sophisticated artificial intelligence, including machine learning that identifies crime hotspots. 

“Looking at crimes like burglary, one can create quite a useful predictive model because some areas have higher rates of burglary than others and there are patterns; [however] it works really badly for kidnapping, or domestic violence, in the latter case because so much of the crime is unreported,” she says. 
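At its simplest, that kind of place-based model amounts to little more than counting past incidents by location and projecting the pattern forward. The short sketch below is purely illustrative, with invented data and names rather than anything drawn from a real product, but it captures the underlying idea: rank map grid cells by historical burglary reports and treat the highest-ranking cells as predicted hotspots.

```python
# Toy hotspot model: an illustration of counting-based prediction only,
# not any vendor's product. Data and cell names are invented.
from collections import Counter

# Hypothetical historical reports as (grid_cell, offence) pairs.
historical_reports = [
    ("cell_A", "burglary"), ("cell_A", "burglary"), ("cell_B", "burglary"),
    ("cell_A", "burglary"), ("cell_C", "burglary"), ("cell_B", "burglary"),
]

def predict_hotspots(reports, top_n=2):
    """Rank grid cells by historical burglary counts and flag the top ones."""
    counts = Counter(cell for cell, offence in reports if offence == "burglary")
    return [cell for cell, _ in counts.most_common(top_n)]

print(predict_hotspots(historical_reports))  # ['cell_A', 'cell_B']
```

The same counting logic that makes burglary look predictable is exactly what breaks down for offences such as domestic violence, where so little is reported that the historical counts say little about where the crime actually occurs.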

Besides the promise of forecasting crime, Professor Bennett Moses says there are many reasons predictive policing software is an attractive proposition to police management. 

“One is efficiency; they might have fewer resources, fewer cops and this helps them narrow down where they should police,” she says. “It can save on having your own intelligence experts in-house because a lot of the software purports to do that for you.”

Additionally, the idea that “we’re going to be preventative rather than responsive, [that] we’re going to get ahead of crime … can make it quite attractive to police management,” she says.

Lack of demonstrated effectiveness

Professor Bennett Moses says that while she expects further adoption of such technology, it is not likely to be well implemented or to successfully reduce crime. 

“What you do have is predictive models making probabilistic predictions, which can be useful, but that’s not the sales pitch,” she says. “There is a lack of demonstrated effectiveness … that when put into practice, in a real-world police department, there was any real impact on crime.” 

“But what could happen is that police are so anxious about a crime that’s about to happen in that spot that they stop and frisk everybody who’s walking through the ‘crime likely to happen here’ intersection,” she says. 

While most software simply identifies crime geographically, some offer a more targeted approach and identify at-risk individuals by creating profiles from historical data, which Professor Bennett Moses says can be problematic. 

“[It can] look an awful lot like having a list of usual suspects … and we can see what happens when people think the police are picking on them, a particular subpopulation, as opposed to other populations.”

“There is also risk it will ‘feed’ that feeling of being targeted, which could have a negative impact on law enforcement overall if people feel that they can’t trust the police.”

Reinforcing historical bias

Another issue lies in the feedback loops created by bias in historical data, which risk further oppressing communities rather than protecting them. 

“Using a US example, if you go to a police database in Ferguson, you’re going to see a lot more offences being committed by people in African American neighbourhoods because more of the crime that happens in those places is policed.

“If you go to police databases in Australia and look at offensive language crimes, it looks like it is only Indigenous people who swear because there isn’t anyone else who gets charged for it.”

“So you have a bias there to start with in the data, and any predictive system is going to be based on historical data, and then that feeds back into the system.”
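The dynamic Professor Bennett Moses describes can be sketched in a few lines of code. The toy simulation below is an assumption-laden illustration, not a model of any real police system: two neighbourhoods offend at identical rates, but an initial bias in the recorded data keeps sending patrols back to the same area, and because offences are only recorded where police are present, the recorded gap widens year after year.

```python
# Toy simulation of the feedback loop described above. All figures are invented.
true_offending = {"neighbourhood_A": 10, "neighbourhood_B": 10}  # equal in reality
recorded = {"neighbourhood_A": 12, "neighbourhood_B": 8}          # biased history

for year in range(5):
    # "Predictive" step: patrol the area with the most recorded offences so far.
    target = max(recorded, key=recorded.get)
    # Only offences in the patrolled area are observed and added to the record.
    recorded[target] += true_offending[target]

print(recorded)  # {'neighbourhood_A': 62, 'neighbourhood_B': 8} - the gap widens
```

Nothing in the simulated world distinguishes the two neighbourhoods except the starting data, yet after a few iterations the record appears to confirm the original bias.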

Because most of the software used for predictive policing is proprietary, it is hard for both the public and the police to understand how these systems really work, she says. 

“It goes back to any automated system – it should be transparent, it should be fair, it should be accountable, it should be evaluated and tested, and the predictive policing software industry should be doing all of those things, but most of them are not doing any of them.” 

Professor Bennett Moses says there also needs to be significant improvements in oversight for law enforcement agencies who might trial or use predictive policing software in Australia. 

“There’s nothing specific in the law that says the police can use software to make predictions, but there’s also no law saying they can’t. The idea of a program running in the background which takes in diverse data on us … the rules on data sharing are jurisdiction by jurisdiction, and some don’t even have proper privacy legislation.” 

“So while there is a lot of mystique around it… I don’t think it’s understood as a fully implemented system,” she says. “At worst, you have the capacity to create more problems.” 


Ben Knight