AI helps police to predict crimes and reveal false claims

This is a guest contribution. This website may or may not agree with the views and opinions expressed in it.

Fighting crime while enduring budget cuts isn’t how most police officers imagined their job when they signed up for it. Their salaries are often low, even in developed countries such as the US and Germany, and the number of officers is stagnating; in the UK, it has even dropped by around 20,000 over the last eight years alone. Yet there’s no lack of crimes to solve. According to an FBI report, there were 17,284 murders across the US in 2017, while Brazil broke its own record as the number of murders last year hit 63,880. And the nature of crime is changing, too, as criminals turn to the internet to commit sophisticated offences such as identity theft, cyber-attacks, and financial fraud.

Clearly, the police need to find a way to use their limited resources efficiently to solve and even prevent crimes. One way to do that is to rely on artificial intelligence (AI) tools that use past data to predict where crime might occur and who might commit it (think Minority Report, but not as advanced). This would allow the police to act where they’re needed most rather than waste precious resources. But the approach is more complicated than it sounds, and predictive policing algorithms such as PredPol and the National Data Analytics Solution (NDAS) have been met with both praise and criticism. Police chiefs praise the software’s efficiency, while analysts point out that it could put even more pressure on already heavily policed neighborhoods and groups. Only AI tools such as VeriPol, which detects false robbery reports, have managed to avoid harsh criticism.

Being one step ahead of criminals

A big screen displaying a city map with red boxes marking the areas where the next crime might occur isn’t a scene from a Hollywood blockbuster. Rather, that’s how more than 50 police departments in the US operate since purchasing PredPol, an AI algorithm that attempts to predict where crime will happen next. The software helps law enforcement agencies by finding patterns in historical data such as crime type, location, date, and time, and pinpointing areas as small as 500×500 feet that are the most crime-prone. Police officers can then spend more time patrolling those neighborhoods and potentially prevent someone from breaking the law.
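PredPol’s actual model is proprietary, so the following is only a minimal sketch of the underlying idea of grid-based hotspot mapping: bin past incidents into 500×500-foot cells and rank the cells by how many recent incidents each contains. All incident data and function names below are hypothetical, and the real system is far more sophisticated.

```python
from collections import Counter
from datetime import datetime

CELL_SIZE_FT = 500  # PredPol-style cell size mentioned above

# Hypothetical incident records: (x_ft, y_ft, timestamp, crime_type)
incidents = [
    (1240, 3980, datetime(2019, 3, 1, 22, 15), "burglary"),
    (1310, 4020, datetime(2019, 3, 2, 1, 40), "burglary"),
    (5100, 800, datetime(2019, 3, 2, 14, 5), "vehicle theft"),
]

def cell_of(x_ft, y_ft, cell_size=CELL_SIZE_FT):
    """Map a location to the grid cell that contains it."""
    return (int(x_ft // cell_size), int(y_ft // cell_size))

def hotspot_ranking(incidents, since, top_n=3):
    """Rank grid cells by the number of incidents recorded since `since`."""
    counts = Counter(
        cell_of(x, y) for x, y, ts, _ in incidents if ts >= since
    )
    return counts.most_common(top_n)

# Cells with the most recent incidents become suggested patrol areas.
print(hotspot_ranking(incidents, since=datetime(2019, 2, 1)))
```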

Image: fighting crime with AI. Photo by https://www.shutterstock.com/g/burdun+iliya

Across the Atlantic, the British police are developing an even more ambitious algorithm, the National Data Analytics Solution (NDAS), which is supposed to identify the individuals most likely to commit a crime. These people won’t be arrested just because the software flagged them; instead, they’ll be offered counselling and other social services designed to prevent them from hurting someone or stealing something. NDAS was trained on data such as arrest records, social media data, and social connections. The algorithm identified around 1,400 indicators that help predict who might commit a crime, around 30 of which are particularly insightful, such as “the number of crimes an individual had committed with the help of others and the number of crimes committed by people in that individual’s social group.”
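NDAS’s model and training data aren’t public, so the sketch below only shows the general shape of individual-level risk scoring: train a classifier on per-person indicators, score new individuals, and inspect which indicators carry the most signal. The feature names, figures, and labels are entirely made up for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-person indicators, loosely echoing those named above
# (co-offending counts, crimes within a person's social group).
feature_names = [
    "past_arrests",
    "crimes_with_accomplices",
    "crimes_in_social_group",
    "months_since_last_incident",
]

# Toy training data: rows are individuals, labels mark whether a later
# offence was recorded (all values invented for illustration).
X = np.array([
    [0, 0, 1, 48],
    [3, 2, 5, 6],
    [1, 0, 2, 24],
    [5, 4, 9, 2],
    [0, 1, 0, 36],
    [4, 3, 7, 3],
])
y = np.array([0, 1, 0, 1, 0, 1])

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Risk score for a new (hypothetical) individual, plus the indicators the
# model leaned on most -- loosely analogous to NDAS's "insightful" indicators.
print(model.predict_proba([[2, 2, 4, 12]])[0][1])
for name, importance in sorted(
    zip(feature_names, model.feature_importances_), key=lambda t: -t[1]
):
    print(f"{name}: {importance:.2f}")
```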

However, ‘street crime’ is just one small part of what the police have to deal with, as white-collar crime committed by wealthy individuals remains a pervasive, albeit hidden, threat. The White Collar Crime Early Warning System (WCCEWS) is AI software that sheds some light on that issue by highlighting the high-risk zones in which financial crime is most likely to occur. The algorithm was trained on the locations of financial crime incidents in the US and other indicators such as the locations of investment advisors, the geographical distribution of liquor licenses, and the density of tax-exempt organizations. Unsurprisingly, areas such as downtown Manhattan, where some of the biggest financial institutions in the US are located, are flagged as high-risk areas for financial crime. And although this software might have limited application in the day-to-day investigations of law enforcement agencies, it’s surely a sign of things to come.
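The published descriptions of WCCEWS name its inputs rather than its method, so here is a deliberately crude sketch of covariate-based risk mapping: combine past incident counts with area-level indicators into a weighted score per zone. The zones, figures, and weights are invented for illustration only.

```python
# Hypothetical per-zone indicators, echoing the kinds of inputs named above.
zones = {
    "downtown_manhattan": {"past_incidents": 40, "investment_advisors": 120, "tax_exempt_orgs": 30},
    "outer_borough": {"past_incidents": 5, "investment_advisors": 8, "tax_exempt_orgs": 12},
}

# Invented weights; a real system would learn these from data.
weights = {"past_incidents": 0.6, "investment_advisors": 0.3, "tax_exempt_orgs": 0.1}

def risk_score(features, weights):
    """Weighted sum of area-level indicators; crude, but shows the idea."""
    return sum(weights[k] * features[k] for k in weights)

# Zones ranked from highest to lowest estimated financial-crime risk.
for zone, features in sorted(zones.items(), key=lambda kv: -risk_score(kv[1], weights)):
    print(zone, round(risk_score(features, weights), 1))
```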

Detecting when people lie

Meanwhile, VeriPol has already made a difference in the daily operations of the Spanish police. The algorithm, developed by researchers at Cardiff University and the Charles III University of Madrid, uses text analysis and machine learning to spot fake robbery reports. It was trained on over 1,000 false claims and finds patterns in reports by analyzing the occurrence of certain adjectives, verbs, nouns, and other parts of speech. For instance, references to iPhones and Samsung phones were correlated with false claims, while bicycles and necklaces usually indicated true reports. The AI also discovered that false robbery claims tend to be shorter statements that focus on the stolen property, lack witnesses and evidence, and don’t offer precise details about the attacker. VeriPol was tested in Murcia, where it discovered 25 false claims in a single week, while the previous average monthly rate of false report detection in the city was 3.33. In the hope that it will deter people from making false claims, the software is now being rolled out to police departments across Spain, helping officers close cases faster and stop investigating made-up events.
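VeriPol’s exact feature set and model aren’t reproduced here, but the general shape of such a text-classification pipeline can be sketched with a simple bag-of-words classifier. The example reports and labels below are made up; a real system would use richer linguistic features, such as the part-of-speech counts mentioned above, and be trained and validated on actual police reports.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training reports (invented); 1 = false claim, 0 = genuine report.
reports = [
    "My iPhone was stolen from behind, I did not see the attacker.",
    "Someone grabbed my Samsung phone, no witnesses were around.",
    "A man in a red jacket pushed me near the market at 6 pm and took my bicycle; two shopkeepers saw it.",
    "My necklace was torn off by a tall man with a scar; my neighbour chased him down the street.",
]
labels = [1, 1, 0, 0]

# Bag-of-words features feeding a simple linear classifier.
model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(reports, labels)

# Score a new (hypothetical) report: values near 1 suggest a likely false claim.
new_report = "My phone was taken, I didn't see who did it and there were no witnesses."
print(model.predict_proba([new_report])[0][1])
```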

Biased data leads to biased decisions

But not everyone is impressed with the promise of AI to predict crime. For instance, John Hollywood, an analyst at the policy research institution RAND Corporation, claims that new crime prediction technologies could “help improve deployment decisions, but [they’re] far from the popular hype of a computer telling officers where they can go to pick up criminals in the act.” By contrast, Steve Clark, the deputy chief of the Santa Cruz Police Department in California, argues that “We found that the model was just incredibly accurate at predicting the times and locations where these crimes were likely to occur.”

Then there’s the issue of AI algorithms relying on data gathered through past policing strategies “that over-criminalize certain neighbourhoods”. AI could reinforce that bias and put even more pressure and heavier policing on certain areas and ethnic, religious, and racial groups. The situation is even more complicated with NDAS. Should the authorities intervene in the lives of individuals and treat them for crimes they haven’t even committed? After all, would those individuals have committed a crime at all if the police hadn’t acted? And much like PredPol, NDAS could also reinforce bias against certain groups and neighborhoods.

One way to start solving these challenges is by showing the public how data is gathered and used in predictive policing software. Also, Nikita Malik, the director of the Centre on Radicalisation & Terrorism at the Henry Jackson Society, a non-profit research organization, argues that there’s a need to “create an international commission on the regulation of AI when it comes to crime.” Such a body would set standards in this field and find ways to reduce the number of biased decisions made by AI.

The rise of a long-term trend

Crime rarely happens at random, and there are patterns in what criminals do that AI can recognize even if humans miss them. Police can use this information to act preemptively, but despite big promises, predictive policing is still a far cry from being a tool that will help law enforcement agencies to overcome staff shortages and budget cuts. Nevertheless, we’re witnessing the rise of a trend that could make our cities safer if issues such as biased data and limited efficiency are resolved.


About the author: Richard van Hooijdonk

International keynote speaker, trendwatcher and futurist Richard van Hooijdonk offers inspiring lectures on how technology impacts the way we live, work and do business. Over 420,000 people have already attended his renowned inspiration sessions, in the Netherlands as well as abroad. He works together with RTL television and presents the weekly radio program ‘Mindshift’ on BNR news radio. Van Hooijdonk is also a guest lecturer at Nyenrode and Erasmus Universities.



Sources:

Baraniuk, Chris, https://www.newscientist.com/article/2186512-exclusive-uk-police-wants-ai-to-stop-violent-crime-before-it-happens/

Clifton, Brian, https://www.researchgate.net/publication/316505545_Predicting_Financial_Crime_Augmenting_the_Predictive_Policing_Arsenal

Dodd, Vikram, https://www.theguardian.com/uk-news/2018/apr/08/police-cuts-likely-contributed-to-rise-in-violent-leaked-report-reveals

Embury-Dennis, Tom, https://www.independent.co.uk/news/world/americas/brazil-murder-rate-record-homicides-killings-rio-de-janeiro-police-a8485656.html

Kaste, Martin, https://www.npr.org/2018/12/12/675359781/americas-growing-cop-shortage

Kern, Vera, https://www.dw.com/en/amid-controversy-germanys-police-struggle-to-find-recruits/a-41306017

Malik, Nikita, https://www.forbes.com/sites/nikitamalik/2018/10/29/the-problems-with-using-artificial-intelligence-and-facial-recognition-in-policing/#23a00f1f4f83

https://www.predpol.com/law-enforcement/#predPolicing

Smith, Patrick, https://www.npr.org/2018/01/22/579778555/what-happens-when-suburban-police-departments-dont-have-enough-money

Thorp, Adam, https://chicago.suntimes.com/news/chicago-murder-rate-national-statistics-fbi-report/

Vaas, Lisa, https://nakedsecurity.sophos.com/2018/11/01/robocops-ai-on-the-rise-in-policing-to-predict-crime-and-uncover-lies/

https://www.richardvanhooijdonk.com/en/blog/cybercrime-may-be-the-biggest-global-threat-of-2018/

Featured image by Gerd Altmann from Pixabay


 
