Abstract
Machine learning has become a popular tool in a variety of criminal justice applications, including sentencing and policing. Media coverage has drawn attention to the possibility that predictive policing systems cause disparate impacts and exacerbate social injustices. However, there is little academic research on the importance of fairness in machine learning applications in policing. Although prior research has shown that machine learning models can handle some tasks efficiently, they are susceptible to replicating the systemic bias of previous human decision-makers. While there is much research on fair machine learning in general, fair machine learning techniques need to be investigated as they pertain to predictive policing specifically. We therefore evaluate existing publications at the intersection of fairness in machine learning and predictive policing to arrive at a set of standards for fair predictive policing. We also review evaluations of ML applications in criminal justice and potential techniques for improving these technologies going forward. We urge that the growing literature on fairness in ML be brought into conversation with the legal and social science concerns being raised about predictive policing. Lastly, in any area, including predictive policing, the pros and cons of the technology must be weighed holistically to determine whether and how it should be used.
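To make the notion of disparate impact concrete, the sketch below (not drawn from the paper) computes the disparate impact ratio, one commonly used group-fairness measure in the fairness-in-ML literature, on entirely hypothetical model outputs. The function name, data values, and group encoding are illustrative assumptions only.

```python
# Illustrative sketch (not from the paper): computing the disparate impact
# ratio on hypothetical model outputs. All data below are made up.
import numpy as np

def disparate_impact_ratio(y_pred, group):
    """Ratio of positive-prediction rates for the unprivileged (group == 0)
    versus privileged (group == 1) population, where y_pred == 1 is taken
    as the favorable outcome. Values near 1 indicate parity; the common
    "80% rule" flags ratios below 0.8."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_unpriv = y_pred[group == 0].mean()  # positive rate, unprivileged group
    rate_priv = y_pred[group == 1].mean()    # positive rate, privileged group
    return rate_unpriv / rate_priv

# Hypothetical binary predictions and a hypothetical protected-group indicator.
y_pred = [1, 0, 1, 1, 0, 0, 1, 0, 0, 0]
group  = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
print(f"Disparate impact ratio: {disparate_impact_ratio(y_pred, group):.2f}")
```

This is only one of many candidate fairness measures; which measure (if any) is appropriate for a policing context is precisely the kind of question the surveyed literature debates.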
Change history
28 August 2021
A Correction to this paper has been published: https://doi.org/10.1007/s10506-021-09299-z
Funding
This material is based upon work supported by the National Science Foundation under Grant No. 1917712.
About this article
Cite this article
Alikhademi, K., Drobina, E., Prioleau, D. et al. A review of predictive policing from the perspective of fairness. Artif Intell Law 30, 1–17 (2022). https://doi.org/10.1007/s10506-021-09286-4