Abstract
Systems employing biometric technologies have become ubiquitous in personal, commercial, and governmental identity management applications. Both cooperative (e.g., access control) and non-cooperative (e.g., surveillance and forensics) systems have benefited from biometrics. Such systems rely on the uniqueness of certain biological or behavioural characteristics of human beings, which enables reliable recognition of individuals by automated algorithms. Recently, however, concerns have been raised in the public and in the scientific community that automated decision systems (including biometric systems) exhibit systematic bias. Facial recognition algorithms in particular have therefore been labelled "racist" or "biased" by media outlets, non-governmental organizations, and researchers.
Kolberg, J. Fairness von Biometrischen Systemen. Datenschutz Datensich 47, 15–21 (2023). https://doi.org/10.1007/s11623-022-1709-1