{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2024,11,19]],"date-time":"2024-11-19T18:59:43Z","timestamp":1732042783949,"version":"3.28.0"},"publisher-location":"New York, NY, USA","reference-count":120,"publisher":"ACM","content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2023,6,12]]},"DOI":"10.1145\/3593013.3594039","type":"proceedings-article","created":{"date-parts":[[2023,6,12]],"date-time":"2023-06-12T10:40:46Z","timestamp":1686566446000},"page":"736-752","update-policy":"http:\/\/dx.doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":2,"title":["Add-Remove-or-Relabel: Practitioner-Friendly Bias Mitigation via Influential Fairness"],"prefix":"10.1145","author":[{"ORCID":"http:\/\/orcid.org\/0000-0002-6465-9192","authenticated-orcid":false,"given":"Brianna","family":"Richardson","sequence":"first","affiliation":[{"name":"University of Florida, USA"}]},{"ORCID":"http:\/\/orcid.org\/0000-0003-4435-0486","authenticated-orcid":false,"given":"Prasanna","family":"Sattigeri","sequence":"additional","affiliation":[{"name":"IBM, USA"}]},{"ORCID":"http:\/\/orcid.org\/0000-0002-6510-1537","authenticated-orcid":false,"given":"Dennis","family":"Wei","sequence":"additional","affiliation":[{"name":"IBM, USA"}]},{"ORCID":"http:\/\/orcid.org\/0000-0002-6021-5930","authenticated-orcid":false,"given":"Karthikeyan Natesan","family":"Ramamurthy","sequence":"additional","affiliation":[{"name":"IBM, USA"}]},{"ORCID":"http:\/\/orcid.org\/0000-0002-7376-5536","authenticated-orcid":false,"given":"Kush","family":"Varshney","sequence":"additional","affiliation":[{"name":"IBM, USA"}]},{"ORCID":"http:\/\/orcid.org\/0000-0002-3579-1450","authenticated-orcid":false,"given":"Amit","family":"Dhurandhar","sequence":"additional","affiliation":[{"name":"IBM, 
USA"}]},{"ORCID":"http:\/\/orcid.org\/0000-0002-6801-2206","authenticated-orcid":false,"given":"Juan E.","family":"Gilbert","sequence":"additional","affiliation":[{"name":"University of Florida, USA"}]}],"member":"320","published-online":{"date-parts":[[2023,6,12]]},"reference":[{"key":"e_1_3_2_1_1_1","doi-asserted-by":"publisher","DOI":"10.1007\/S10115-017-1116-3"},{"key":"e_1_3_2_1_2_1","volume-title":"Proceedings of the International Conference on Machine Learning. 102\u2013119","author":"Agarwal Alekh","year":"2018","unstructured":"Alekh Agarwal , Alina Beygelzimer , Miroslav Dud\u00edk , John Langford , and Hanna Wallach . 2018 . A Reductions Approach to Fair Classification . In Proceedings of the International Conference on Machine Learning. 102\u2013119 . arxiv:1803.02453 Alekh Agarwal, Alina Beygelzimer, Miroslav Dud\u00edk, John Langford, and Hanna Wallach. 2018. A Reductions Approach to Fair Classification. In Proceedings of the International Conference on Machine Learning. 102\u2013119. arxiv:1803.02453"},{"key":"e_1_3_2_1_3_1","volume-title":"Proceedings of the International Conference on Machine Learning. 120\u2013129","author":"Agarwal Alekh","year":"2019","unstructured":"Alekh Agarwal , Miroslav Dudik , and Zhiwei Steven Wu . 2019 . Fair Regression: Quantitative Definitions and Reduction-Based Algorithms . In Proceedings of the International Conference on Machine Learning. 120\u2013129 . Alekh Agarwal, Miroslav Dudik, and Zhiwei Steven Wu. 2019. Fair Regression: Quantitative Definitions and Reduction-Based Algorithms. In Proceedings of the International Conference on Machine Learning. 120\u2013129."},{"key":"e_1_3_2_1_4_1","volume-title":"36th International Conference on Machine Learning. PMLR, 191\u2013201","author":"Alaa Ahmed M","year":"2019","unstructured":"Ahmed M Alaa and Mihaela Van Der Schaar . 2019 . Validating Causal Inference Models via Influence Functions . In 36th International Conference on Machine Learning. PMLR, 191\u2013201 . 
https:\/\/proceedings.mlr.press\/v97\/alaa19a.html Ahmed M Alaa and Mihaela Van Der Schaar. 2019. Validating Causal Inference Models via Influence Functions. In 36th International Conference on Machine Learning. PMLR, 191\u2013201. https:\/\/proceedings.mlr.press\/v97\/alaa19a.html"},{"key":"e_1_3_2_1_6_1","volume-title":"Then What is the Question? (sep","author":"Bae Juhan","year":"2022","unstructured":"Juhan Bae , Nathan Ng , Alston Lo , Marzyeh Ghassemi , and Roger Grosse . 2022. If Influence Functions are the Answer , Then What is the Question? (sep 2022 ). https:\/\/doi.org\/10.48550\/arxiv.2209.05364 arxiv:2209.05364 10.48550\/arxiv.2209.05364 Juhan Bae, Nathan Ng, Alston Lo, Marzyeh Ghassemi, and Roger Grosse. 2022. If Influence Functions are the Answer, Then What is the Question? (sep 2022). https:\/\/doi.org\/10.48550\/arxiv.2209.05364 arxiv:2209.05364"},{"key":"e_1_3_2_1_7_1","volume-title":"Themis-ml: A Fairness-aware Machine Learning Interface for End-to-end Discrimination Discovery and Mitigation. In Bloomberg Data for Good Exchange Conference. arxiv:1710","author":"Bantilan Niels","year":"2017","unstructured":"Niels Bantilan . 2017 . Themis-ml: A Fairness-aware Machine Learning Interface for End-to-end Discrimination Discovery and Mitigation. In Bloomberg Data for Good Exchange Conference. arxiv:1710 .06921v1 Niels Bantilan. 2017. Themis-ml: A Fairness-aware Machine Learning Interface for End-to-end Discrimination Discovery and Mitigation. In Bloomberg Data for Good Exchange Conference. arxiv:1710.06921v1"},{"key":"e_1_3_2_1_8_1","unstructured":"Solon Barocas Moritz Hardt and Arvind Narayanan. 2019. Fairness and machine learning. fairmlbook.org. https:\/\/fairmlbook.org\/index.html Solon Barocas Moritz Hardt and Arvind Narayanan. 2019. Fairness and machine learning. fairmlbook.org. https:\/\/fairmlbook.org\/index.html"},{"key":"e_1_3_2_1_9_1","volume-title":"Selbst","author":"Barocas Solon","year":"2016","unstructured":"Solon Barocas and Andrew D . 
Selbst . 2016 . Big Data\u2019s Disparate Impact. SSRN Electronic Journal 104 (mar 2016), 671\u2013732. https:\/\/doi.org\/10.2139\/ssrn.2477899 10.2139\/ssrn.2477899 Solon Barocas and Andrew D. Selbst. 2016. Big Data\u2019s Disparate Impact. SSRN Electronic Journal 104 (mar 2016), 671\u2013732. https:\/\/doi.org\/10.2139\/ssrn.2477899"},{"key":"e_1_3_2_1_10_1","volume-title":"Proceedings of Machine Learning Research 108 (mar 2020","author":"Barshan Elnaz","year":"2020","unstructured":"Elnaz Barshan , Marc-Etienne Brunet , and Gintare Karolina Dziugaite . 2020 . RelatIF: Identifying Explanatory Training Examples via Relative Influence . Proceedings of Machine Learning Research 108 (mar 2020 ), 26\u201328. https:\/\/doi.org\/10.48550\/arxiv.2003.11630 arxiv:2003.11630 10.48550\/arxiv.2003.11630 Elnaz Barshan, Marc-Etienne Brunet, and Gintare Karolina Dziugaite. 2020. RelatIF: Identifying Explanatory Training Examples via Relative Influence. Proceedings of Machine Learning Research 108 (mar 2020), 26\u201328. https:\/\/doi.org\/10.48550\/arxiv.2003.11630 arxiv:2003.11630"},{"key":"e_1_3_2_1_11_1","volume-title":"Influence Functions in Deep Learning Are Fragile. (jun","author":"Basu Samyadeep","year":"2020","unstructured":"Samyadeep Basu , Phillip Pope , and Soheil Feizi . 2020. Influence Functions in Deep Learning Are Fragile. (jun 2020 ). https:\/\/doi.org\/10.48550\/arxiv.2006.14651 arxiv:2006.14651 10.48550\/arxiv.2006.14651 Samyadeep Basu, Phillip Pope, and Soheil Feizi. 2020. Influence Functions in Deep Learning Are Fragile. (jun 2020). https:\/\/doi.org\/10.48550\/arxiv.2006.14651 arxiv:2006.14651"},{"key":"e_1_3_2_1_12_1","volume-title":"On Second-Order Group Influence Functions for Black-Box Predictions. In ICML\u201920: Proceedings of the 37th International Conference on Machine Learning. 715\u2013724","author":"Basu Samyadeep","year":"2020","unstructured":"Samyadeep Basu , Xuchen You , and Soheil Feizi . 2020 . 
On Second-Order Group Influence Functions for Black-Box Predictions. In ICML\u201920: Proceedings of the 37th International Conference on Machine Learning. 715\u2013724 . https:\/\/doi.org\/10.5555\/3524938 10.5555\/3524938 Samyadeep Basu, Xuchen You, and Soheil Feizi. 2020. On Second-Order Group Influence Functions for Black-Box Predictions. In ICML\u201920: Proceedings of the 37th International Conference on Machine Learning. 715\u2013724. https:\/\/doi.org\/10.5555\/3524938"},{"key":"e_1_3_2_1_13_1","doi-asserted-by":"publisher","DOI":"10.1162\/tacl_a_00041"},{"volume-title":"Proceedings of Kdd Cup and Workshop.","author":"Bennett J.","key":"e_1_3_2_1_14_1","unstructured":"J. Bennett and S. Lanning . 2007. The Netflix Prize . In Proceedings of Kdd Cup and Workshop. J. Bennett and S. Lanning. 2007. The Netflix Prize. In Proceedings of Kdd Cup and Workshop."},{"key":"e_1_3_2_1_15_1","doi-asserted-by":"publisher","DOI":"10.5555\/3157382.3157584"},{"key":"e_1_3_2_1_16_1","volume-title":"Machine Unlearning. Proceedings - IEEE Symposium on Security and Privacy 2021-May (dec 2019","author":"Bourtoule Lucas","year":"2019","unstructured":"Lucas Bourtoule , Varun Chandrasekaran , Christopher A. Choquette-Choo , Hengrui Jia , Adelin Travers , Baiwu Zhang , David Lie , and Nicolas Papernot . 2019 . Machine Unlearning. Proceedings - IEEE Symposium on Security and Privacy 2021-May (dec 2019 ), 141\u2013159. https:\/\/doi.org\/10.48550\/arxiv.1912.03817 arxiv:1912.03817 10.48550\/arxiv.1912.03817 Lucas Bourtoule, Varun Chandrasekaran, Christopher A. Choquette-Choo, Hengrui Jia, Adelin Travers, Baiwu Zhang, David Lie, and Nicolas Papernot. 2019. Machine Unlearning. Proceedings - IEEE Symposium on Security and Privacy 2021-May (dec 2019), 141\u2013159. https:\/\/doi.org\/10.48550\/arxiv.1912.03817 arxiv:1912.03817"},{"key":"e_1_3_2_1_17_1","volume-title":"Individually Fair Rankings. 
In International Conference on Learning Representations.","author":"Bower Amanda","year":"2021","unstructured":"Amanda Bower , Hamid Eftekhari , Mikhail Yurochkin , and Yuekai Sun . 2021 . Individually Fair Rankings. In International Conference on Learning Representations. Amanda Bower, Hamid Eftekhari, Mikhail Yurochkin, and Yuekai Sun. 2021. Individually Fair Rankings. In International Conference on Learning Representations."},{"key":"e_1_3_2_1_18_1","volume-title":"Understanding the Origins of Bias in Word Embeddings. 36th International Conference on Machine Learning, ICML 2019 2019-June (oct 2018","author":"Brunet Marc Etienne","year":"2018","unstructured":"Marc Etienne Brunet , Colleen Alkalay-Houlihan , Ashton Anderson , and Richard Zemel . 2018 . Understanding the Origins of Bias in Word Embeddings. 36th International Conference on Machine Learning, ICML 2019 2019-June (oct 2018 ), 1275\u20131294. https:\/\/doi.org\/10.48550\/arxiv.1810.03611 arxiv:1810.03611 10.48550\/arxiv.1810.03611 Marc Etienne Brunet, Colleen Alkalay-Houlihan, Ashton Anderson, and Richard Zemel. 2018. Understanding the Origins of Bias in Word Embeddings. 36th International Conference on Machine Learning, ICML 2019 2019-June (oct 2018), 1275\u20131294. https:\/\/doi.org\/10.48550\/arxiv.1810.03611 arxiv:1810.03611"},{"key":"e_1_3_2_1_19_1","volume-title":"Langton","author":"Buil-Gil David","year":"2021","unstructured":"David Buil-Gil , Angelo Moretti , and Samuel H . Langton . 2021 . The accuracy of crime statistics: assessing the impact of police data bias on geographic crime analysis. Journal of Experimental Criminology ( mar 2021), 1\u201327. https:\/\/doi.org\/10.1007\/S11292-021-09457-Y\/TABLES\/9 10.1007\/S11292-021-09457-Y David Buil-Gil, Angelo Moretti, and Samuel H. Langton. 2021. The accuracy of crime statistics: assessing the impact of police data bias on geographic crime analysis. Journal of Experimental Criminology (mar 2021), 1\u201327. 
https:\/\/doi.org\/10.1007\/S11292-021-09457-Y\/TABLES\/9"},{"key":"e_1_3_2_1_20_1","first-page":"1","article-title":"Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification *","volume":"81","author":"Buolamwini Joy","year":"2018","unstructured":"Joy Buolamwini and Timnit Gebru . 2018 . Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification * . In Proceedings of Machine Learning Research , Vol. 81. 1 \u2013 15 . Joy Buolamwini and Timnit Gebru. 2018. Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification *. In Proceedings of Machine Learning Research, Vol. 81. 1\u201315.","journal-title":"Proceedings of Machine Learning Research"},{"key":"e_1_3_2_1_21_1","doi-asserted-by":"crossref","DOI":"10.3386\/w28328","volume-title":"Machine Learning and Perceived Age Stereotypes in Job Ads: Evidence from an Experiment. (jan","author":"Burn Ian","year":"2021","unstructured":"Ian Burn , Daniel Firoozi , Daniel Ladd , and David Neumark . 2021. Machine Learning and Perceived Age Stereotypes in Job Ads: Evidence from an Experiment. (jan 2021 ). https:\/\/doi.org\/10.3386\/W28328 10.3386\/W28328 Ian Burn, Daniel Firoozi, Daniel Ladd, and David Neumark. 2021. Machine Learning and Perceived Age Stereotypes in Job Ads: Evidence from an Experiment. (jan 2021). https:\/\/doi.org\/10.3386\/W28328"},{"key":"e_1_3_2_1_22_1","doi-asserted-by":"publisher","DOI":"10.1007\/s10618-010-0190-x"},{"volume-title":"Studies in Applied Philosophy, Epistemology and Rational Ethics.","author":"Calders Toon","key":"e_1_3_2_1_23_1","unstructured":"Toon Calders and Indr\u0117 \u017dliobait\u0117 . 2013. Why unbiased computational processes can lead to discriminative decision procedures . In Studies in Applied Philosophy, Epistemology and Rational Ethics. Vol. 3 . Springer International Publishing , 43\u201357. 
https:\/\/doi.org\/10.1007\/978-3-642-30487-3_3 10.1007\/978-3-642-30487-3_3 Toon Calders and Indr\u0117 \u017dliobait\u0117. 2013. Why unbiased computational processes can lead to discriminative decision procedures. In Studies in Applied Philosophy, Epistemology and Rational Ethics. Vol. 3. Springer International Publishing, 43\u201357. https:\/\/doi.org\/10.1007\/978-3-642-30487-3_3"},{"key":"e_1_3_2_1_24_1","volume-title":"Fairness in Machine Learning: A Survey. (oct","author":"Caton Simon","year":"2020","unstructured":"Simon Caton and Christian Haas . 2020. Fairness in Machine Learning: A Survey. (oct 2020 ). https:\/\/doi.org\/10.48550\/arxiv.2010.04053 arxiv:2010.04053 10.48550\/arxiv.2010.04053 Simon Caton and Christian Haas. 2020. Fairness in Machine Learning: A Survey. (oct 2020). https:\/\/doi.org\/10.48550\/arxiv.2010.04053 arxiv:2010.04053"},{"key":"e_1_3_2_1_25_1","volume-title":"Vishnoi","author":"Celis L. Elisa","year":"2016","unstructured":"L. Elisa Celis , Amit Deshpande , Tarun Kathuria , and Nisheeth K . Vishnoi . 2016 . How to be Fair and Diverse ? (oct 2016). https:\/\/doi.org\/10.48550\/arxiv.1610.07183 arxiv:1610.07183 10.48550\/arxiv.1610.07183 L. Elisa Celis, Amit Deshpande, Tarun Kathuria, and Nisheeth K. Vishnoi. 2016. How to be Fair and Diverse? (oct 2016). https:\/\/doi.org\/10.48550\/arxiv.1610.07183 arxiv:1610.07183"},{"key":"e_1_3_2_1_26_1","doi-asserted-by":"crossref","first-page":"651","DOI":"10.1007\/978-981-13-1498-8_57","article-title":"Flight arrival delay prediction using gradient boosting classifier","volume":"813","author":"Chakrabarty Navoneel","year":"2019","unstructured":"Navoneel Chakrabarty , Tuhin Kundu , Sudipta Dandapat , Apurba Sarkar , and Dipak Kumar Kole . 2019 . Flight arrival delay prediction using gradient boosting classifier . Advances in Intelligent Systems and Computing 813 (2019), 651 \u2013 659 . 
https:\/\/doi.org\/10.1007\/978-981-13-1498-8_57\/COVER 10.1007\/978-981-13-1498-8_57 Navoneel Chakrabarty, Tuhin Kundu, Sudipta Dandapat, Apurba Sarkar, and Dipak Kumar Kole. 2019. Flight arrival delay prediction using gradient boosting classifier. Advances in Intelligent Systems and Computing 813 (2019), 651\u2013659. https:\/\/doi.org\/10.1007\/978-981-13-1498-8_57\/COVER","journal-title":"Advances in Intelligent Systems and Computing"},{"key":"e_1_3_2_1_27_1","volume-title":"Multi-Stage Influence Function. In 34th International Conference on Neural Information Processing Systems. 12732\u201312742","author":"Chen Hongge","year":"2020","unstructured":"Hongge Chen , Si Si , Yang Li , Ciprian Chelba , Sanjiv Kumar , Duane Boning , and Cho-Jui Hsieh . 2020 . Multi-Stage Influence Function. In 34th International Conference on Neural Information Processing Systems. 12732\u201312742 . https:\/\/doi.org\/10.5555\/3495724.3496792 10.5555\/3495724.3496792 Hongge Chen, Si Si, Yang Li, Ciprian Chelba, Sanjiv Kumar, Duane Boning, and Cho-Jui Hsieh. 2020. Multi-Stage Influence Function. In 34th International Conference on Neural Information Processing Systems. 12732\u201312742. https:\/\/doi.org\/10.5555\/3495724.3496792"},{"key":"e_1_3_2_1_28_1","volume-title":"Why Is My Classifier Discriminatory?Advances in Neural Information Processing Systems 2018-December (may","author":"Chen Irene Y.","year":"2018","unstructured":"Irene Y. Chen , Fredrik D. Johansson , and David Sontag . 2018. Why Is My Classifier Discriminatory?Advances in Neural Information Processing Systems 2018-December (may 2018 ), 3539\u20133550. https:\/\/doi.org\/10.48550\/arxiv.1805.12002 arxiv:1805.12002 10.48550\/arxiv.1805.12002 Irene Y. Chen, Fredrik D. Johansson, and David Sontag. 2018. Why Is My Classifier Discriminatory?Advances in Neural Information Processing Systems 2018-December (may 2018), 3539\u20133550. 
https:\/\/doi.org\/10.48550\/arxiv.1805.12002 arxiv:1805.12002"},{"key":"e_1_3_2_1_29_1","volume-title":"Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (jul","author":"Cheng Weiyu","year":"2019","unstructured":"Weiyu Cheng , Linpeng Huang , Yanyan Shen , and Yanmin Zhu . 2019 . Incorporating interpretability into latent factor models via fast influence analysis . Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (jul 2019), 885\u2013893. https:\/\/doi.org\/10.1145\/3292500.3330857 10.1145\/3292500.3330857 Weiyu Cheng, Linpeng Huang, Yanyan Shen, and Yanmin Zhu. 2019. Incorporating interpretability into latent factor models via fast influence analysis. Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (jul 2019), 885\u2013893. https:\/\/doi.org\/10.1145\/3292500.3330857"},{"key":"e_1_3_2_1_30_1","first-page":"4","article-title":"Characterizations of an Empirical Influence Function for Detecting Influential Cases in Regression","volume":"22","author":"Dennis Cook R.","year":"1980","unstructured":"R. Dennis Cook and Sanford Weisberg . 1980 . Characterizations of an Empirical Influence Function for Detecting Influential Cases in Regression . Technometrics 22 , 4 (nov 1980), 495. https:\/\/doi.org\/10.2307\/1268187 10.2307\/1268187 R. Dennis Cook and Sanford Weisberg. 1980. Characterizations of an Empirical Influence Function for Detecting Influential Cases in Regression. Technometrics 22, 4 (nov 1980), 495. https:\/\/doi.org\/10.2307\/1268187","journal-title":"Technometrics"},{"key":"e_1_3_2_1_31_1","doi-asserted-by":"publisher","DOI":"10.1145\/3306618.3314236"},{"key":"e_1_3_2_1_32_1","unstructured":"Bo Cowgill and Catherine Tucker. 2017. Algorithmic Bias : A Counterfactual Perspective. In NSF Trustworthy Algorithms. Bo Cowgill and Catherine Tucker. 2017. Algorithmic Bias : A Counterfactual Perspective. 
In NSF Trustworthy Algorithms."},{"key":"e_1_3_2_1_33_1","unstructured":"Kate Crawford. 2017. The Trouble with Bias. https:\/\/www.youtube.com\/watch?v=fMym_BKWQzk Kate Crawford. 2017. The Trouble with Bias. https:\/\/www.youtube.com\/watch?v=fMym_BKWQzk"},{"key":"e_1_3_2_1_34_1","unstructured":"Pietro G. Di Stefano James M. Hickey and Vlasios Vasileiou. 2020. Counterfactual fairness: removing direct effects through regularization. (2020). arxiv:2002.10774http:\/\/arxiv.org\/abs\/2002.10774 Pietro G. Di Stefano James M. Hickey and Vlasios Vasileiou. 2020. Counterfactual fairness: removing direct effects through regularization. (2020). arxiv:2002.10774http:\/\/arxiv.org\/abs\/2002.10774"},{"key":"e_1_3_2_1_35_1","volume-title":"31st International Conference on Machine Learning. 2016\u20132024","author":"Du Nan","year":"2014","unstructured":"Nan Du , Yingyu Liang , Maria-Florina Balcan , and Le Song . 2014 . Influence function learning in information diffusion networks | Proceedings of the 31st International Conference on International Conference on Machine Learning - Volume 32 . In 31st International Conference on Machine Learning. 2016\u20132024 . https:\/\/dl.acm.org\/doi\/abs\/10.5555\/3044805.3045117 Nan Du, Yingyu Liang, Maria-Florina Balcan, and Le Song. 2014. Influence function learning in information diffusion networks | Proceedings of the 31st International Conference on International Conference on Machine Learning - Volume 32. In 31st International Conference on Machine Learning. 2016\u20132024. https:\/\/dl.acm.org\/doi\/abs\/10.5555\/3044805.3045117"},{"key":"e_1_3_2_1_36_1","unstructured":"Dheeru Dua and Casey Graff. 2017. UCI Machine Learning Repository. http:\/\/archive.ics.uci.edu\/ml Dheeru Dua and Casey Graff. 2017. UCI Machine Learning Repository. 
http:\/\/archive.ics.uci.edu\/ml"},{"key":"e_1_3_2_1_39_1","first-page":"741","volume-title":"Entropy 2019","volume":"21","author":"Fitzsimons Jack","year":"2019","unstructured":"Jack Fitzsimons , Abdul Rahman Al Ali , Michael Osborne , and Stephen Roberts . 2019 . A General Framework for Fair Regression . Entropy 2019 , Vol. 21 , Page 741 21, 8 (jul 2019), 741. https:\/\/doi.org\/10.3390\/E21080741 arxiv:1810.05041 10.3390\/E21080741 Jack Fitzsimons, Abdul Rahman Al Ali, Michael Osborne, and Stephen Roberts. 2019. A General Framework for Fair Regression. Entropy 2019, Vol. 21, Page 741 21, 8 (jul 2019), 741. https:\/\/doi.org\/10.3390\/E21080741 arxiv:1810.05041"},{"key":"e_1_3_2_1_40_1","volume-title":"Potential Biases in Machine Learning Algorithms Using Electronic Health Record Data. JAMA internal medicine 178, 11 (nov","author":"Gianfrancesco Milena A.","year":"2018","unstructured":"Milena A. Gianfrancesco , Suzanne Tamang , Jinoos Yazdany , and Gabriela Schmajuk . 2018. Potential Biases in Machine Learning Algorithms Using Electronic Health Record Data. JAMA internal medicine 178, 11 (nov 2018 ), 1544. https:\/\/doi.org\/10.1001\/JAMAINTERNMED.2018.3763 10.1001\/JAMAINTERNMED.2018.3763 Milena A. Gianfrancesco, Suzanne Tamang, Jinoos Yazdany, and Gabriela Schmajuk. 2018. Potential Biases in Machine Learning Algorithms Using Electronic Health Record Data. JAMA internal medicine 178, 11 (nov 2018), 1544. https:\/\/doi.org\/10.1001\/JAMAINTERNMED.2018.3763"},{"key":"e_1_3_2_1_41_1","volume-title":"FASTIF: Scalable Influence Functions for Efficient Model Interpretation and Debugging. EMNLP 2021 - 2021 Conference on Empirical Methods in Natural Language Processing, Proceedings (2021","author":"Guo Han","year":"2021","unstructured":"Han Guo , Nazneen Fatema Rajani , Peter Hase , Mohit Bansal , and Caiming Xiong . 2021 . FASTIF: Scalable Influence Functions for Efficient Model Interpretation and Debugging. 
EMNLP 2021 - 2021 Conference on Empirical Methods in Natural Language Processing, Proceedings (2021 ), 10333\u201310350. https:\/\/doi.org\/10.18653\/V1\/2021.EMNLP-MAIN.808 arxiv:2012.15781 10.18653\/V1 Han Guo, Nazneen Fatema Rajani, Peter Hase, Mohit Bansal, and Caiming Xiong. 2021. FASTIF: Scalable Influence Functions for Efficient Model Interpretation and Debugging. EMNLP 2021 - 2021 Conference on Empirical Methods in Natural Language Processing, Proceedings (2021), 10333\u201310350. https:\/\/doi.org\/10.18653\/V1\/2021.EMNLP-MAIN.808 arxiv:2012.15781"},{"key":"e_1_3_2_1_42_1","volume-title":"Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining","volume":"2016","author":"Hajian Sara","year":"2016","unstructured":"Sara Hajian , Francesco Bonchi , and Carlos Castillo . 2016 . Algorithmic bias: From discrimination discovery to fairness-aware data mining . In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining , Vol. 13-17-August- 2016 . Association for Computing Machinery, New York, NY, USA, 2125\u20132126. https:\/\/doi.org\/10.1145\/2939672.2945386 10.1145\/2939672.2945386 Sara Hajian, Francesco Bonchi, and Carlos Castillo. 2016. Algorithmic bias: From discrimination discovery to fairness-aware data mining. In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Vol. 13-17-August-2016. Association for Computing Machinery, New York, NY, USA, 2125\u20132126. https:\/\/doi.org\/10.1145\/2939672.2945386"},{"key":"e_1_3_2_1_43_1","doi-asserted-by":"publisher","DOI":"10.1109\/TKDE.2012.72"},{"key":"e_1_3_2_1_44_1","doi-asserted-by":"publisher","DOI":"10.1080\/01621459.1974.10482962"},{"key":"e_1_3_2_1_45_1","volume-title":"Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics,. 
Association for Computational Linguistics (ACL), 5553\u20135563","author":"Han Xiaochuang","year":"2020","unstructured":"Xiaochuang Han , Byron C. Wallace , and Yulia Tsvetkov . 2020 . Explaining Black Box Predictions and Unveiling Data Artifacts through Influence Functions . In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics,. Association for Computational Linguistics (ACL), 5553\u20135563 . https:\/\/doi.org\/10.48550\/arxiv.2005.06676 arxiv:2005.06676 10.48550\/arxiv.2005.06676 Xiaochuang Han, Byron C. Wallace, and Yulia Tsvetkov. 2020. Explaining Black Box Predictions and Unveiling Data Artifacts through Influence Functions. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics,. Association for Computational Linguistics (ACL), 5553\u20135563. https:\/\/doi.org\/10.48550\/arxiv.2005.06676 arxiv:2005.06676"},{"key":"e_1_3_2_1_46_1","volume-title":"Data Cleansing for Models Trained with SGD. Advances in Neural Information Processing Systems 32 (jun","author":"Hara Satoshi","year":"2019","unstructured":"Satoshi Hara , Atsushi Nitanda , and Takanori Maehara . 2019. Data Cleansing for Models Trained with SGD. Advances in Neural Information Processing Systems 32 (jun 2019 ). https:\/\/doi.org\/10.48550\/arxiv.1906.08473 arxiv:1906.08473 10.48550\/arxiv.1906.08473 Satoshi Hara, Atsushi Nitanda, and Takanori Maehara. 2019. Data Cleansing for Models Trained with SGD. Advances in Neural Information Processing Systems 32 (jun 2019). https:\/\/doi.org\/10.48550\/arxiv.1906.08473 arxiv:1906.08473"},{"key":"e_1_3_2_1_48_1","volume-title":"Improving Fairness in Machine Learning Systems: What Do Industry Practitioners Need. In CHI Conference on Human Factors in Computing Systems. ACM. https:\/\/doi.org\/10","author":"Holstein Kenneth","year":"2019","unstructured":"Kenneth Holstein , Jennifer Wortman Vaughan , Hal Daum\u00e9 III , Miroslav Dud\u00edk , and Hanna Wallach . 2019 . 
Improving Fairness in Machine Learning Systems: What Do Industry Practitioners Need. In CHI Conference on Human Factors in Computing Systems. ACM. https:\/\/doi.org\/10 .1145\/3290605.3300830 arxiv:1812.05239v2 10.1145\/3290605.3300830 Kenneth Holstein, Jennifer Wortman Vaughan, Hal Daum\u00e9 III, Miroslav Dud\u00edk, and Hanna Wallach. 2019. Improving Fairness in Machine Learning Systems: What Do Industry Practitioners Need. In CHI Conference on Human Factors in Computing Systems. ACM. https:\/\/doi.org\/10.1145\/3290605.3300830 arxiv:1812.05239v2"},{"key":"e_1_3_2_1_49_1","volume-title":"Proceedings - 2019 IEEE International Conference on Big Data, Big Data 2019 (feb 2020","author":"Iosifidis Vasileios","year":"2020","unstructured":"Vasileios Iosifidis , Besnik Fetahu , and Eirini Ntoutsi . 2020 . FAE: A Fairness-Aware Ensemble Framework . Proceedings - 2019 IEEE International Conference on Big Data, Big Data 2019 (feb 2020 ), 1375\u20131380. https:\/\/doi.org\/10.48550\/arxiv.2002.00695 arxiv:2002.00695 10.48550\/arxiv.2002.00695 Vasileios Iosifidis, Besnik Fetahu, and Eirini Ntoutsi. 2020. FAE: A Fairness-Aware Ensemble Framework. Proceedings - 2019 IEEE International Conference on Big Data, Big Data 2019 (feb 2020), 1375\u20131380. https:\/\/doi.org\/10.48550\/arxiv.2002.00695 arxiv:2002.00695"},{"key":"e_1_3_2_1_50_1","unstructured":"Heinrich Jiang and Ofir Nachum. 2019. Identifying and Correcting Label Bias in Machine Learning. (2019). arxiv:1901.04966http:\/\/arxiv.org\/abs\/1901.04966 Heinrich Jiang and Ofir Nachum. 2019. Identifying and Correcting Label Bias in Machine Learning. (2019). 
arxiv:1901.04966http:\/\/arxiv.org\/abs\/1901.04966"},{"key":"e_1_3_2_1_51_1","doi-asserted-by":"publisher","DOI":"10.1038\/s42256-019-0088-2"},{"key":"e_1_3_2_1_52_1","first-page":"1","article-title":"Performance Evaluation of Regression Models for the Prediction of the COVID-19 Reproduction Rate","author":"Kaliappan Jayakumar","year":"2021","unstructured":"Jayakumar Kaliappan , Kathiravan Srinivasan , Saeed Mian Qaisar , Karpagam Sundararajan , Chuan Yu Chang , and C. Suganthan . 2021 . Performance Evaluation of Regression Models for the Prediction of the COVID-19 Reproduction Rate . Frontiers in Public Health 9 , September (2021), 1 \u2013 12 . https:\/\/doi.org\/10.3389\/fpubh.2021.729795 10.3389\/fpubh.2021.729795 Jayakumar Kaliappan, Kathiravan Srinivasan, Saeed Mian Qaisar, Karpagam Sundararajan, Chuan Yu Chang, and C. Suganthan. 2021. Performance Evaluation of Regression Models for the Prediction of the COVID-19 Reproduction Rate. Frontiers in Public Health 9, September (2021), 1\u201312. https:\/\/doi.org\/10.3389\/fpubh.2021.729795","journal-title":"Frontiers in Public Health 9"},{"key":"e_1_3_2_1_53_1","doi-asserted-by":"publisher","DOI":"10.1109\/IC4.2009.4909197"},{"key":"e_1_3_2_1_54_1","volume-title":"Decision Theory for Discrimination-aware Classification. In IEEE 12th International Conference on Data Mining. 924\u2013929","author":"Kamiran Faisal","year":"2012","unstructured":"Faisal Kamiran , Asim Karim , and Xiangliang Zhang . 2012 . Decision Theory for Discrimination-aware Classification. In IEEE 12th International Conference on Data Mining. 924\u2013929 . https:\/\/doi.org\/10.1109\/ICDM.2012.45 10.1109\/ICDM.2012.45 Faisal Kamiran, Asim Karim, and Xiangliang Zhang. 2012. Decision Theory for Discrimination-aware Classification. In IEEE 12th International Conference on Data Mining. 924\u2013929. 
https:\/\/doi.org\/10.1109\/ICDM.2012.45"},{"key":"e_1_3_2_1_55_1","series-title":"Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)","volume-title":"Fairness-aware classifier with prejudice remover regularizer","author":"Kamishima Toshihiro","unstructured":"Toshihiro Kamishima , Shotaro Akaho , Hideki Asoh , and Jun Sakuma . 2012. Fairness-aware classifier with prejudice remover regularizer . In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) , Vol. 7524 LNAI. Springer, Berlin , Heidelberg , 35\u201350. https:\/\/doi.org\/10.1007\/978-3-642-33486-3_3 10.1007\/978-3-642-33486-3_3 Toshihiro Kamishima, Shotaro Akaho, Hideki Asoh, and Jun Sakuma. 2012. Fairness-aware classifier with prejudice remover regularizer. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Vol. 7524 LNAI. Springer, Berlin, Heidelberg, 35\u201350. https:\/\/doi.org\/10.1007\/978-3-642-33486-3_3"},{"key":"e_1_3_2_1_56_1","volume-title":"Proceedings - 2021 IEEE Winter Conference on Applications of Computer Vision, WACV 2021 (2021","author":"Karkkainen Kimmo","year":"2021","unstructured":"Kimmo Karkkainen and Jungseock Joo . 2021 . FairFace: Face attribute dataset for balanced race, gender, and age for bias measurement and mitigation . Proceedings - 2021 IEEE Winter Conference on Applications of Computer Vision, WACV 2021 (2021 ), 1547\u20131557. https:\/\/doi.org\/10.1109\/WACV48630.2021.00159 10.1109\/WACV48630.2021.00159 Kimmo Karkkainen and Jungseock Joo. 2021. FairFace: Face attribute dataset for balanced race, gender, and age for bias measurement and mitigation. Proceedings - 2021 IEEE Winter Conference on Applications of Computer Vision, WACV 2021 (2021), 1547\u20131557. 
https:\/\/doi.org\/10.1109\/WACV48630.2021.00159"},{"key":"e_1_3_2_1_57_1","volume-title":"Interpreting Interpretability: Understanding Data Scientists\u2019 Use of Interpretability Tools for Machine Learning. In CHI Conference on Human Factors in Computing Systems. https:\/\/doi.org\/10","author":"Kaur Harmanpreet","year":"2020","unstructured":"Harmanpreet Kaur , Harsha Nori , Samuel Jenkins , Rich Caruana , Hanna Wallach , and Jennifer Wortman Vaughan . 2020 . Interpreting Interpretability: Understanding Data Scientists\u2019 Use of Interpretability Tools for Machine Learning. In CHI Conference on Human Factors in Computing Systems. https:\/\/doi.org\/10.1145\/3313831.3376219 10.1145\/3313831.3376219 Harmanpreet Kaur, Harsha Nori, Samuel Jenkins, Rich Caruana, Hanna Wallach, and Jennifer Wortman Vaughan. 2020. Interpreting Interpretability: Understanding Data Scientists\u2019 Use of Interpretability Tools for Machine Learning. In CHI Conference on Human Factors in Computing Systems. https:\/\/doi.org\/10.1145\/3313831.3376219"},{"key":"e_1_3_2_1_58_1","volume-title":"Twenty-Second International Conference on Artificial Intelligence and Statistics. PMLR, 3382\u20133390","author":"Khanna Rajiv","year":"2019","unstructured":"Rajiv Khanna , Been Kim , Joydeep Ghosh , and Oluwasanmi Koyejo . 2019 . Interpreting Black Box Predictions using Fisher Kernels . In Twenty-Second International Conference on Artificial Intelligence and Statistics. PMLR, 3382\u20133390 . https:\/\/proceedings.mlr.press\/v89\/khanna19a.html Rajiv Khanna, Been Kim, Joydeep Ghosh, and Oluwasanmi Koyejo. 2019. Interpreting Black Box Predictions using Fisher Kernels. In Twenty-Second International Conference on Artificial Intelligence and Statistics. PMLR, 3382\u20133390. 
https:\/\/proceedings.mlr.press\/v89\/khanna19a.html"},{"key":"e_1_3_2_1_59_1","volume-title":"Proceedings of the 2017 Advances in Neural Information Processing Systems","volume":"30","author":"Kilbertus Niki","year":"2017","unstructured":"Niki Kilbertus , Mateo Rojas-Carulla , Giambattista Parascandolo , Moritz Hardt , Dominik Janzing , and Bernhard Sch\u00f6lkopf . 2017 . Avoiding Discrimination through Causal Reasoning . In Proceedings of the 2017 Advances in Neural Information Processing Systems , Vol. 30 . Niki Kilbertus, Mateo Rojas-Carulla, Giambattista Parascandolo, Moritz Hardt, Dominik Janzing, and Bernhard Sch\u00f6lkopf. 2017. Avoiding Discrimination through Causal Reasoning. In Proceedings of the 2017 Advances in Neural Information Processing Systems, Vol. 30."},{"key":"e_1_3_2_1_60_1","first-page":"2","article-title":"Efficient Estimation of Influence of a Training Instance","volume":"28","author":"Kobayashi Sosuke","year":"2020","unstructured":"Sosuke Kobayashi , Sho Yokoi , Jun Suzuki , and Kentaro Inui . 2020 . Efficient Estimation of Influence of a Training Instance . Journal of Natural Language Processing 28 , 2 (dec 2020), 573\u2013597. https:\/\/doi.org\/10.48550\/arxiv.2012.04207 arxiv:2012.04207 10.48550\/arxiv.2012.04207 Sosuke Kobayashi, Sho Yokoi, Jun Suzuki, and Kentaro Inui. 2020. Efficient Estimation of Influence of a Training Instance. Journal of Natural Language Processing 28, 2 (dec 2020), 573\u2013597. https:\/\/doi.org\/10.48550\/arxiv.2012.04207 arxiv:2012.04207","journal-title":"Journal of Natural Language Processing"},{"key":"e_1_3_2_1_61_1","volume-title":"On the Accuracy of Influence Functions for Measuring Group Effects. In NIPS\u201919: Proceedings of the 33rd International Conference on Neural Information Processing Systems. 5254\u20135264","author":"Koh Pang Wei","year":"2019","unstructured":"Pang Wei Koh , Kai-Siang Ang , Hubert H K Teo , and Percy Liang . 2019 . 
On the Accuracy of Influence Functions for Measuring Group Effects. In NIPS\u201919: Proceedings of the 33rd International Conference on Neural Information Processing Systems. 5254\u20135264 . https:\/\/doi.org\/10.5555\/3454287.3454759 10.5555\/3454287.3454759 Pang Wei Koh, Kai-Siang Ang, Hubert H K Teo, and Percy Liang. 2019. On the Accuracy of Influence Functions for Measuring Group Effects. In NIPS\u201919: Proceedings of the 33rd International Conference on Neural Information Processing Systems. 5254\u20135264. https:\/\/doi.org\/10.5555\/3454287.3454759"},{"key":"e_1_3_2_1_62_1","doi-asserted-by":"publisher","DOI":"10.5555\/3305381.3305576"},{"key":"e_1_3_2_1_63_1","doi-asserted-by":"publisher","DOI":"10.1007\/S10994-021-06119-Y"},{"key":"e_1_3_2_1_64_1","unstructured":"Ronny Kohavi and Barry Becker. 1996. Census Income Data Set. http:\/\/archive.ics.uci.edu\/ml\/datasets\/Adult Ronny Kohavi and Barry Becker. 1996. Census Income Data Set. http:\/\/archive.ics.uci.edu\/ml\/datasets\/Adult"},{"key":"e_1_3_2_1_65_1","unstructured":"Shuming Kong Yanyan Shen and Linpeng Huang. 2022. Resolving Training Biases via Influence-based Data Relabeling. In ICLR. Shuming Kong Yanyan Shen and Linpeng Huang. 2022. Resolving Training Biases via Influence-based Data Relabeling. In ICLR."},{"key":"e_1_3_2_1_66_1","volume-title":"The Web Conference 2018 - Proceedings of the World Wide Web Conference, WWW 2018 (apr 2018","author":"Krasanakis Emmanouil","year":"2018","unstructured":"Emmanouil Krasanakis , Eleftherios Spyromitros-Xioufis , Symeon Papadopoulos , and Yiannis Kompatsiaris . 2018 . Adaptive sensitive reweighting to mitigate bias in fairness-aware classification . The Web Conference 2018 - Proceedings of the World Wide Web Conference, WWW 2018 (apr 2018 ), 853\u2013862. https:\/\/doi.org\/10.1145\/3178876.3186133 10.1145\/3178876.3186133 Emmanouil Krasanakis, Eleftherios Spyromitros-Xioufis, Symeon Papadopoulos, and Yiannis Kompatsiaris. 2018. 
Adaptive sensitive reweighting to mitigate bias in fairness-aware classification. The Web Conference 2018 - Proceedings of the World Wide Web Conference, WWW 2018 (apr 2018), 853\u2013862. https:\/\/doi.org\/10.1145\/3178876.3186133"},{"key":"e_1_3_2_1_67_1","volume-title":"Advances in Neural Information Processing Systems 2017-Decem (mar","author":"Kusner Matt J.","year":"2017","unstructured":"Matt J. Kusner , Joshua R. Loftus , Chris Russell , and Ricardo Silva . 2017. Counterfactual Fairness . Advances in Neural Information Processing Systems 2017-Decem (mar 2017 ), 4067\u20134077. arxiv:1703.06856http:\/\/arxiv.org\/abs\/1703.06856 Matt J. Kusner, Joshua R. Loftus, Chris Russell, and Ricardo Silva. 2017. Counterfactual Fairness. Advances in Neural Information Processing Systems 2017-Decem (mar 2017), 4067\u20134077. arxiv:1703.06856http:\/\/arxiv.org\/abs\/1703.06856"},{"key":"e_1_3_2_1_68_1","volume-title":"Proceedings of the International AAAI Conference on Web and Social Media 13 (jul","author":"Kyriakou Kyriakos","year":"2019","unstructured":"Kyriakos Kyriakou , P\u0131nar Barlas , Styliani Kleanthous , and Jahna Otterbacher . 2019 . Fairness in Proprietary Image Tagging Algorithms: A Cross-Platform Audit on People Images . Proceedings of the International AAAI Conference on Web and Social Media 13 (jul 2019), 313\u2013322. https:\/\/doi.org\/10.1609\/ICWSM.V13I01.3232 10.1609\/ICWSM.V13I01.3232 Kyriakos Kyriakou, P\u0131nar Barlas, Styliani Kleanthous, and Jahna Otterbacher. 2019. Fairness in Proprietary Image Tagging Algorithms: A Cross-Platform Audit on People Images. Proceedings of the International AAAI Conference on Web and Social Media 13 (jul 2019), 313\u2013322. 
https:\/\/doi.org\/10.1609\/ICWSM.V13I01.3232"},{"key":"e_1_3_2_1_69_1","doi-asserted-by":"publisher","DOI":"10.1287\/MNSC.2018.3093"},{"key":"e_1_3_2_1_70_1","doi-asserted-by":"publisher","DOI":"10.31219\/osf.io\/uvjqh"},{"key":"e_1_3_2_1_71_1","volume-title":"A survey on datasets for fairness-aware machine learning. WIREs Data Mining Knowledge Discovery 12, 3","author":"Quy Tai Le","year":"2022","unstructured":"Tai Le Quy , Arjun Roy , Vasileios Iosifidis , Wenbin Zhang , and Eirini Ntoutsi . 2022. A survey on datasets for fairness-aware machine learning. WIREs Data Mining Knowledge Discovery 12, 3 ( 2022 ). https:\/\/doi.org\/10.1002\/widm.1452 10.1002\/widm.1452 Tai Le Quy, Arjun Roy, Vasileios Iosifidis, Wenbin Zhang, and Eirini Ntoutsi. 2022. A survey on datasets for fairness-aware machine learning. WIREs Data Mining Knowledge Discovery 12, 3 (2022). https:\/\/doi.org\/10.1002\/widm.1452"},{"key":"e_1_3_2_1_72_1","volume-title":"IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings 2019-May (dec 2018","author":"Lohia Pranay K.","year":"2018","unstructured":"Pranay K. Lohia , Karthikeyan Natesan Ramamurthy , Manish Bhide , Diptikalyan Saha , Kush R. Varshney , and Ruchir Puri . 2018 . Bias Mitigation Post-processing for Individual and Group Fairness. ICASSP , IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings 2019-May (dec 2018 ), 2847\u20132851. arxiv:1812.06135http:\/\/arxiv.org\/abs\/1812.06135 Pranay K. Lohia, Karthikeyan Natesan Ramamurthy, Manish Bhide, Diptikalyan Saha, Kush R. Varshney, and Ruchir Puri. 2018. Bias Mitigation Post-processing for Individual and Group Fairness. ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings 2019-May (dec 2018), 2847\u20132851. 
arxiv:1812.06135http:\/\/arxiv.org\/abs\/1812.06135"},{"key":"e_1_3_2_1_73_1","volume-title":"Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM Press","author":"Luong Binh Thanh","year":"2011","unstructured":"Binh Thanh Luong , Salvatore Ruggieri , and Franco Turini . 2011 . k-NN as an implementation of situation testing for discrimination discovery and prevention . In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM Press , New York, New York, USA, 502\u2013510. https:\/\/doi.org\/10.1145\/2020408.2020488 10.1145\/2020408.2020488 Binh Thanh Luong, Salvatore Ruggieri, and Franco Turini. 2011. k-NN as an implementation of situation testing for discrimination discovery and prevention. In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM Press, New York, New York, USA, 502\u2013510. https:\/\/doi.org\/10.1145\/2020408.2020488"},{"volume-title":"Detecting Extrapolation with Influence Functions. In ICML Workshop.","author":"Madras David","key":"e_1_3_2_1_74_1","unstructured":"David Madras , James Atwood , and A. D\u2019Amour . 2019 . Detecting Extrapolation with Influence Functions. In ICML Workshop. David Madras, James Atwood, and A. D\u2019Amour. 2019. Detecting Extrapolation with Influence Functions. In ICML Workshop."},{"key":"e_1_3_2_1_75_1","doi-asserted-by":"publisher","DOI":"10.1145\/3351095.3372865"},{"key":"e_1_3_2_1_76_1","unstructured":"Ninareh Mehrabi Fred Morstatter Nripsuta Saxena Kristina Lerman and Aram Galstyan. 2019. A Survey on Bias and Fairness in Machine Learning. (2019). arxiv:1908.09635v2https:\/\/www.propublica.org\/article\/machine-bias-risk-assessments-in-criminal-sentencing Ninareh Mehrabi Fred Morstatter Nripsuta Saxena Kristina Lerman and Aram Galstyan. 2019. A Survey on Bias and Fairness in Machine Learning. (2019). 
arxiv:1908.09635v2https:\/\/www.propublica.org\/article\/machine-bias-risk-assessments-in-criminal-sentencing"},{"key":"e_1_3_2_1_77_1","first-page":"3","article-title":"Efficient and effective data imputation with influence functions","volume":"15","author":"Miao Xiaoye","year":"2021","unstructured":"Xiaoye Miao , Yangyang Wu , Lu Chen , Yunjun Gao , Jun Wang , and Jianwei Yin . 2021 . Efficient and effective data imputation with influence functions . Proceedings of the VLDB Endowment 15 , 3 (nov 2021), 624\u2013632. https:\/\/doi.org\/10.14778\/3494124.3494143 10.14778\/3494124.3494143 Xiaoye Miao, Yangyang Wu, Lu Chen, Yunjun Gao, Jun Wang, and Jianwei Yin. 2021. Efficient and effective data imputation with influence functions. Proceedings of the VLDB Endowment 15, 3 (nov 2021), 624\u2013632. https:\/\/doi.org\/10.14778\/3494124.3494143","journal-title":"Proceedings of the VLDB Endowment"},{"volume-title":"Algorithms of Oppression: How Search Engines Reinforce Racism","author":"Noble Safiya Umoja","key":"e_1_3_2_1_78_1","unstructured":"Safiya Umoja Noble . 2018. Algorithms of Oppression: How Search Engines Reinforce Racism ( first ed.). NYU Press . https:\/\/doi.org\/10.2307\/j.ctt1pwt9w5 10.2307\/j.ctt1pwt9w5 Safiya Umoja Noble. 2018. Algorithms of Oppression: How Search Engines Reinforce Racism (first ed.). NYU Press. https:\/\/doi.org\/10.2307\/j.ctt1pwt9w5"},{"key":"e_1_3_2_1_79_1","volume-title":"Marco Yu Chak Yan, Daniel Shu Wei Ting, Jialiang Li, Charumathi Sabanayagam, Tien Yin Wong, and Ching Yu Cheng.","author":"Nusinovici Simon","year":"2020","unstructured":"Simon Nusinovici , Yih Chung Tham , Marco Yu Chak Yan, Daniel Shu Wei Ting, Jialiang Li, Charumathi Sabanayagam, Tien Yin Wong, and Ching Yu Cheng. 2020 . Logistic regression was as good as machine learning for predicting major chronic diseases. Journal of Clinical Epidemiology 122 (jun 2020), 56\u201369. 
https:\/\/doi.org\/10.1016\/J.JCLINEPI.2020.03.002 10.1016\/J.JCLINEPI.2020.03.002 Simon Nusinovici, Yih Chung Tham, Marco Yu Chak Yan, Daniel Shu Wei Ting, Jialiang Li, Charumathi Sabanayagam, Tien Yin Wong, and Ching Yu Cheng. 2020. Logistic regression was as good as machine learning for predicting major chronic diseases. Journal of Clinical Epidemiology 122 (jun 2020), 56\u201369. https:\/\/doi.org\/10.1016\/J.JCLINEPI.2020.03.002"},{"key":"e_1_3_2_1_80_1","doi-asserted-by":"publisher","DOI":"10.1145\/3287560.3287593"},{"key":"#cr-split#-e_1_3_2_1_81_1.1","doi-asserted-by":"crossref","unstructured":"Will Orr and Jenny L Davis. 2020. Attributions of ethical responsibility by Artificial Intelligence practitioners. (2020). https:\/\/doi.org\/10.1080\/1369118X.2020.1713842 10.1080\/1369118X.2020.1713842","DOI":"10.1080\/1369118X.2020.1713842"},{"key":"#cr-split#-e_1_3_2_1_81_1.2","doi-asserted-by":"crossref","unstructured":"Will Orr and Jenny L Davis. 2020. Attributions of ethical responsibility by Artificial Intelligence practitioners. (2020). https:\/\/doi.org\/10.1080\/1369118X.2020.1713842","DOI":"10.1080\/1369118X.2020.1713842"},{"key":"e_1_3_2_1_82_1","first-page":"5","article-title":"FairLens: Auditing black-box clinical decision support systems","volume":"58","author":"Panigutti Cecilia","year":"2021","unstructured":"Cecilia Panigutti , Alan Perotti , Andr\u00e9 Panisson , Paolo Bajardi , and Dino Pedreschi . 2021 . FairLens: Auditing black-box clinical decision support systems . Information Processing & Management 58 , 5 (sep 2021), 102657. https:\/\/doi.org\/10.1016\/J.IPM.2021.102657 arxiv:2011.04049 10.1016\/J.IPM.2021.102657 Cecilia Panigutti, Alan Perotti, Andr\u00e9 Panisson, Paolo Bajardi, and Dino Pedreschi. 2021. FairLens: Auditing black-box clinical decision support systems. Information Processing & Management 58, 5 (sep 2021), 102657. 
https:\/\/doi.org\/10.1016\/J.IPM.2021.102657 arxiv:2011.04049","journal-title":"Information Processing & Management"},{"key":"e_1_3_2_1_83_1","first-page":"3","article-title":"A Review on Fairness in Machine Learning","volume":"55","author":"Pessach Dana","year":"2022","unstructured":"Dana Pessach and Erez Shmueli . 2022 . A Review on Fairness in Machine Learning . ACM Computing Surveys (CSUR) 55 , 3 (feb 2022), 1\u201344. https:\/\/doi.org\/10.1145\/3494672 10.1145\/3494672 Dana Pessach and Erez Shmueli. 2022. A Review on Fairness in Machine Learning. ACM Computing Surveys (CSUR) 55, 3 (feb 2022), 1\u201344. https:\/\/doi.org\/10.1145\/3494672","journal-title":"ACM Computing Surveys (CSUR)"},{"key":"e_1_3_2_1_84_1","first-page":"25944","article-title":"Post-processing for individual fairness","volume":"34","author":"Petersen Felix","year":"2021","unstructured":"Felix Petersen , Debarghya Mukherjee , Yuekai Sun , and Mikhail Yurochkin . 2021 . Post-processing for individual fairness . Advances in Neural Information Processing Systems 34 (2021), 25944 \u2013 25955 . Felix Petersen, Debarghya Mukherjee, Yuekai Sun, and Mikhail Yurochkin. 2021. Post-processing for individual fairness. Advances in Neural Information Processing Systems 34 (2021), 25944\u201325955.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_2_1_85_1","volume-title":"On Fairness and Calibration. In 31st Conference on Neural Information Processing Systems. Neural information processing systems foundation, 5681\u20135690","author":"Pleiss Geoff","year":"2017","unstructured":"Geoff Pleiss , Manish Raghavan , Felix Wu , Jon Kleinberg , and Kilian Q. Weinberger . 2017 . On Fairness and Calibration. In 31st Conference on Neural Information Processing Systems. Neural information processing systems foundation, 5681\u20135690 . arxiv:1709.02012http:\/\/arxiv.org\/abs\/1709.02012 Geoff Pleiss, Manish Raghavan, Felix Wu, Jon Kleinberg, and Kilian Q. Weinberger. 2017. 
On Fairness and Calibration. In 31st Conference on Neural Information Processing Systems. Neural information processing systems foundation, 5681\u20135690. arxiv:1709.02012http:\/\/arxiv.org\/abs\/1709.02012"},{"volume-title":"The Persona Lifecycle: Keeping People in Mind Throughout Product Design","author":"Pruitt John","key":"e_1_3_2_1_86_1","unstructured":"John Pruitt and Tamara Adlin . 2005. The Persona Lifecycle: Keeping People in Mind Throughout Product Design . Morgan Kaufmann Publishers Inc ., San Francisco, CA, USA. John Pruitt and Tamara Adlin. 2005. The Persona Lifecycle: Keeping People in Mind Throughout Product Design. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA."},{"key":"e_1_3_2_1_87_1","volume-title":"34th Internation Conference on Neural Information Processing Systems. 19920\u201319930","author":"Pruthi Garima","year":"2020","unstructured":"Garima Pruthi , Frederick Liu , Satyen Kale , and Mukund Sundararajan . 2020 . Estimating Training Data Influence by Tracing Gradient Descent . In 34th Internation Conference on Neural Information Processing Systems. 19920\u201319930 . https:\/\/doi.org\/10.5555\/3495724.3497396 10.5555\/3495724.3497396 Garima Pruthi, Frederick Liu, Satyen Kale, and Mukund Sundararajan. 2020. Estimating Training Data Influence by Tracing Gradient Descent. In 34th Internation Conference on Neural Information Processing Systems. 19920\u201319930. https:\/\/doi.org\/10.5555\/3495724.3497396"},{"key":"e_1_3_2_1_88_1","volume-title":"Conference on Human Factors in Computing Systems - Proceedings (may","author":"Richardson Brianna","year":"2021","unstructured":"Brianna Richardson , Jean Garcia-Gathright , and Samuel F. Way . 2021. Towards fairness in practice: A practitioner-oriented rubric for evaluating fair ml toolkits . Conference on Human Factors in Computing Systems - Proceedings (may 2021 ). https:\/\/doi.org\/10.1145\/3411764.3445604 10.1145\/3411764.3445604 Brianna Richardson, Jean Garcia-Gathright, and Samuel F. 
Way. 2021. Towards fairness in practice: A practitioner-oriented rubric for evaluating fair ml toolkits. Conference on Human Factors in Computing Systems - Proceedings (may 2021). https:\/\/doi.org\/10.1145\/3411764.3445604"},{"key":"e_1_3_2_1_89_1","volume-title":"Gilbert","author":"Richardson Brianna","year":"2021","unstructured":"Brianna Richardson and Juan E . Gilbert . 2021 . A Framework for Fairness: A Systematic Review of Existing Fair AI Solutions . (dec 2021). https:\/\/doi.org\/10.48550\/arxiv.2112.05700 arxiv:2112.05700 10.48550\/arxiv.2112.05700 Brianna Richardson and Juan E. Gilbert. 2021. A Framework for Fairness: A Systematic Review of Existing Fair AI Solutions. (dec 2021). https:\/\/doi.org\/10.48550\/arxiv.2112.05700 arxiv:2112.05700"},{"key":"e_1_3_2_1_90_1","volume-title":"Gian Marco Conte, and Bradley J. Erickson","author":"Rouzrokh Pouria","year":"2022","unstructured":"Pouria Rouzrokh , Bardia Khosravi , Shahriar Faghani , Mana Moassefi , Diana V. Vera Garcia , Yashbir Singh , Kuan Zhang , Gian Marco Conte, and Bradley J. Erickson . 2022 . Mitigating Bias in Radiology Machine Learning: 1. Data Handling . https:\/\/doi.org\/10.1148\/ryai.210290 4, 5 (aug 2022). https:\/\/doi.org\/10.1148\/RYAI.210290 10.1148\/ryai.210290 Pouria Rouzrokh, Bardia Khosravi, Shahriar Faghani, Mana Moassefi, Diana V.Vera Garcia, Yashbir Singh, Kuan Zhang, Gian Marco Conte, and Bradley J. Erickson. 2022. Mitigating Bias in Radiology Machine Learning: 1. Data Handling. https:\/\/doi.org\/10.1148\/ryai.210290 4, 5 (aug 2022). https:\/\/doi.org\/10.1148\/RYAI.210290"},{"key":"e_1_3_2_1_91_1","volume-title":"Proceedings - International Conference on Distributed Computing Systems (2011","author":"Satop\u00e4\u00e4 Ville","year":"2011","unstructured":"Ville Satop\u00e4\u00e4 , Jeannie Albrecht , David Irwin , and Barath Raghavan . 2011 . Finding a \"kneedle\" in a haystack: Detecting knee points in system behavior . 
Proceedings - International Conference on Distributed Computing Systems (2011 ), 166\u2013171. https:\/\/doi.org\/10.1109\/ICDCSW.2011.20 10.1109\/ICDCSW.2011.20 Ville Satop\u00e4\u00e4, Jeannie Albrecht, David Irwin, and Barath Raghavan. 2011. Finding a \"kneedle\" in a haystack: Detecting knee points in system behavior. Proceedings - International Conference on Distributed Computing Systems (2011), 166\u2013171. https:\/\/doi.org\/10.1109\/ICDCSW.2011.20"},{"key":"e_1_3_2_1_92_1","volume-title":"Varshney","author":"Sattigeri Prasanna","year":"2022","unstructured":"Prasanna Sattigeri , Soumya Ghosh , Inkit Padhi , Pierre Dognin , and Kush R . Varshney . 2022 . Fair Infinitesimal Jackknife: Mitigating the Influence of Biased Training Data Points Without Refitting. In Advances in Neural Information Processing Systems . https:\/\/doi.org\/10.48550\/arxiv.2212.06803 arxiv:2212.06803 10.48550\/arxiv.2212.06803 Prasanna Sattigeri, Soumya Ghosh, Inkit Padhi, Pierre Dognin, and Kush R. Varshney. 2022. Fair Infinitesimal Jackknife: Mitigating the Influence of Biased Training Data Points Without Refitting. In Advances in Neural Information Processing Systems. https:\/\/doi.org\/10.48550\/arxiv.2212.06803 arxiv:2212.06803"},{"key":"e_1_3_2_1_93_1","volume-title":"Utilities and the Issue of Fairness in a Decision Theoretic Model for Selection. Journal of Educational Measurement 13 (09","author":"Sawyer Richard","year":"1976","unstructured":"Richard Sawyer , Nancy Cole , and James Cole . 1976. Utilities and the Issue of Fairness in a Decision Theoretic Model for Selection. Journal of Educational Measurement 13 (09 1976 ), 59 \u2013 76. https:\/\/doi.org\/10.1111\/j.1745-3984.1976.tb00182.x 10.1111\/j.1745-3984.1976.tb00182.x Richard Sawyer, Nancy Cole, and James Cole. 1976. Utilities and the Issue of Fairness in a Decision Theoretic Model for Selection. Journal of Educational Measurement 13 (09 1976), 59 \u2013 76. 
https:\/\/doi.org\/10.1111\/j.1745-3984.1976.tb00182.x"},{"key":"e_1_3_2_1_94_1","volume-title":"Proceedings of the AAAI Conference on Artificial Intelligence 36","author":"Schioppa Andrea","year":"2021","unstructured":"Andrea Schioppa , Polina Zablotskaia , David Vilar , and Artem Sokolov . 2021 . Scaling Up Influence Functions . Proceedings of the AAAI Conference on Artificial Intelligence 36 , 8 (dec 2021), 8179\u20138186. https:\/\/doi.org\/10.48550\/arxiv.2112.03052 arxiv:2112.03052 10.48550\/arxiv.2112.03052 Andrea Schioppa, Polina Zablotskaia, David Vilar, and Artem Sokolov. 2021. Scaling Up Influence Functions. Proceedings of the AAAI Conference on Artificial Intelligence 36, 8 (dec 2021), 8179\u20138186. https:\/\/doi.org\/10.48550\/arxiv.2112.03052 arxiv:2112.03052"},{"key":"e_1_3_2_1_95_1","volume-title":"2021 Conference on Computer Supported Cooperative Work and Social Computing (CSCW \u201921 Companion)","volume":"1","author":"Schoeffer Jakob","year":"2021","unstructured":"Jakob Schoeffer and Niklas Kuehl . 2021 . Appropriate Fairness Perceptions? On the Effectiveness of Explanations in Enabling People to Assess the Fairness of Automated Decision Systems . In 2021 Conference on Computer Supported Cooperative Work and Social Computing (CSCW \u201921 Companion) , Vol. 1 . Association for Computing Machinery. https:\/\/doi.org\/10.1145\/3462204.3481742 arxiv:2108.06500 10.1145\/3462204.3481742 Jakob Schoeffer and Niklas Kuehl. 2021. Appropriate Fairness Perceptions? On the Effectiveness of Explanations in Enabling People to Assess the Fairness of Automated Decision Systems. In 2021 Conference on Computer Supported Cooperative Work and Social Computing (CSCW \u201921 Companion), Vol. 1. Association for Computing Machinery. https:\/\/doi.org\/10.1145\/3462204.3481742 arxiv:2108.06500"},{"key":"e_1_3_2_1_96_1","volume-title":"Auditing Pointwise Reliability After Learning. 
AISTATS 2019 - 22nd International Conference on Artificial Intelligence and Statistics (jan 2019","author":"Schulam Peter","year":"2019","unstructured":"Peter Schulam and Suchi Saria . 2019 . Can You Trust This Prediction? Auditing Pointwise Reliability After Learning. AISTATS 2019 - 22nd International Conference on Artificial Intelligence and Statistics (jan 2019 ). https:\/\/doi.org\/10.48550\/arxiv.1901.00403 arxiv:1901.00403 10.48550\/arxiv.1901.00403 Peter Schulam and Suchi Saria. 2019. Can You Trust This Prediction? Auditing Pointwise Reliability After Learning. AISTATS 2019 - 22nd International Conference on Artificial Intelligence and Statistics (jan 2019). https:\/\/doi.org\/10.48550\/arxiv.1901.00403 arxiv:1901.00403"},{"volume-title":"FAT* 2019 - Proceedings of the 2019 Conference on Fairness, Accountability, and Transparency","author":"Selbst Andrew D.","key":"e_1_3_2_1_97_1","unstructured":"Andrew D. Selbst , Danah Boyd , Sorelle A. Friedler , Suresh Venkatasubramanian , and Janet Vertesi . 2019. Fairness and abstraction in sociotechnical systems . In FAT* 2019 - Proceedings of the 2019 Conference on Fairness, Accountability, and Transparency . Association for Computing Machinery, Inc , New York, NY, USA , 59\u201368. https:\/\/doi.org\/10.1145\/3287560.3287598 10.1145\/3287560.3287598 Andrew D. Selbst, Danah Boyd, Sorelle A. Friedler, Suresh Venkatasubramanian, and Janet Vertesi. 2019. Fairness and abstraction in sociotechnical systems. In FAT* 2019 - Proceedings of the 2019 Conference on Fairness, Accountability, and Transparency. Association for Computing Machinery, Inc, New York, NY, USA, 59\u201368. https:\/\/doi.org\/10.1145\/3287560.3287598"},{"key":"e_1_3_2_1_98_1","volume-title":"Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. ACM. https:\/\/doi.org\/10","author":"Seng Michelle","year":"2021","unstructured":"Michelle Seng , Ah Lee , and Jatinder Singh . 2021 . 
The Landscape and Gaps in Open Source Fairness Toolkits . In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. ACM. https:\/\/doi.org\/10.1145\/3411764.3445261 10.1145\/3411764.3445261 Michelle Seng, Ah Lee, and Jatinder Singh. 2021. The Landscape and Gaps in Open Source Fairness Toolkits. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. ACM. https:\/\/doi.org\/10.1145\/3411764.3445261"},{"key":"e_1_3_2_1_99_1","volume-title":"Proceedings of the AAAI\/ACM Conference on AI, Ethics, and Society (Feb.","author":"Sharma Shubham","year":"2020","unstructured":"Shubham Sharma , Yunfeng Zhang , Jesus M. Rios Aliaga , Djallel Bouneffouf , Vinod Muthusamy , and Kush R. Varshney . 2020. Data augmentation for discrimination prevention and bias disambiguation . Proceedings of the AAAI\/ACM Conference on AI, Ethics, and Society (Feb. 2020 ), 358\u2013364. https:\/\/doi.org\/10.1145\/3375627.3375865 10.1145\/3375627.3375865 Shubham Sharma, Yunfeng Zhang, Jesus M. Rios Aliaga, Djallel Bouneffouf, Vinod Muthusamy, and Kush R. Varshney. 2020. Data augmentation for discrimination prevention and bias disambiguation. Proceedings of the AAAI\/ACM Conference on AI, Ethics, and Society (Feb. 2020), 358\u2013364. https:\/\/doi.org\/10.1145\/3375627.3375865"},{"key":"e_1_3_2_1_100_1","volume-title":"the 25th International Conference on Artificial Intelligence and Statistics (AISTATS)","author":"Silva Andrew","year":"2022","unstructured":"Andrew Silva , Rohit Chopra , and Matthew Gombolay . 2022 . Cross-Loss Influence Functions to Explain Deep Network Representations . In the 25th International Conference on Artificial Intelligence and Statistics (AISTATS) . Valencia, Spain. https:\/\/www.researchgate.net\/publication\/346614859_Using_Cross-Loss_Influence_Functions_to_Explain_Deep_Network_Representations Andrew Silva, Rohit Chopra, and Matthew Gombolay. 2022. Cross-Loss Influence Functions to Explain Deep Network Representations. 
In the 25th International Conference on Artificial Intelligence and Statistics (AISTATS). Valencia, Spain. https:\/\/www.researchgate.net\/publication\/346614859_Using_Cross-Loss_Influence_Functions_to_Explain_Deep_Network_Representations"},{"key":"e_1_3_2_1_101_1","volume-title":"Pang Wei Koh, and Percy Liang","author":"Steinhardt Jacob","year":"2017","unstructured":"Jacob Steinhardt , Pang Wei Koh, and Percy Liang . 2017 . Certified Defenses for Data Poisoning Attacks. Advances in Neural Information Processing Systems 2017-December (jun 2017), 3518\u20133530. https:\/\/doi.org\/10.48550\/arxiv.1706.03691 arxiv:1706.03691 10.48550\/arxiv.1706.03691 Jacob Steinhardt, Pang Wei Koh, and Percy Liang. 2017. Certified Defenses for Data Poisoning Attacks. Advances in Neural Information Processing Systems 2017-December (jun 2017), 3518\u20133530. https:\/\/doi.org\/10.48550\/arxiv.1706.03691 arxiv:1706.03691"},{"key":"e_1_3_2_1_102_1","volume-title":"Distill-and-Compare: Auditing Black-Box Models Using Transparent Model Distillation. AIES 2018 - Proceedings of the 2018 AAAI\/ACM Conference on AI, Ethics, and Society (dec","author":"Tan Sarah","year":"2018","unstructured":"Sarah Tan , Rich Caruana , Giles Hooker , and Yin Lou . 2018 . Distill-and-Compare: Auditing Black-Box Models Using Transparent Model Distillation. AIES 2018 - Proceedings of the 2018 AAAI\/ACM Conference on AI, Ethics, and Society (dec 2018), 303\u2013310. https:\/\/doi.org\/10.1145\/3278721.3278725 arxiv:1710.06169 10.1145\/3278721.3278725 Sarah Tan, Rich Caruana, Giles Hooker, and Yin Lou. 2018. Distill-and-Compare: Auditing Black-Box Models Using Transparent Model Distillation. AIES 2018 - Proceedings of the 2018 AAAI\/ACM Conference on AI, Ethics, and Society (dec 2018), 303\u2013310. https:\/\/doi.org\/10.1145\/3278721.3278725 arxiv:1710.06169"},{"volume-title":"Trustworthy Machine Learning. Independently Published","author":"Varshney Kush R.","key":"e_1_3_2_1_103_1","unstructured":"Kush R. 
Varshney . 2022. Trustworthy Machine Learning. Independently Published , Chappaqua, NY, USA . Kush R. Varshney. 2022. Trustworthy Machine Learning. Independently Published, Chappaqua, NY, USA."},{"key":"e_1_3_2_1_104_1","volume-title":"Fairer machine learning in the real world: Mitigating discrimination without collecting sensitive data","author":"Veale Michael","year":"2017","unstructured":"Michael Veale and Reuben Binns . 2017. Fairer machine learning in the real world: Mitigating discrimination without collecting sensitive data . Big Data & Society 4, 2 ( 2017 ). https:\/\/doi.org\/10.1177\/2053951717743530 10.1177\/2053951717743530 Michael Veale and Reuben Binns. 2017. Fairer machine learning in the real world: Mitigating discrimination without collecting sensitive data. Big Data & Society 4, 2 (2017). https:\/\/doi.org\/10.1177\/2053951717743530"},{"key":"e_1_3_2_1_105_1","volume-title":"Fairness and Accountability Design Needs for Algorithmic Support in High-Stakes Public Sector Decision-Making. Conference on Human Factors in Computing Systems - Proceedings 2018-April (feb 2018","author":"Veale Michael","year":"2018","unstructured":"Michael Veale , Max Van Kleek , and Reuben Binns . 2018 . Fairness and Accountability Design Needs for Algorithmic Support in High-Stakes Public Sector Decision-Making. Conference on Human Factors in Computing Systems - Proceedings 2018-April (feb 2018 ). https:\/\/doi.org\/10.1145\/3173574.3174014 arxiv:1802.01029 10.1145\/3173574.3174014 Michael Veale, Max Van Kleek, and Reuben Binns. 2018. Fairness and Accountability Design Needs for Algorithmic Support in High-Stakes Public Sector Decision-Making. Conference on Human Factors in Computing Systems - Proceedings 2018-April (feb 2018). 
https:\/\/doi.org\/10.1145\/3173574.3174014 arxiv:1802.01029"},{"key":"e_1_3_2_1_106_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.apergo.2013.03.012"},{"key":"e_1_3_2_1_107_1","volume-title":"Proceedings of the 36th International Conference on Machine Learning. http:\/\/github.com\/ustunb\/ctfdist.","author":"Wang Hao","year":"2019","unstructured":"Hao Wang , Berk Ustun , and Flavio P Calmon . 2019 . Repairing without Retraining: Avoiding Disparate Impact with Counterfactual Distributions . In Proceedings of the 36th International Conference on Machine Learning. http:\/\/github.com\/ustunb\/ctfdist. Hao Wang, Berk Ustun, and Flavio P Calmon. 2019. Repairing without Retraining: Avoiding Disparate Impact with Counterfactual Distributions. In Proceedings of the 36th International Conference on Machine Learning. http:\/\/github.com\/ustunb\/ctfdist."},{"key":"e_1_3_2_1_108_1","volume-title":"Towards Fairness in Visual Recognition: Effective Strategies for Bias Mitigation. In Conference on Computer Vision and Pattern Recognition. 8916\u20138925","author":"Wang Zeyu","year":"2020","unstructured":"Zeyu Wang , Klint Qinami , Ioannis Christos Karakozis , Kyle Genova , Prem Nair , Kenji Hata , and Olga Russakovsky . 2020 . Towards Fairness in Visual Recognition: Effective Strategies for Bias Mitigation. In Conference on Computer Vision and Pattern Recognition. 8916\u20138925 . arxiv:1911.11834 Zeyu Wang, Klint Qinami, Ioannis Christos Karakozis, Kyle Genova, Prem Nair, Kenji Hata, and Olga Russakovsky. 2020. Towards Fairness in Visual Recognition: Effective Strategies for Bias Mitigation. In Conference on Computer Vision and Pattern Recognition. 8916\u20138925. 
arxiv:1911.11834"},{"key":"e_1_3_2_1_109_1","doi-asserted-by":"crossref","first-page":"26","DOI":"10.1080\/00913367.2020.1821411","article-title":"Uncovering the Sources of Machine-Learning Mistakes in Advertising: Contextual Bias in the Evaluation of Semantic Relatedness","volume":"50","author":"Watts Jameson","year":"2020","unstructured":"Jameson Watts and Anastasia Adriano . 2020 . Uncovering the Sources of Machine-Learning Mistakes in Advertising: Contextual Bias in the Evaluation of Semantic Relatedness . Journal of Advertising 50 , 1 (2020), 26 \u2013 38 . https:\/\/doi.org\/10.1080\/00913367.2020.1821411 10.1080\/00913367.2020.1821411 Jameson Watts and Anastasia Adriano. 2020. Uncovering the Sources of Machine-Learning Mistakes in Advertising: Contextual Bias in the Evaluation of Semantic Relatedness. Journal of Advertising 50, 1 (2020), 26\u201338. https:\/\/doi.org\/10.1080\/00913367.2020.1821411","journal-title":"Journal of Advertising"},{"key":"e_1_3_2_1_110_1","doi-asserted-by":"publisher","DOI":"10.1145\/1852102.1852106"},{"key":"e_1_3_2_1_111_1","series-title":"LSAC Research Report Series","volume-title":"LSAC National Longitudinal Bar Passage Study","author":"Wightman Linda F.","unstructured":"Linda F. Wightman . 1998. LSAC National Longitudinal Bar Passage Study . LSAC Research Report Series . Linda F. Wightman. 1998. LSAC National Longitudinal Bar Passage Study. LSAC Research Report Series."},{"key":"e_1_3_2_1_112_1","volume-title":"Predictive Inequity in Object Detection. (feb","author":"Wilson Benjamin","year":"2019","unstructured":"Benjamin Wilson , Judy Hoffman , and Jamie Morgenstern . 2019. Predictive Inequity in Object Detection. (feb 2019 ). arxiv:1902.11097http:\/\/arxiv.org\/abs\/1902.11097 Benjamin Wilson, Judy Hoffman, and Jamie Morgenstern. 2019. Predictive Inequity in Object Detection. (feb 2019). 
arxiv:1902.11097http:\/\/arxiv.org\/abs\/1902.11097"},{"key":"e_1_3_2_1_113_1","volume-title":"Gender Classification and Bias Mitigation in Facial Images. WebSci 2020 - Proceedings of the 12th ACM Conference on Web Science (jul","author":"Wu Wenying","year":"2020","unstructured":"Wenying Wu , Panagiotis Michalatos , Pavlos Protopapaps , and Zheng Yang . 2020 . Gender Classification and Bias Mitigation in Facial Images. WebSci 2020 - Proceedings of the 12th ACM Conference on Web Science (jul 2020), 106\u2013114. https:\/\/doi.org\/10.1145\/3394231.3397900 10.1145\/3394231.3397900 Wenying Wu, Panagiotis Michalatos, Pavlos Protopapaps, and Zheng Yang. 2020. Gender Classification and Bias Mitigation in Facial Images. WebSci 2020 - Proceedings of the 12th ACM Conference on Web Science (jul 2020), 106\u2013114. https:\/\/doi.org\/10.1145\/3394231.3397900"},{"key":"e_1_3_2_1_114_1","volume-title":"The Web Conference 2019 - Proceedings of the World Wide Web Conference, WWW 2019 (may","author":"Wu Yongkai","year":"2019","unstructured":"Yongkai Wu , Lu Zhang , and Xintao Wu . 2019 . On convexity and bounds of fairness-aware classification . The Web Conference 2019 - Proceedings of the World Wide Web Conference, WWW 2019 (may 2019), 3356\u20133362. https:\/\/doi.org\/10.1145\/3308558.3313723 10.1145\/3308558.3313723 Yongkai Wu, Lu Zhang, and Xintao Wu. 2019. On convexity and bounds of fairness-aware classification. The Web Conference 2019 - Proceedings of the World Wide Web Conference, WWW 2019 (may 2019), 3356\u20133362. https:\/\/doi.org\/10.1145\/3308558.3313723"},{"key":"e_1_3_2_1_115_1","volume-title":"A Human-in-the-loop Framework to Construct Context-aware Mathematical Notions of Outcome Fairness. (nov","author":"Yaghini Mohammad","year":"2019","unstructured":"Mohammad Yaghini , Andreas Krause , and Hoda Heidari . 2019. A Human-in-the-loop Framework to Construct Context-aware Mathematical Notions of Outcome Fairness. (nov 2019 ). 
arxiv:1911.03020http:\/\/arxiv.org\/abs\/1911.03020 Mohammad Yaghini, Andreas Krause, and Hoda Heidari. 2019. A Human-in-the-loop Framework to Construct Context-aware Mathematical Notions of Outcome Fairness. (nov 2019). arxiv:1911.03020http:\/\/arxiv.org\/abs\/1911.03020"},{"key":"e_1_3_2_1_116_1","volume-title":"International Conference on Information and Knowledge Management, Proceedings (oct 2020","author":"Yan Shen","year":"2020","unstructured":"Shen Yan , Hsien Te Kao , and Emilio Ferrara . 2020 . Fair Class Balancing: Enhancing Model Fairness without Observing Sensitive Attributes . International Conference on Information and Knowledge Management, Proceedings (oct 2020 ), 1715\u20131724. https:\/\/doi.org\/10.1145\/3340531.3411980 10.1145\/3340531.3411980 Shen Yan, Hsien Te Kao, and Emilio Ferrara. 2020. Fair Class Balancing: Enhancing Model Fairness without Observing Sensitive Attributes. International Conference on Information and Knowledge Management, Proceedings (oct 2020), 1715\u20131724. https:\/\/doi.org\/10.1145\/3340531.3411980"},{"key":"e_1_3_2_1_117_1","volume-title":"Leibniz International Proceedings in Informatics, LIPIcs 192 (jun 2020","author":"Yang Ke","year":"2020","unstructured":"Ke Yang , Joshua R. Loftus , and Julia Stoyanovich . 2020 . Causal intersectionality for fair ranking . Leibniz International Proceedings in Informatics, LIPIcs 192 (jun 2020 ). https:\/\/doi.org\/10.48550\/arxiv.2006.08688 arxiv:2006.08688 10.48550\/arxiv.2006.08688 Ke Yang, Joshua R. Loftus, and Julia Stoyanovich. 2020. Causal intersectionality for fair ranking. Leibniz International Proceedings in Informatics, LIPIcs 192 (jun 2020). https:\/\/doi.org\/10.48550\/arxiv.2006.08688 arxiv:2006.08688"},{"key":"e_1_3_2_1_118_1","volume-title":"Understanding Rare Spurious Correlations in Neural Networks. (feb","author":"Yang Yao-Yuan","year":"2022","unstructured":"Yao-Yuan Yang , Chi-Ning Chou , and Kamalika Chaudhuri . 2022. 
Understanding Rare Spurious Correlations in Neural Networks. (feb 2022 ). https:\/\/doi.org\/10.48550\/arxiv.2202.05189 arxiv:2202.05189 10.48550\/arxiv.2202.05189 Yao-Yuan Yang, Chi-Ning Chou, and Kamalika Chaudhuri. 2022. Understanding Rare Spurious Correlations in Neural Networks. (feb 2022). https:\/\/doi.org\/10.48550\/arxiv.2202.05189 arxiv:2202.05189"},{"key":"e_1_3_2_1_119_1","volume-title":"Influence Function for Unbiased Recommendation. SIGIR 2020 - Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval (jul 2020)","author":"Yu Jiangxing","year":"2020","unstructured":"Jiangxing Yu , Hong Zhu , Chih Yao Chang , Xinhua Feng , Bowen Yuan , Xiuqiang He , and Zhenhua Dong . 2020 . Influence Function for Unbiased Recommendation. SIGIR 2020 - Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval (jul 2020) , 1929\u20131932. https:\/\/doi.org\/10.1145\/3397271.3401321 10.1145\/3397271.3401321 Jiangxing Yu, Hong Zhu, Chih Yao Chang, Xinhua Feng, Bowen Yuan, Xiuqiang He, and Zhenhua Dong. 2020. Influence Function for Unbiased Recommendation. SIGIR 2020 - Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval (jul 2020), 1929\u20131932. https:\/\/doi.org\/10.1145\/3397271.3401321"},{"key":"e_1_3_2_1_120_1","volume-title":"International Conference on Learning Representations.","author":"Yurochkin Mikhail","year":"2020","unstructured":"Mikhail Yurochkin , Amanda Bower , and Yuekai Sun . 2020 . Training individually fair ML models with sensitive subspace robustness . In International Conference on Learning Representations. Mikhail Yurochkin, Amanda Bower, and Yuekai Sun. 2020. Training individually fair ML models with sensitive subspace robustness. 
In International Conference on Learning Representations."},{"key":"e_1_3_2_1_121_1","volume-title":"Proceedings of the 30th International Conference on Machine Learning.","author":"Zemel Richard","year":"2013","unstructured":"Richard Zemel , Yu Wu , Kevin Swersky , Toniann Pitassi , and Cynthia Dwork . 2013 . Learning Fair Representations . In Proceedings of the 30th International Conference on Machine Learning. Richard Zemel, Yu Wu, Kevin Swersky, Toniann Pitassi, and Cynthia Dwork. 2013. Learning Fair Representations. In Proceedings of the 30th International Conference on Machine Learning."},{"key":"e_1_3_2_1_122_1","volume-title":"Proceedings of the AAAI Conference on Artificial Intelligence 36","author":"Zhang Rui","year":"2022","unstructured":"Rui Zhang and Shihua Zhang . 2022 . Rethinking Influence Functions of Neural Networks in the Over-Parameterized Regime . Proceedings of the AAAI Conference on Artificial Intelligence 36 , 8 (2022), 9082\u20139090. https:\/\/doi.org\/10.1609\/aaai.v36i8.20893 arxiv:2112.08297 10.1609\/aaai.v36i8.20893 Rui Zhang and Shihua Zhang. 2022. Rethinking Influence Functions of Neural Networks in the Over-Parameterized Regime. Proceedings of the AAAI Conference on Artificial Intelligence 36, 8 (2022), 9082\u20139090. https:\/\/doi.org\/10.1609\/aaai.v36i8.20893 arxiv:2112.08297"},{"key":"e_1_3_2_1_123_1","volume-title":"Towards Fair Classifiers Without Sensitive Attributes: Exploring Biases in Related Features. WSDM 2022 - Proceedings of the 15th ACM International Conference on Web Search and Data Mining (apr","author":"Zhao Tianxiang","year":"2021","unstructured":"Tianxiang Zhao , Enyan Dai , Kai Shu , and Suhang Wang . 2021 . Towards Fair Classifiers Without Sensitive Attributes: Exploring Biases in Related Features. WSDM 2022 - Proceedings of the 15th ACM International Conference on Web Search and Data Mining (apr 2021), 1433\u20131442. 
https:\/\/doi.org\/10.1145\/3488560.3498493 arxiv:2104.14537v4 10.1145\/3488560.3498493 Tianxiang Zhao, Enyan Dai, Kai Shu, and Suhang Wang. 2021. Towards Fair Classifiers Without Sensitive Attributes: Exploring Biases in Related Features. WSDM 2022 - Proceedings of the 15th ACM International Conference on Web Search and Data Mining (apr 2021), 1433\u20131442. https:\/\/doi.org\/10.1145\/3488560.3498493 arxiv:2104.14537v4"}],"event":{"name":"FAccT '23: the 2023 ACM Conference on Fairness, Accountability, and Transparency","acronym":"FAccT '23","location":"Chicago IL USA"},"container-title":["2023 ACM Conference on Fairness, Accountability, and Transparency"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3593013.3594039","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,6,12]],"date-time":"2023-06-12T12:07:06Z","timestamp":1686571626000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3593013.3594039"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,6,12]]},"references-count":120,"alternative-id":["10.1145\/3593013.3594039","10.1145\/3593013"],"URL":"https:\/\/doi.org\/10.1145\/3593013.3594039","relation":{},"subject":[],"published":{"date-parts":[[2023,6,12]]},"assertion":[{"value":"2023-06-12","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}