{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,2,21]],"date-time":"2025-02-21T01:05:44Z","timestamp":1740099944555,"version":"3.37.3"},"publisher-location":"New York, NY, USA","reference-count":55,"publisher":"ACM","funder":[{"DOI":"10.13039\/100014895","name":"Open Philanthropy Project","doi-asserted-by":"publisher","id":[{"id":"10.13039\/100014895","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2021,3,3]]},"DOI":"10.1145\/3442188.3445883","type":"proceedings-article","created":{"date-parts":[[2021,3,3]],"date-time":"2021-03-03T01:26:24Z","timestamp":1614734784000},"page":"196-205","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":12,"title":["Removing Spurious Features can Hurt Accuracy and Affect Groups Disproportionately"],"prefix":"10.1145","author":[{"given":"Fereshte","family":"Khani","sequence":"first","affiliation":[{"name":"Stanford University"}]},{"given":"Percy","family":"Liang","sequence":"additional","affiliation":[{"name":"Stanford University"}]}],"member":"320","published-online":{"date-parts":[[2021,3]]},"reference":[{"key":"e_1_3_2_2_1_1","first-page":"60","volume-title":"International Conference on Machine Learning (ICML)","author":"Agarwal Alekh","year":"2018","unstructured":"Alekh Agarwal , Alina Beygelzimer , Miroslav Dudik , John Langford , and Hanna Wallach . A reductions approach to fair classification . In International Conference on Machine Learning (ICML) , pages 60 -- 69 , 2018 . Alekh Agarwal, Alina Beygelzimer, Miroslav Dudik, John Langford, and Hanna Wallach. A reductions approach to fair classification. In International Conference on Machine Learning (ICML), pages 60--69, 2018."},{"key":"e_1_3_2_2_2_1","volume-title":"Benign overfitting in linear regression. arXiv","author":"Bartlett Peter L.","year":"2019","unstructured":"Peter L. Bartlett , Philip M. Long , Gabor Lugosi , and Alexander Tsigler . Benign overfitting in linear regression. arXiv , 2019 . Peter L. Bartlett, Philip M. Long, Gabor Lugosi, and Alexander Tsigler. Benign overfitting in linear regression. arXiv, 2019."},{"key":"e_1_3_2_2_3_1","volume-title":"Two models of double descent for weak features. arXiv","author":"Belkin Mikhail","year":"2019","unstructured":"Mikhail Belkin , Daniel Hsu , and Ji Xu . Two models of double descent for weak features. arXiv , 2019 . Mikhail Belkin, Daniel Hsu, and Ji Xu. Two models of double descent for weak features. arXiv, 2019."},{"key":"e_1_3_2_2_4_1","volume-title":"H Chi. Data decisions and theoretical implications when adversarially learning fair representations. arXiv preprint arXiv:1707.00075","author":"Beutel Alex","year":"2017","unstructured":"Alex Beutel , Jilin Chen , Zhe Zhao , and Ed H Chi. Data decisions and theoretical implications when adversarially learning fair representations. arXiv preprint arXiv:1707.00075 , 2017 . Alex Beutel, Jilin Chen, Zhe Zhao, and Ed H Chi. Data decisions and theoretical implications when adversarially learning fair representations. arXiv preprint arXiv:1707.00075, 2017."},{"key":"e_1_3_2_2_5_1","first-page":"4349","volume-title":"Advances in Neural Information Processing Systems (NeurIPS)","author":"Bolukbasi Tolga","year":"2016","unstructured":"Tolga Bolukbasi , Kai-Wei Chang , James Y Zou , Venkatesh Saligrama , and Adam T Kalai . 
Man is to computer programmer as woman is to homemaker? debiasing word embeddings . In Advances in Neural Information Processing Systems (NeurIPS) , pages 4349 -- 4357 , 2016 . Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. In Advances in Neural Information Processing Systems (NeurIPS), pages 4349--4357, 2016."},{"key":"e_1_3_2_2_6_1","volume-title":"Advances in Neural Information Processing Systems (NeurIPS)","author":"Carmon Yair","year":"2019","unstructured":"Yair Carmon , Aditi Raghunathan , Ludwig Schmidt , Percy Liang , and John C. Duchi . Unlabeled data improves adversarial robustness . In Advances in Neural Information Processing Systems (NeurIPS) , 2019 . Yair Carmon, Aditi Raghunathan, Ludwig Schmidt, Percy Liang, and John C. Duchi. Unlabeled data improves adversarial robustness. In Advances in Neural Information Processing Systems (NeurIPS), 2019."},{"key":"e_1_3_2_2_7_1","first-page":"7801","volume-title":"Path-specific counterfactual fairness","author":"Chiappa Silvia","year":"2019","unstructured":"Silvia Chiappa . Path-specific counterfactual fairness . In Association for the Advancement of Artificial Intelligence (AAAI), volume 33 , pages 7801 -- 7808 , 2019 . Silvia Chiappa. Path-specific counterfactual fairness. In Association for the Advancement of Artificial Intelligence (AAAI), volume 33, pages 7801--7808, 2019."},{"key":"e_1_3_2_2_8_1","volume-title":"Flexibly fair representation learning by disentanglement. arXiv preprint arXiv:1906.02589","author":"Creager Elliot","year":"2019","unstructured":"Elliot Creager , David Madras , J\u00f6rn-Henrik Jacobsen , Marissa A Weis , Kevin Swersky , Toniann Pitassi , and Richard Zemel . Flexibly fair representation learning by disentanglement. arXiv preprint arXiv:1906.02589 , 2019 . Elliot Creager, David Madras, J\u00f6rn-Henrik Jacobsen, Marissa A Weis, Kevin Swersky, Toniann Pitassi, and Richard Zemel. Flexibly fair representation learning by disentanglement. arXiv preprint arXiv:1906.02589, 2019."},{"key":"e_1_3_2_2_9_1","first-page":"67","volume-title":"Measuring and mitigating unintended bias in text classification","author":"Dixon Lucas","year":"2018","unstructured":"Lucas Dixon , John Li , Jeffrey Sorensen , Nithum Thain , and Lucy Vasserman . Measuring and mitigating unintended bias in text classification . In Association for the Advancement of Artificial Intelligence (AAAI), pages 67 -- 73 , 2018 . Lucas Dixon, John Li, Jeffrey Sorensen, Nithum Thain, and Lucy Vasserman. Measuring and mitigating unintended bias in text classification. In Association for the Advancement of Artificial Intelligence (AAAI), pages 67--73, 2018."},{"key":"e_1_3_2_2_10_1","volume-title":"International Conference on Machine Learning (ICML)","author":"Dutta Sanghamitra","year":"2020","unstructured":"Sanghamitra Dutta , Dennis Wei , Hazar Yueksel , Pin-Yu Chen , Sijia Liu , and Kush R. Varshney . Is there a trade-off between fairness and accuracy? a perspective using mismatched hypothesis testing . In International Conference on Machine Learning (ICML) , 2020 . Sanghamitra Dutta, Dennis Wei, Hazar Yueksel, Pin-Yu Chen, Sijia Liu, and Kush R. Varshney. Is there a trade-off between fairness and accuracy? a perspective using mismatched hypothesis testing. 
In International Conference on Machine Learning (ICML), 2020."},{"key":"e_1_3_2_2_11_1","volume-title":"Moving object detection in spatial domain using background removal techniques-state-of-art. Recent patents on computer science, 1(1):32--54","author":"Elhabian Shireen Y","year":"2008","unstructured":"Shireen Y Elhabian , Khaled M El-Sayed , and Sumaya H Ahmed . Moving object detection in spatial domain using background removal techniques-state-of-art. Recent patents on computer science, 1(1):32--54 , 2008 . Shireen Y Elhabian, Khaled M El-Sayed, and Sumaya H Ahmed. Moving object detection in spatial domain using background removal techniques-state-of-art. Recent patents on computer science, 1(1):32--54, 2008."},{"key":"e_1_3_2_2_12_1","doi-asserted-by":"publisher","DOI":"10.1007\/s10994-017-5663-3"},{"key":"e_1_3_2_2_13_1","doi-asserted-by":"publisher","DOI":"10.1177\/0193841X04266432"},{"key":"e_1_3_2_2_14_1","doi-asserted-by":"publisher","DOI":"10.1080\/03610926.2015.1100742"},{"key":"e_1_3_2_2_15_1","first-page":"219","volume-title":"H Chi, and Alex Beutel. Counterfactual fairness in text classification through robustness","author":"Garg Sahaj","year":"2019","unstructured":"Sahaj Garg , Vincent Perot , Nicole Limtiaco , Ankur Taly , Ed H Chi, and Alex Beutel. Counterfactual fairness in text classification through robustness . In Association for the Advancement of Artificial Intelligence (AAAI), pages 219 -- 226 , 2019 . Sahaj Garg, Vincent Perot, Nicole Limtiaco, Ankur Taly, Ed H Chi, and Alex Beutel. Counterfactual fairness in text classification through robustness. In Association for the Advancement of Artificial Intelligence (AAAI), pages 219--226, 2019."},{"key":"e_1_3_2_2_16_1","volume-title":"International Conference on Learning Representations (ICLR)","author":"Goodfellow Ian J","year":"2015","unstructured":"Ian J Goodfellow , Jonathon Shlens , and Christian Szegedy . Explaining and harnessing adversarial examples . In International Conference on Learning Representations (ICLR) , 2015 . Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. In International Conference on Learning Representations (ICLR), 2015."},{"key":"e_1_3_2_2_17_1","first-page":"6151","volume-title":"Advances in Neural Information Processing Systems (NeurIPS)","author":"Gunasekar Suriya","year":"2017","unstructured":"Suriya Gunasekar , Blake E Woodworth , Srinadh Bhojanapalli , Behnam Neyshabur , and Nati Srebro . Implicit regularization in matrix factorization . In Advances in Neural Information Processing Systems (NeurIPS) , pages 6151 -- 6159 , 2017 . Suriya Gunasekar, Blake E Woodworth, Srinadh Bhojanapalli, Behnam Neyshabur, and Nati Srebro. Implicit regularization in matrix factorization. In Advances in Neural Information Processing Systems (NeurIPS), pages 6151--6159, 2017."},{"key":"e_1_3_2_2_18_1","first-page":"3315","volume-title":"Advances in Neural Information Processing Systems (NeurIPS)","author":"Hardt Moritz","year":"2016","unstructured":"Moritz Hardt , Eric Price , and Nathan Srebo . Equality of opportunity in supervised learning . In Advances in Neural Information Processing Systems (NeurIPS) , pages 3315 -- 3323 , 2016 . Moritz Hardt, Eric Price, and Nathan Srebo. Equality of opportunity in supervised learning. 
In Advances in Neural Information Processing Systems (NeurIPS), pages 3315--3323, 2016."},{"key":"e_1_3_2_2_19_1","series-title":"ETS Research Report Series","volume-title":"Causation and race","author":"Holland Paul W","year":"2003","unstructured":"Paul W Holland . Causation and race . ETS Research Report Series , 2003 (1), 2003. Paul W Holland. Causation and race. ETS Research Report Series, 2003(1), 2003."},{"key":"e_1_3_2_2_20_1","volume-title":"Adversarial examples are not bugs, they are features. arXiv preprint arXiv:1905.02175","author":"Ilyas Andrew","year":"2019","unstructured":"Andrew Ilyas , Shibani Santurkar , Dimitris Tsipras , Logan Engstrom , Brandon Tran , and Aleksander Madry . Adversarial examples are not bugs, they are features. arXiv preprint arXiv:1905.02175 , 2019 . Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, and Aleksander Madry. Adversarial examples are not bugs, they are features. arXiv preprint arXiv:1905.02175, 2019."},{"key":"e_1_3_2_2_21_1","volume-title":"International Conference on Machine Learning(ICML)","author":"Khani Fereshte","year":"2020","unstructured":"Fereshte Khani and Percy Liang . Feature noise induces loss discrepancy across groups . In International Conference on Machine Learning(ICML) , 2020 . Fereshte Khani and Percy Liang. Feature noise induces loss discrepancy across groups. In International Conference on Machine Learning(ICML), 2020."},{"key":"e_1_3_2_2_22_1","volume-title":"Maximum weighted loss discrepancy. arXiv preprint arXiv:1906.03518","author":"Khani Fereshte","year":"2019","unstructured":"Fereshte Khani , Aditi Raghunathan , and Percy Liang . Maximum weighted loss discrepancy. arXiv preprint arXiv:1906.03518 , 2019 . Fereshte Khani, Aditi Raghunathan, and Percy Liang. Maximum weighted loss discrepancy. arXiv preprint arXiv:1906.03518, 2019."},{"key":"e_1_3_2_2_23_1","first-page":"656","volume-title":"Advances in Neural Information Processing Systems (NeurIPS)","author":"Kilbertus Niki","year":"2017","unstructured":"Niki Kilbertus , Mateo Rojas Carulla , Giambattista Parascandolo , Moritz Hardt , Dominik Janzing , and Bernhard Sch\u00f6lkopf . Avoiding discrimination through causal reasoning . In Advances in Neural Information Processing Systems (NeurIPS) , pages 656 -- 666 , 2017 . Niki Kilbertus, Mateo Rojas Carulla, Giambattista Parascandolo, Moritz Hardt, Dominik Janzing, and Bernhard Sch\u00f6lkopf. Avoiding discrimination through causal reasoning. In Advances in Neural Information Processing Systems (NeurIPS), pages 656--666, 2017."},{"key":"e_1_3_2_2_24_1","doi-asserted-by":"publisher","DOI":"10.1145\/3328526.3329621"},{"key":"e_1_3_2_2_25_1","first-page":"4069","volume-title":"Advances in Neural Information Processing Systems (NeurIPS)","author":"Kusner Matt J","year":"2017","unstructured":"Matt J Kusner , Joshua R Loftus , Chris Russell , and Ricardo Silva . Counterfactual fairness . In Advances in Neural Information Processing Systems (NeurIPS) , pages 4069 -- 4079 , 2017 . Matt J Kusner, Joshua R Loftus, Chris Russell, and Ricardo Silva. Counterfactual fairness. In Advances in Neural Information Processing Systems (NeurIPS), pages 4069--4079, 2017."},{"key":"e_1_3_2_2_26_1","doi-asserted-by":"publisher","DOI":"10.1109\/5.726791"},{"key":"e_1_3_2_2_27_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2015.425"},{"key":"e_1_3_2_2_28_1","volume-title":"Causal reasoning for algorithmic fairness. 
arXiv preprint arXiv:1805.05859","author":"Loftus Joshua R","year":"2018","unstructured":"Joshua R Loftus , Chris Russell , Matt J Kusner , and Ricardo Silva . Causal reasoning for algorithmic fairness. arXiv preprint arXiv:1805.05859 , 2018 . Joshua R Loftus, Chris Russell, Matt J Kusner, and Ricardo Silva. Causal reasoning for algorithmic fairness. arXiv preprint arXiv:1805.05859, 2018."},{"key":"e_1_3_2_2_29_1","volume-title":"The variational fair autoencoder. arXiv preprint arXiv:1511.00830","author":"Louizos Christos","year":"2015","unstructured":"Christos Louizos , Kevin Swersky , Yujia Li , Max Welling , and Richard Zemel . The variational fair autoencoder. arXiv preprint arXiv:1511.00830 , 2015 . Christos Louizos, Kevin Swersky, Yujia Li, Max Welling, and Richard Zemel. The variational fair autoencoder. arXiv preprint arXiv:1511.00830, 2015."},{"key":"e_1_3_2_2_30_1","volume-title":"Learning adversarially fair and transferable representations. arXiv preprint arXiv:1802.06309","author":"Madras David","year":"2018","unstructured":"David Madras , Elliot Creager , Toniann Pitassi , and Richard Zemel . Learning adversarially fair and transferable representations. arXiv preprint arXiv:1802.06309 , 2018 . David Madras, Elliot Creager, Toniann Pitassi, and Richard Zemel. Learning adversarially fair and transferable representations. arXiv preprint arXiv:1802.06309, 2018."},{"key":"e_1_3_2_2_31_1","volume-title":"Towards deep learning models resistant to adversarial attacks (published at ICLR","author":"Madry Aleksander","year":"2018","unstructured":"Aleksander Madry , Aleksandar Makelov , Ludwig Schmidt , Dimitris Tsipras , and Adrian Vladu . Towards deep learning models resistant to adversarial attacks (published at ICLR 2018 ). arXiv, 2017. Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks (published at ICLR 2018). arXiv, 2017."},{"key":"e_1_3_2_2_32_1","volume-title":"Cheng Soon Ong, and Robert C Williamson. Provably fair representations. arXiv preprint arXiv:1710.04394","author":"McNamara Daniel","year":"2017","unstructured":"Daniel McNamara , Cheng Soon Ong, and Robert C Williamson. Provably fair representations. arXiv preprint arXiv:1710.04394 , 2017 . Daniel McNamara, Cheng Soon Ong, and Robert C Williamson. Provably fair representations. arXiv preprint arXiv:1710.04394, 2017."},{"key":"e_1_3_2_2_33_1","volume-title":"The generalization error of random features regression: Precise asymptotics and double descent curve. arXiv preprint arXiv:1908.05355","author":"Mei Song","year":"2019","unstructured":"Song Mei and Andrea Montanari . The generalization error of random features regression: Precise asymptotics and double descent curve. arXiv preprint arXiv:1908.05355 , 2019 . Song Mei and Andrea Montanari. The generalization error of random features regression: Precise asymptotics and double descent curve. arXiv preprint arXiv:1908.05355, 2019."},{"key":"e_1_3_2_2_34_1","volume-title":"Advances in Neural Information Processing Systems (NeurIPS)","author":"Najafi Amir","year":"2019","unstructured":"Amir Najafi , Shin ichi Maeda , Masanori Koyama , and Takeru Miyato . Robustness to adversarial perturbations in learning from incomplete data . In Advances in Neural Information Processing Systems (NeurIPS) , 2019 . Amir Najafi, Shin ichi Maeda, Masanori Koyama, and Takeru Miyato. Robustness to adversarial perturbations in learning from incomplete data. 
In Advances in Neural Information Processing Systems (NeurIPS), 2019."},{"key":"e_1_3_2_2_35_1","volume-title":"Adversarial robustness may be at odds with simplicity. arXiv preprint arXiv:1901.00532","author":"Nakkiran Preetum","year":"2019","unstructured":"Preetum Nakkiran . Adversarial robustness may be at odds with simplicity. arXiv preprint arXiv:1901.00532 , 2019 . Preetum Nakkiran. Adversarial robustness may be at odds with simplicity. arXiv preprint arXiv:1901.00532, 2019."},{"key":"e_1_3_2_2_36_1","volume-title":"Deep double descent: Where bigger models and more data hurt. arXiv preprint arXiv:1912.02292","author":"Nakkiran Preetum","year":"2019","unstructured":"Preetum Nakkiran , Gal Kaplun , Yamini Bansal , Tristan Yang , Boaz Barak , and Ilya Sutskever . Deep double descent: Where bigger models and more data hurt. arXiv preprint arXiv:1912.02292 , 2019 . Preetum Nakkiran, Gal Kaplun, Yamini Bansal, Tristan Yang, Boaz Barak, and Ilya Sutskever. Deep double descent: Where bigger models and more data hurt. arXiv preprint arXiv:1912.02292, 2019."},{"key":"e_1_3_2_2_37_1","first-page":"4951","volume-title":"International Conference on Machine Learning (ICML)","author":"Oymak Samet","year":"2019","unstructured":"Samet Oymak and Mahdi Soltanolkotabi . Overparameterized nonlinear learning: Gradient descent takes the shortest path ? In International Conference on Machine Learning (ICML) , pages 4951 -- 4960 , 2019 . Samet Oymak and Mahdi Soltanolkotabi. Overparameterized nonlinear learning: Gradient descent takes the shortest path? In International Conference on Machine Learning (ICML), pages 4951--4960, 2019."},{"key":"e_1_3_2_2_38_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2019.00842"},{"key":"e_1_3_2_2_39_1","volume-title":"International Conference on Machine Learning (ICML)","author":"Raghunathan Aditi","year":"2020","unstructured":"Aditi Raghunathan , Sang Michael Xie , Fanny Yang , John C. Duchi , and Percy Liang . Understanding and mitigating the tradeoff between robustness and accuracy . In International Conference on Machine Learning (ICML) , 2020 . Aditi Raghunathan, Sang Michael Xie, Fanny Yang, John C. Duchi, and Percy Liang. Understanding and mitigating the tradeoff between robustness and accuracy. In International Conference on Machine Learning (ICML), 2020."},{"key":"e_1_3_2_2_40_1","volume-title":"International Conference on Knowledge Discovery and Data Mining (KDD)","author":"Ribeiro Marco Tulio","year":"2016","unstructured":"Marco Tulio Ribeiro , Sameer Singh , and Carlos Guestrin . \" Why Should I Trust You?\" : Explaining the predictions of any classifier . In International Conference on Knowledge Discovery and Data Mining (KDD) , 2016 . Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. \"Why Should I Trust You?\": Explaining the predictions of any classifier. In International Conference on Knowledge Discovery and Data Mining (KDD), 2016."},{"key":"e_1_3_2_2_41_1","volume-title":"Mitigating gender bias in natural language processing: Literature review. arXiv preprint arXiv:1906.08976","author":"Sun Tony","year":"2019","unstructured":"Tony Sun , Andrew Gaut , Shirlyn Tang , Yuxin Huang , Mai ElSherief , Jieyu Zhao , Diba Mirza , Elizabeth Belding , Kai-Wei Chang , and William Yang Wang . Mitigating gender bias in natural language processing: Literature review. arXiv preprint arXiv:1906.08976 , 2019 . Tony Sun, Andrew Gaut, Shirlyn Tang, Yuxin Huang, Mai ElSherief, Jieyu Zhao, Diba Mirza, Elizabeth Belding, Kai-Wei Chang, and William Yang Wang. 
Mitigating gender bias in natural language processing: Literature review. arXiv preprint arXiv:1906.08976, 2019."},{"key":"e_1_3_2_2_42_1","volume-title":"International Conference on Learning Representations (ICLR)","author":"Szegedy Christian","year":"2014","unstructured":"Christian Szegedy , Wojciech Zaremba , Ilya Sutskever , Joan Bruna , Dumitru Erhan , Ian Goodfellow , and Rob Fergus . Intriguing properties of neural networks . In International Conference on Learning Representations (ICLR) , 2014 . Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. In International Conference on Learning Representations (ICLR), 2014."},{"key":"e_1_3_2_2_43_1","volume-title":"There is no free lunch in adversarial robustness (but there are unexpected benefits). arXiv preprint arXiv:1805.12152","author":"Tsipras Dimitris","year":"2018","unstructured":"Dimitris Tsipras , Shibani Santurkar , Logan Engstrom , Alexander Turner , and Aleksander Madry . There is no free lunch in adversarial robustness (but there are unexpected benefits). arXiv preprint arXiv:1805.12152 , 2018 . Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, and Aleksander Madry. There is no free lunch in adversarial robustness (but there are unexpected benefits). arXiv preprint arXiv:1805.12152, 2018."},{"key":"e_1_3_2_2_44_1","volume-title":"International Conference on Learning Representations (ICLR)","author":"Tsipras Dimitris","year":"2019","unstructured":"Dimitris Tsipras , Shibani Santurkar , Logan Engstrom , Alexander Turner , and Aleksander Madry . Robustness may be at odds with accuracy . In International Conference on Learning Representations (ICLR) , 2019 . Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, and Aleksander Madry. Robustness may be at odds with accuracy. In International Conference on Learning Representations (ICLR), 2019."},{"key":"e_1_3_2_2_45_1","volume-title":"Advances in Neural Information Processing Systems (NeurIPS)","author":"Uesato Jonathan","year":"2019","unstructured":"Jonathan Uesato , Jean-Baptiste Alayrac , Po-Sen Huang , Robert Stanforth , Alhussein Fawzi , and Pushmeet Kohli . Are labels required for improving adversarial robustness ? In Advances in Neural Information Processing Systems (NeurIPS) , 2019 . Jonathan Uesato, Jean-Baptiste Alayrac, Po-Sen Huang, Robert Stanforth, Alhussein Fawzi, and Pushmeet Kohli. Are labels required for improving adversarial robustness? In Advances in Neural Information Processing Systems (NeurIPS), 2019."},{"key":"e_1_3_2_2_46_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2019.00541"},{"key":"e_1_3_2_2_47_1","first-page":"1920","volume-title":"Conference on Learning Theory (COLT)","author":"Woodworth Blake","year":"2017","unstructured":"Blake Woodworth , Suriya Gunasekar , Mesrob I. Ohannessian , and Nathan Srebro . Learning non-discriminatory predictors . In Conference on Learning Theory (COLT) , pages 1920 -- 1953 , 2017 . Blake Woodworth, Suriya Gunasekar, Mesrob I. Ohannessian, and Nathan Srebro. Learning non-discriminatory predictors. In Conference on Learning Theory (COLT), pages 1920--1953, 2017."},{"key":"e_1_3_2_2_48_1","volume-title":"Noise or signal: The role of image backgrounds in object recognition. arXiv preprint arXiv:2006.09994","author":"Xiao Kai","year":"2020","unstructured":"Kai Xiao , Logan Engstrom , Andrew Ilyas , and Aleksander Madry . 
Noise or signal: The role of image backgrounds in object recognition. arXiv preprint arXiv:2006.09994 , 2020 . Kai Xiao, Logan Engstrom, Andrew Ilyas, and Aleksander Madry. Noise or signal: The role of image backgrounds in object recognition. arXiv preprint arXiv:2006.09994, 2020."},{"key":"e_1_3_2_2_49_1","volume-title":"Advances in Neural Information Processing Systems (NeurIPS)","author":"Yin Dong","year":"2019","unstructured":"Dong Yin , Raphael Gontijo Lopes , Jonathon Shlens , Ekin D Cubuk , and Justin Gilmer . A fourier perspective on model robustness in computer vision . In Advances in Neural Information Processing Systems (NeurIPS) , 2019 . Dong Yin, Raphael Gontijo Lopes, Jonathon Shlens, Ekin D Cubuk, and Justin Gilmer. A fourier perspective on model robustness in computer vision. In Advances in Neural Information Processing Systems (NeurIPS), 2019."},{"key":"e_1_3_2_2_50_1","volume-title":"Sensei: Sensitive set invariance for enforcing individual fairness. arXiv preprint arXiv:2006.14168","author":"Yurochkin Mikhail","year":"2020","unstructured":"Mikhail Yurochkin and Yuekai Sun . Sensei: Sensitive set invariance for enforcing individual fairness. arXiv preprint arXiv:2006.14168 , 2020 . Mikhail Yurochkin and Yuekai Sun. Sensei: Sensitive set invariance for enforcing individual fairness. arXiv preprint arXiv:2006.14168, 2020."},{"key":"e_1_3_2_2_51_1","first-page":"325","volume-title":"International Conference on Machine Learning (ICML)","author":"Zemel Richard","year":"2013","unstructured":"Richard Zemel , Yu Wu , Kevin Swersky , Toniann Pitassi , and Cynthia Dwork . Learning fair representations . In International Conference on Machine Learning (ICML) , pages 325 -- 333 , 2013 . Richard Zemel, Yu Wu, Kevin Swersky, Toniann Pitassi, and Cynthia Dwork. Learning fair representations. In International Conference on Machine Learning (ICML), pages 325--333, 2013."},{"key":"e_1_3_2_2_52_1","volume-title":"International Conference on Machine Learning(ICML)","author":"Zhang Hongyang","year":"2019","unstructured":"Hongyang Zhang , Yaodong Yu , Jiantao Jiao , Eric P Xing , Laurent El Ghaoui , and Michael I Jordan . Theoretically principled trade-off between robustness and accuracy . In International Conference on Machine Learning(ICML) , 2019 . Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric P Xing, Laurent El Ghaoui, and Michael I Jordan. Theoretically principled trade-off between robustness and accuracy. In International Conference on Machine Learning(ICML), 2019."},{"key":"e_1_3_2_2_53_1","volume-title":"Advances in Neural Information Processing Systems (NeurIPS)","author":"Zhao H.","year":"2019","unstructured":"H. Zhao and Geoff Gordon . Inherent tradeoffs in learning fair representations . In Advances in Neural Information Processing Systems (NeurIPS) , 2019 . H. Zhao and Geoff Gordon. Inherent tradeoffs in learning fair representations. In Advances in Neural Information Processing Systems (NeurIPS), 2019."},{"key":"e_1_3_2_2_54_1","volume-title":"Gender bias in coreference resolution: Evaluation and debiasing methods","author":"Zhao Jieyu","year":"2018","unstructured":"Jieyu Zhao , Tianlu Wang , Mark Yatskar , Vicente Ordo\u00f1ez , and Kai-Wei Chang . Gender bias in coreference resolution: Evaluation and debiasing methods . In North American Association for Computational Linguistics (NAACL) , 2018 . Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordo\u00f1ez, and Kai-Wei Chang. Gender bias in coreference resolution: Evaluation and debiasing methods. 
In North American Association for Computational Linguistics (NAACL), 2018."},{"key":"e_1_3_2_2_55_1","volume-title":"Learning gender-neutral word embeddings. arXiv preprint arXiv:1809.01496","author":"Zhao Jieyu","year":"2018","unstructured":"Jieyu Zhao , Yichao Zhou , Zeyu Li , Wei Wang , and Kai-Wei Chang . Learning gender-neutral word embeddings. arXiv preprint arXiv:1809.01496 , 2018 . Jieyu Zhao, Yichao Zhou, Zeyu Li, Wei Wang, and Kai-Wei Chang. Learning gender-neutral word embeddings. arXiv preprint arXiv:1809.01496, 2018."}],"event":{"name":"FAccT '21: 2021 ACM Conference on Fairness, Accountability, and Transparency","sponsor":["ACM Association for Computing Machinery"],"location":"Virtual Event Canada","acronym":"FAccT '21"},"container-title":["Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3442188.3445883","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,2,2]],"date-time":"2023-02-02T20:00:49Z","timestamp":1675368049000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3442188.3445883"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2021,3]]},"references-count":55,"alternative-id":["10.1145\/3442188.3445883","10.1145\/3442188"],"URL":"https:\/\/doi.org\/10.1145\/3442188.3445883","relation":{},"subject":[],"published":{"date-parts":[[2021,3]]},"assertion":[{"value":"2021-03-01","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}
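Below is a minimal sketch of how a Crossref work record like the one above can be fetched and read. It assumes Python 3 with the third-party `requests` package; the endpoint is the public Crossref REST API (`https://api.crossref.org/works/{DOI}`), and the field names (`message`, `title`, `author`, `references-count`) follow the record itself.

```python
# Minimal sketch: fetch the Crossref work record above and read a few fields.
# Assumes the `requests` package is installed (pip install requests).
import requests

DOI = "10.1145/3442188.3445883"
resp = requests.get(f"https://api.crossref.org/works/{DOI}", timeout=30)
resp.raise_for_status()

work = resp.json()["message"]  # the payload sits under "message", as above
print(work["title"][0])        # paper title
print(", ".join(f'{a["given"]} {a["family"]}' for a in work["author"]))
print(work["references-count"], "references deposited")
```

Note that each entry in the record's `reference` array carries either a structured `DOI` field or an `unstructured` citation string (some carry both), so a consumer iterating over `work["reference"]` should handle both shapes.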