{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,2,21]],"date-time":"2025-02-21T01:19:32Z","timestamp":1740100772781,"version":"3.37.3"},"publisher-location":"New York, NY, USA","reference-count":105,"publisher":"ACM","funder":[{"DOI":"10.13039\/501100003725","name":"National Research Foundation of Korea","doi-asserted-by":"publisher","award":["NRF-2017R1E1A1A01076400"],"id":[{"id":"10.13039\/501100003725","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100010446","name":"Institute for Basic Science","doi-asserted-by":"publisher","award":["IBS-R029-C2"],"id":[{"id":"10.13039\/501100010446","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2022,6,21]]},"DOI":"10.1145\/3531146.3534628","type":"proceedings-article","created":{"date-parts":[[2022,6,20]],"date-time":"2022-06-20T14:27:10Z","timestamp":1655735230000},"page":"2103-2113","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":20,"title":["The Conflict Between Explainable and Accountable Decision-Making Algorithms"],"prefix":"10.1145","author":[{"given":"Gabriel","family":"Lima","sequence":"first","affiliation":[{"name":"School of Computing, KAIST, Republic of Korea and Data Science Group, Institute for Basic Science, Republic of Korea"}]},{"given":"Nina","family":"Grgi\u0107-Hla\u010da","sequence":"additional","affiliation":[{"name":"Max Planck Institute for Software Systems, Germany and Max Planck Institute for Research on Collective Goods, Germany"}]},{"given":"Jin Keun","family":"Jeong","sequence":"additional","affiliation":[{"name":"School of Law, Kangwon National University, Republic of Korea"}]},{"given":"Meeyoung","family":"Cha","sequence":"additional","affiliation":[{"name":"Data Science Group, Institute for Basic Science, Republic of Korea and School of Computing, KAIST, Republic of Korea"}]}],"member":"320","published-online":{"date-parts":[[2022,6,20]]},"reference":[{"key":"e_1_3_2_1_1_1","doi-asserted-by":"publisher","DOI":"10.1177\/2053951719860542"},{"key":"e_1_3_2_1_2_1","unstructured":"Julia Angwin Madeleine Varner and Ariana Tobin. 2016. Machine Bias: There\u2019s Software Used Across the Country to Predict Future Criminals. And it\u2019s Biased Against Blacks.https:\/\/www.propublica.org\/article\/machine-bias-risk-assessments-in-criminal-sentencing. Julia Angwin Madeleine Varner and Ariana Tobin. 2016. Machine Bias: There\u2019s Software Used Across the Country to Predict Future Criminals. And it\u2019s Biased Against Blacks.https:\/\/www.propublica.org\/article\/machine-bias-risk-assessments-in-criminal-sentencing."},{"key":"e_1_3_2_1_3_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.inffus.2019.12.012"},{"key":"e_1_3_2_1_4_1","unstructured":"Peter\u00a0M Asaro. 2016. The Liability Problem for Autonomous Artificial Agents.. In AAAI Spring Symposia. 190\u2013194. Peter\u00a0M Asaro. 2016. The Liability Problem for Autonomous Artificial Agents.. In AAAI Spring Symposia. 
190\u2013194."},{"key":"e_1_3_2_1_5_1","doi-asserted-by":"publisher","DOI":"10.1145\/3339904"},{"key":"e_1_3_2_1_6_1","doi-asserted-by":"publisher","DOI":"10.1609\/hcomp.v7i1.5285"},{"key":"e_1_3_2_1_7_1","doi-asserted-by":"publisher","DOI":"10.1145\/3411764.3445717"},{"key":"e_1_3_2_1_8_1","first-page":"671","article-title":"Big data\u2019s disparate impact","volume":"104","author":"Barocas Solon","year":"2016","unstructured":"Solon Barocas and Andrew\u00a0 D Selbst . 2016 . Big data\u2019s disparate impact . Calif. L. Rev. 104 (2016), 671 . Solon Barocas and Andrew\u00a0D Selbst. 2016. Big data\u2019s disparate impact. Calif. L. Rev. 104(2016), 671.","journal-title":"Calif. L. Rev."},{"key":"e_1_3_2_1_9_1","doi-asserted-by":"publisher","DOI":"10.1145\/3351095.3372830"},{"key":"e_1_3_2_1_10_1","unstructured":"Adrien Bibal Michael Lognoul Alexandre de Streel and Beno\u00eet Fr\u00e9nay. 2020. Legal requirements on explainability in machine learning. Artificial Intelligence and Law(2020) 1\u201321. Adrien Bibal Michael Lognoul Alexandre de Streel and Beno\u00eet Fr\u00e9nay. 2020. Legal requirements on explainability in machine learning. Artificial Intelligence and Law(2020) 1\u201321."},{"volume-title":"The moral psychology of AI and the ethical opt-out problem","author":"Bonnefon Jean-Fran\u00e7ois","key":"e_1_3_2_1_11_1","unstructured":"Jean-Fran\u00e7ois Bonnefon , Azim Shariff , and Iyad Rahwan . 2020. The moral psychology of AI and the ethical opt-out problem . Oxford University Press , Oxford, UK . Jean-Fran\u00e7ois Bonnefon, Azim Shariff, and Iyad Rahwan. 2020. The moral psychology of AI and the ethical opt-out problem. Oxford University Press, Oxford, UK."},{"key":"e_1_3_2_1_12_1","volume-title":"Analysing and assessing accountability: A conceptual framework. European law journal 13, 4","author":"Bovens Mark","year":"2007","unstructured":"Mark Bovens . 2007. Analysing and assessing accountability: A conceptual framework. European law journal 13, 4 ( 2007 ), 447\u2013468. Mark Bovens. 2007. Analysing and assessing accountability: A conceptual framework. European law journal 13, 4 (2007), 447\u2013468."},{"key":"e_1_3_2_1_13_1","unstructured":"Harry Brignull Marc Miquel Jeremy Rosenberg and James Offer. 2015. Dark Patterns-User Interfaces Designed to Trick People. Harry Brignull Marc Miquel Jeremy Rosenberg and James Offer. 2015. Dark Patterns-User Interfaces Designed to Trick People."},{"key":"e_1_3_2_1_14_1","volume-title":"Can artificial intelligences be moral agents?New Ideas in Psychology 54","author":"Bro\u017cek Bartosz","year":"2019","unstructured":"Bartosz Bro\u017cek and Bartosz Janik . 2019. Can artificial intelligences be moral agents?New Ideas in Psychology 54 ( 2019 ), 101\u2013106. Bartosz Bro\u017cek and Bartosz Janik. 2019. Can artificial intelligences be moral agents?New Ideas in Psychology 54 (2019), 101\u2013106."},{"key":"e_1_3_2_1_15_1","doi-asserted-by":"crossref","unstructured":"Joanna\u00a0J Bryson. 2010. Robots should be slaves. Close Engagements with Artificial Companions: Key social psychological ethical and design issues 8(2010) 63\u201374. Joanna\u00a0J Bryson. 2010. Robots should be slaves. 
,{"key":"e_1_3_2_1_16_1","doi-asserted-by":"publisher","DOI":"10.1007\/s10506-017-9214-9"},{"key":"e_1_3_2_1_17_1","doi-asserted-by":"publisher","DOI":"10.1145\/3449287"},{"key":"e_1_3_2_1_18_1","doi-asserted-by":"publisher","DOI":"10.1177\/2053951715622512"},{"key":"e_1_3_2_1_19_1","unstructured":"Stephen Cave, Claire Craig, Kanta Dihal, Sarah Dillon, Jessica Montgomery, Beth Singler, and Lindsay Taylor. 2018. Portrayals and perceptions of AI and why they matter. (2018)."},{"key":"e_1_3_2_1_20_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.clsr.2015.03.008"},{"key":"e_1_3_2_1_21_1","doi-asserted-by":"publisher","DOI":"10.1007\/s13347-013-0138-3"},{"key":"e_1_3_2_1_22_1","doi-asserted-by":"publisher","DOI":"10.1145\/3442188.3445921"},{"key":"e_1_3_2_1_23_1","doi-asserted-by":"publisher","DOI":"10.1007\/s00146-009-0208-3"},{"key":"e_1_3_2_1_24_1","volume-title":"Artificial intelligence, responsibility attribution, and a relational justification of explainability. Science and engineering ethics 26, 4","author":"Coeckelbergh Mark","year":"2020","unstructured":"Mark Coeckelbergh. 2020. Artificial intelligence, responsibility attribution, and a relational justification of explainability. Science and engineering ethics 26, 4 (2020), 2051\u20132068."},{"key":"e_1_3_2_1_25_1","volume-title":"Statement on algorithmic transparency and accountability. Commun. ACM","author":"ACM US Public\u00a0Policy Council","year":"2017","unstructured":"ACM US Public\u00a0Policy Council. 2017. Statement on algorithmic transparency and accountability. Commun. ACM (2017)."},{"key":"e_1_3_2_1_26_1","doi-asserted-by":"publisher","DOI":"10.1007\/s10676-016-9403-3"},{"key":"e_1_3_2_1_27_1","doi-asserted-by":"publisher","DOI":"10.1515\/popets-2015-0007"},{"key":"e_1_3_2_1_28_1","volume-title":"Algorithmic decision-making based on machine learning from Big Data: Can transparency restore accountability? Philosophy & technology 31, 4","author":"De\u00a0Laat B","year":"2018","unstructured":"Paul\u00a0B De\u00a0Laat. 2018. Algorithmic decision-making based on machine learning from Big Data: Can transparency restore accountability? Philosophy & technology 31, 4 (2018), 525\u2013541."},{"key":"e_1_3_2_1_29_1","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v28i1.8748"},{"key":"e_1_3_2_1_30_1","unstructured":"Filippo\u00a0Santoni de Sio and Giulio Mecacci. 2021. Four Responsibility Gaps with Artificial Intelligence: Why they Matter and How to Address them. Philosophy & Technology (2021), 1\u201328."},{"key":"e_1_3_2_1_31_1","doi-asserted-by":"crossref","unstructured":"Finale Doshi-Velez, Mason Kortz, Ryan Budish, Chris Bavitz, Sam Gershman, David O\u2019Brien, Kate Scott, Stuart Schieber, James Waldo, David Weinberger, 2017. Accountability of AI under the law: The role of explanation. arXiv preprint arXiv:1711.01134 (2017).","DOI":"10.2139\/ssrn.3064761"},{"key":"e_1_3_2_1_32_1","unstructured":"Upol Ehsan, Samir Passi, Q\u00a0Vera Liao, Larry Chan, I Lee, Michael Muller, Mark\u00a0O Riedl, 2021. The who in explainable AI: How AI background shapes perceptions of AI explanations. arXiv preprint arXiv:2107.13509 (2021)."},{"key":"e_1_3_2_1_33_1","volume-title":"Explainability Pitfalls: Beyond Dark Patterns in Explainable AI. arXiv preprint arXiv:2109.12480 (2021).","author":"Ehsan Upol","year":"2021","unstructured":"Upol Ehsan and Mark\u00a0O Riedl. 2021. Explainability Pitfalls: Beyond Dark Patterns in Explainable AI. arXiv preprint arXiv:2109.12480 (2021)."},{"key":"e_1_3_2_1_34_1","unstructured":"Madeleine\u00a0Clare Elish. 2019. Moral crumple zones: Cautionary tales in human-robot interaction. Engaging Science, Technology, and Society (2019)."},{"key":"e_1_3_2_1_35_1","unstructured":"European Commission. 2019. Liability for artificial intelligence and other emerging digital technologies. https:\/\/op.europa.eu\/en\/publication-detail\/-\/publication\/1c5e30be-1197-11ea-8c1f-01aa75ed71a1\/language-en\/format-PDF"},{"key":"e_1_3_2_1_36_1","unstructured":"European Commission. 2021. Communication From the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions: Fostering a European approach to Artificial Intelligence. https:\/\/eur-lex.europa.eu\/legal-content\/EN\/ALL\/?uri=COM:2021:205:FIN"},{"key":"e_1_3_2_1_37_1","volume-title":"Altruistic punishment in humans. Nature 415, 6868","author":"Fehr Ernst","year":"2002","unstructured":"Ernst Fehr and Simon G\u00e4chter. 2002. Altruistic punishment in humans. Nature 415, 6868 (2002), 137\u2013140."}
,{"key":"e_1_3_2_1_38_1","doi-asserted-by":"publisher","DOI":"10.1007\/s13347-019-00354-x"},{"key":"e_1_3_2_1_39_1","doi-asserted-by":"publisher","DOI":"10.1007\/s11023-018-9482-5"},{"key":"e_1_3_2_1_40_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.isci.2021.102252"},{"key":"e_1_3_2_1_41_1","volume-title":"Attributing blame to robots: I. The influence of robot autonomy. Human factors","author":"Furlough Caleb","year":"2019","unstructured":"Caleb Furlough, Thomas Stokes, and Douglas\u00a0J Gillan. 2019. Attributing blame to robots: I. The influence of robot autonomy. Human factors (2019), 0018720819880641."},{"key":"e_1_3_2_1_42_1","doi-asserted-by":"publisher","DOI":"10.1609\/aimag.v38i3.2741"},{"key":"e_1_3_2_1_43_1","volume-title":"Artificial moral and legal personhood","author":"Gordon John-Stewart","year":"2020","unstructured":"John-Stewart Gordon. 2020. Artificial moral and legal personhood. AI & Society (2020), 1\u201315."},{"key":"e_1_3_2_1_44_1","unstructured":"David\u00a0J Gunkel. 2017. Mind the gap: responsible robotics and the problem of responsibility. Ethics and Information Technology (2017), 1\u201314."},{"key":"e_1_3_2_1_45_1","volume-title":"Beyond the skin bag: On the moral responsibility of extended agencies. Ethics and information technology 11, 1","author":"Hanson F\u00a0Allan","year":"2009","unstructured":"F\u00a0Allan Hanson. 2009. Beyond the skin bag: On the moral responsibility of extended agencies. Ethics and information technology 11, 1 (2009), 91\u201399."},{"key":"e_1_3_2_1_46_1","volume-title":"Artificial agency, consciousness, and the criteria for moral agency: What properties must an artificial agent have to be a moral agent? Ethics and Information Technology 11, 1","author":"Himma Kenneth\u00a0Einar","year":"2009","unstructured":"Kenneth\u00a0Einar Himma. 2009. Artificial agency, consciousness, and the criteria for moral agency: What properties must an artificial agent have to be a moral agent? Ethics and Information Technology 11, 1 (2009), 19\u201329."},{"key":"e_1_3_2_1_47_1","volume-title":"How machine-learning recommendations influence clinician treatment selections: the example of the antidepressant selection. Translational psychiatry 11, 1","author":"Jacobs Maia","year":"2021","unstructured":"Maia Jacobs, Melanie\u00a0F Pradier, Thomas\u00a0H McCoy, Roy\u00a0H Perlis, Finale Doshi-Velez, and Krzysztof\u00a0Z Gajos. 2021. How machine-learning recommendations influence clinician treatment selections: the example of the antidepressant selection. Translational psychiatry 11, 1 (2021), 1\u20139."},{"key":"e_1_3_2_1_48_1","doi-asserted-by":"publisher","DOI":"10.1038\/s42256-019-0088-2"},{"key":"e_1_3_2_1_49_1","volume-title":"Computer systems: Moral entities but not moral agents. Ethics and information technology 8, 4","author":"Johnson G","year":"2006","unstructured":"Deborah\u00a0G Johnson. 2006. Computer systems: Moral entities but not moral agents. Ethics and information technology 8, 4 (2006), 195\u2013204."},{"key":"e_1_3_2_1_50_1","volume-title":"Technology with no human responsibility? Journal of Business Ethics 127, 4","author":"Johnson G","year":"2015","unstructured":"Deborah\u00a0G Johnson. 2015. Technology with no human responsibility? Journal of Business Ethics 127, 4 (2015), 707\u2013715."},{"key":"e_1_3_2_1_51_1","unstructured":"Shalmali Joshi, Oluwasanmi Koyejo, Warut Vijitbenjaronk, Been Kim, and Joydeep Ghosh. 2019. Towards realistic individual recourse and actionable explanations in black-box decision making systems. arXiv preprint arXiv:1907.09615 (2019)."},{"key":"e_1_3_2_1_52_1","doi-asserted-by":"publisher","DOI":"10.1145\/3442188.3445886"},{"key":"e_1_3_2_1_53_1","doi-asserted-by":"publisher","DOI":"10.1145\/3313831.3376219"},{"key":"e_1_3_2_1_54_1","unstructured":"Lauren Kirchner. 2020. Can Algorithms Violate Fair Housing Laws? The Markup. https:\/\/themarkup.org\/locked-out\/2020\/09\/24\/fair-housing-laws-algorithms-tenant-screenings."},{"key":"e_1_3_2_1_55_1","unstructured":"Kirsten Korosec. 2015. Volvo CEO: We Will Accept All Liability When Our Cars Are in Autonomous Mode. http:\/\/fortune.com\/2015\/10\/07\/volvo-liability-self-driving-cars\/."},{"key":"e_1_3_2_1_56_1","doi-asserted-by":"publisher","DOI":"10.1145\/3442188.3445937"},{"key":"e_1_3_2_1_57_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.artint.2021.103473"},{"key":"e_1_3_2_1_58_1","doi-asserted-by":"publisher","DOI":"10.1145\/3308532.3329465"},{"key":"e_1_3_2_1_59_1","doi-asserted-by":"publisher","DOI":"10.3389\/frobt.2021.756242"},{"key":"e_1_3_2_1_60_1","doi-asserted-by":"publisher","DOI":"10.1145\/3411764.3445260"},{"key":"e_1_3_2_1_61_1","volume-title":"The Mythos of Model Interpretability: In machine learning, the concept of interpretability is both important and slippery. Queue 16, 3","author":"Lipton C","year":"2018","unstructured":"Zachary\u00a0C Lipton. 2018. The Mythos of Model Interpretability: In machine learning, the concept of interpretability is both important and slippery. Queue 16, 3 (2018), 31\u201357."}
,{"key":"e_1_3_2_1_62_1","unstructured":"Peng Liu, Manqing Du, and Tingting Li. 2021. Psychological consequences of legal responsibility misattribution associated with automated vehicles. Ethics and information technology (2021), 1\u201314."},{"key":"e_1_3_2_1_63_1","doi-asserted-by":"publisher","DOI":"10.1163\/156853706776931358"},{"key":"e_1_3_2_1_64_1","doi-asserted-by":"publisher","DOI":"10.1080\/1047840X.2014.877340"},{"key":"e_1_3_2_1_65_1","doi-asserted-by":"publisher","DOI":"10.1006\/jesp.1996.1314"},{"volume-title":"Robotics and well-being","author":"Malle F","key":"e_1_3_2_1_66_1","unstructured":"Bertram\u00a0F Malle, Stuti\u00a0Thapa Magar, and Matthias Scheutz. 2019. AI in the sky: How people morally evaluate human and machine decisions in a lethal strike dilemma. In Robotics and well-being. Springer, 111\u2013133."},{"key":"e_1_3_2_1_67_1","volume-title":"2015 10th ACM\/IEEE International Conference on Human-Robot Interaction (HRI). IEEE, 117\u2013124","author":"Malle F","year":"2015","unstructured":"Bertram\u00a0F Malle, Matthias Scheutz, Thomas Arnold, John Voiklis, and Corey Cusimano. 2015. Sacrifice one for the good of many? People apply different moral norms to human and robot agents. In 2015 10th ACM\/IEEE International Conference on Human-Robot Interaction (HRI). IEEE, 117\u2013124."},{"key":"e_1_3_2_1_68_1","volume-title":"Do we adopt the intentional stance toward humanoid robots? Frontiers in psychology 10","author":"Marchesi Serena","year":"2019","unstructured":"Serena Marchesi, Davide Ghiglino, Francesca Ciardo, Jairo Perez-Osorio, Ebru Baykara, and Agnieszka Wykowska. 2019. Do we adopt the intentional stance toward humanoid robots? Frontiers in psychology 10 (2019), 450."},{"key":"e_1_3_2_1_69_1","volume-title":"The responsibility gap: Ascribing responsibility for the actions of learning automata. Ethics and information technology 6, 3","author":"Matthias Andreas","year":"2004","unstructured":"Andreas Matthias. 2004. The responsibility gap: Ascribing responsibility for the actions of learning automata. Ethics and information technology 6, 3 (2004), 175\u2013183."},{"key":"e_1_3_2_1_70_1","volume-title":"Explanation in artificial intelligence: Insights from the social sciences. Artificial intelligence 267","author":"Miller Tim","year":"2019","unstructured":"Tim Miller. 2019. Explanation in artificial intelligence: Insights from the social sciences. Artificial intelligence 267 (2019), 1\u201338."},{"key":"e_1_3_2_1_71_1","doi-asserted-by":"publisher","DOI":"10.1038\/s42256-019-0114-4"},{"key":"e_1_3_2_1_72_1","doi-asserted-by":"publisher","DOI":"10.1145\/3287560.3287574"},{"key":"e_1_3_2_1_73_1","unstructured":"Satya Nadella. 2016. The Partnership of the Future. https:\/\/slate.com\/technology\/2016\/06\/microsoft-ceo-satya-nadella-humans-and-a-i-can-work-together-to-solve-societys-challenges.html."},{"key":"e_1_3_2_1_74_1","volume-title":"Attributing agency to automated systems: Reflections on human\u2013robot collaborations and responsibility-loci. Science and engineering ethics 24, 4","author":"Nyholm Sven","year":"2018","unstructured":"Sven Nyholm. 2018. Attributing agency to automated systems: Reflections on human\u2013robot collaborations and responsibility-loci. Science and engineering ethics 24, 4 (2018), 1201\u20131219."},{"key":"e_1_3_2_1_75_1","volume-title":"Dissecting racial bias in an algorithm used to manage the health of populations. Science 366, 6464","author":"Obermeyer Ziad","year":"2019","unstructured":"Ziad Obermeyer, Brian Powers, Christine Vogeli, and Sendhil Mullainathan. 2019. Dissecting racial bias in an algorithm used to manage the health of populations. Science 366, 6464 (2019), 447\u2013453."},{"volume-title":"The black box society","author":"Pasquale Frank","key":"e_1_3_2_1_76_1","unstructured":"Frank Pasquale. 2015. The black box society. Harvard University Press."},{"key":"e_1_3_2_1_77_1","doi-asserted-by":"publisher","DOI":"10.1080\/09515089.2019.1688778"},{"volume-title":"Handbook of the Law of Torts. Vol.\u00a04","author":"William\u00a0Lloyd","key":"e_1_3_2_1_78_1","unstructured":"William\u00a0Lloyd Prosser. 1941. Handbook of the Law of Torts. Vol.\u00a04. West Publishing."},{"key":"e_1_3_2_1_79_1","doi-asserted-by":"publisher","DOI":"10.1177\/2053951720942541"},{"key":"e_1_3_2_1_80_1","doi-asserted-by":"publisher","DOI":"10.1145\/2939672.2939778"},{"key":"e_1_3_2_1_81_1","doi-asserted-by":"publisher","DOI":"10.1007\/s11023-019-09509-3"},{"key":"e_1_3_2_1_82_1","volume-title":"Agency Laundering and Algorithmic Decision Systems. In International Conference on Information. Springer, 590\u2013598","author":"Rubel Alan","year":"2019","unstructured":"Alan Rubel, Adam Pham, and Clinton Castro. 2019. Agency Laundering and Algorithmic Decision Systems. In International Conference on Information. Springer, 590\u2013598."}
,{"key":"e_1_3_2_1_83_1","doi-asserted-by":"publisher","DOI":"10.1038\/s42256-019-0048-x"},{"key":"e_1_3_2_1_84_1","doi-asserted-by":"publisher","DOI":"10.4018\/IJT.20210101.oa1"},{"key":"e_1_3_2_1_85_1","doi-asserted-by":"publisher","DOI":"10.3389\/frobt.2018.00015"},{"volume-title":"Moral dimensions","author":"Scanlon M","key":"e_1_3_2_1_86_1","unstructured":"Thomas\u00a0M Scanlon. 2009. Moral dimensions. Harvard University Press."},{"key":"e_1_3_2_1_87_1","first-page":"1085","article-title":"The intuitive appeal of explainable machines","volume":"87","author":"Selbst D","year":"2018","unstructured":"Andrew\u00a0D Selbst and Solon Barocas. 2018. The intuitive appeal of explainable machines. Fordham L. Rev. 87 (2018), 1085.","journal-title":"Fordham L. Rev."},{"key":"e_1_3_2_1_88_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2017.74"},{"key":"e_1_3_2_1_89_1","doi-asserted-by":"publisher","DOI":"10.1086\/659003"},{"key":"e_1_3_2_1_90_1","volume-title":"Mastering the game of Go without human knowledge. Nature 550, 7676","author":"Silver David","year":"2017","unstructured":"David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, 2017. Mastering the game of Go without human knowledge. Nature 550, 7676 (2017), 354\u2013359."},{"key":"e_1_3_2_1_91_1","doi-asserted-by":"publisher","DOI":"10.1007\/s10506-016-9192-3"},{"key":"e_1_3_2_1_92_1","doi-asserted-by":"publisher","DOI":"10.1111\/j.1468-5930.2007.00346.x"},{"key":"e_1_3_2_1_93_1","doi-asserted-by":"publisher","DOI":"10.1007\/s10676-006-9112-4"},{"key":"e_1_3_2_1_94_1","doi-asserted-by":"publisher","DOI":"10.5555\/3114838.3115264"},{"volume-title":"Robot rules: Regulating artificial intelligence","author":"Turner Jacob","key":"e_1_3_2_1_95_1","unstructured":"Jacob Turner. 2018. Robot rules: Regulating artificial intelligence. Springer."},{"key":"e_1_3_2_1_96_1","doi-asserted-by":"publisher","DOI":"10.1145\/3287560.3287566"},{"volume-title":"Moral Responsibility","author":"Poel Ibo Van\u00a0de","key":"e_1_3_2_1_97_1","unstructured":"Ibo Van\u00a0de Poel. 2011. The relation between forward-looking and backward-looking responsibility. In Moral Responsibility. Springer, 37\u201352."},{"volume-title":"Moral responsibility and the problem of many hands","author":"Poel Ibo Van\u00a0de","key":"e_1_3_2_1_98_1","unstructured":"Ibo Van\u00a0de Poel. 2015. Moral responsibility. In Moral responsibility and the problem of many hands. Routledge, 24\u201361."},{"key":"e_1_3_2_1_99_1","volume-title":"AI, and Humanity: Science, Ethics, and Policy","author":"van Wynsberghe Aimee","year":"2021","unstructured":"Aimee van Wynsberghe. 2021. Responsible Robotics and Responsibility Attribution. Robotics, AI, and Humanity: Science, Ethics, and Policy (2021), 239."},{"key":"e_1_3_2_1_100_1","doi-asserted-by":"publisher","DOI":"10.9785\/cri-2021-220402"},{"key":"e_1_3_2_1_101_1","doi-asserted-by":"publisher","DOI":"10.1145\/3351095.3372876"},{"key":"e_1_3_2_1_102_1","unstructured":"Sahil Verma, John Dickerson, and Keegan Hines. 2020. Counterfactual explanations for machine learning: A review. arXiv preprint arXiv:2010.10596 (2020)."},{"key":"e_1_3_2_1_103_1","first-page":"117","article-title":"Machines without principals: liability rules and artificial intelligence","volume":"89","author":"Vladeck C","year":"2014","unstructured":"David\u00a0C Vladeck. 2014. Machines without principals: liability rules and artificial intelligence. Wash. L. Rev. 89 (2014), 117.","journal-title":"Wash. L. Rev."},{"key":"e_1_3_2_1_104_1","first-page":"841","article-title":"Counterfactual explanations without opening the black box: Automated decisions and the GDPR","volume":"31","author":"Wachter Sandra","year":"2017","unstructured":"Sandra Wachter, Brent Mittelstadt, and Chris Russell. 2017. Counterfactual explanations without opening the black box: Automated decisions and the GDPR. Harv. JL & Tech. 31 (2017), 841.","journal-title":"Harv. JL & Tech."},{"key":"e_1_3_2_1_105_1","unstructured":"Julie Weed. 2021. R\u00e9sum\u00e9-Writing Tips to Help You Get Past the A.I. Gatekeepers. New York Times. https:\/\/www.nytimes.com\/2021\/03\/19\/business\/resume-filter-articial-intelligence.html."}
],"event":{"name":"FAccT '22: 2022 ACM Conference on Fairness, Accountability, and Transparency","sponsor":["ACM Association for Computing Machinery"],"location":"Seoul Republic of Korea","acronym":"FAccT '22"},"container-title":["2022 ACM Conference on Fairness, Accountability, and Transparency"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3531146.3534628","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,5,22]],"date-time":"2023-05-22T19:03:49Z","timestamp":1684782229000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3531146.3534628"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,6,20]]},"references-count":105,"alternative-id":["10.1145\/3531146.3534628","10.1145\/3531146"],"URL":"https:\/\/doi.org\/10.1145\/3531146.3534628","relation":{},"subject":[],"published":{"date-parts":[[2022,6,20]]},"assertion":[{"value":"2022-06-20","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}