{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2024,5,27]],"date-time":"2024-05-27T15:10:28Z","timestamp":1716822628449},"reference-count":17,"publisher":"Emerald","issue":"2","license":[{"start":{"date-parts":[[2019,5,13]],"date-time":"2019-05-13T00:00:00Z","timestamp":1557705600000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/www.emerald.com\/insight\/site-policies"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["JICES"],"published-print":{"date-parts":[[2019,5,13]]},"abstract":"<jats:sec>\n<jats:title>Purpose<\/jats:title>\n<jats:p>The purpose of this paper is to report on empirical work conducted to open up algorithmic interpretability and transparency. In recent years, significant concerns have arisen regarding the increasing pervasiveness of algorithms and the impact of automated decision-making in our lives. Particularly problematic is the lack of transparency surrounding the development of these algorithmic systems and their use. It is often suggested that to make algorithms more fair, they should be made more transparent, but exactly how this can be achieved remains unclear.<\/jats:p>\n<\/jats:sec>\n<jats:sec>\n<jats:title>Design\/methodology\/approach<\/jats:title>\n<jats:p>An empirical study was conducted to begin unpacking issues around algorithmic interpretability and transparency. The study involved discussion-based experiments centred around a limited resource allocation scenario which required participants to select their most and least preferred algorithms in a particular context. In addition to collecting quantitative data about preferences, qualitative data captured participants\u2019 expressed reasoning behind their selections.<\/jats:p>\n<\/jats:sec>\n<jats:sec>\n<jats:title>Findings<\/jats:title>\n<jats:p>Even when provided with the same information about the scenario, participants made different algorithm preference selections and rationalised their selections differently. 
The study results revealed diversity in participant responses but consistency in the emphasis they placed on normative concerns and the importance of context when accounting for their selections. The issues raised by participants as important to their selections resonate closely with values that have come to the fore in current debates over algorithm prevalence.<\/jats:p>\n<\/jats:sec>\n<jats:sec>\n<jats:title>Originality\/value<\/jats:title>\n<jats:p>This work developed a novel empirical approach that demonstrates the value in pursuing algorithmic interpretability and transparency while also highlighting the complexities surrounding their accomplishment.<\/jats:p>\n<\/jats:sec>","DOI":"10.1108\/jices-11-2018-0092","type":"journal-article","created":{"date-parts":[[2019,4,9]],"date-time":"2019-04-09T08:44:55Z","timestamp":1554799495000},"page":"210-228","source":"Crossref","is-referenced-by-count":9,"title":["\u201cIt would be pretty immoral to choose a random algorithm\u201d"],"prefix":"10.1108","volume":"17","author":[{"given":"Helena","family":"Webb","sequence":"first","affiliation":[]},{"given":"Menisha","family":"Patel","sequence":"additional","affiliation":[]},{"given":"Michael","family":"Rovatsos","sequence":"additional","affiliation":[]},{"given":"Alan","family":"Davoust","sequence":"additional","affiliation":[]},{"given":"Sofia","family":"Ceppi","sequence":"additional","affiliation":[]},{"given":"Ansgar","family":"Koene","sequence":"additional","affiliation":[]},{"given":"Liz","family":"Dowthwaite","sequence":"additional","affiliation":[]},{"given":"Virginia","family":"Portillo","sequence":"additional","affiliation":[]},{"given":"Marina","family":"Jirotka","sequence":"additional","affiliation":[]},{"given":"Monica","family":"Cano","sequence":"additional","affiliation":[]}],"member":"140","reference":[{"issue":"3","key":"key2019090408142981600_ref001","first-page":"973","article-title":"Seeing without knowing: limitations of the transparency ideal and its application to algorithmic 
accountability","volume":"20","year":"2016","journal-title":"New Media and Society"},{"key":"key2019090408142981600_ref002","first-page":"1","article-title":"Fairness in machine learning: lessons from political philosophy","volume-title":"Proceedings of the 1st Conference on Fairness, Accountability and Transparency (Proceedings of Machine Learning Research)","year":"2018"},{"key":"key2019090408142981600_ref003","volume-title":"An Introduction to Discourse Analysis","year":"1977"},{"issue":"3","key":"key2019090408142981600_ref004","doi-asserted-by":"crossref","first-page":"349","DOI":"10.1023\/B:MIND.0000035461.63578.9d","article-title":"On the morality of artificial agents","volume":"14","year":"2004","journal-title":"Minds and Machines"},{"issue":"2","key":"key2019090408142981600_ref005","doi-asserted-by":"crossref","first-page":"81","DOI":"10.1080\/19331681.2018.1448735","article-title":"Algorithms, bots, and political communication in the US 2016 election: the challenge of automated political communication for election law and administration","volume":"15","year":"2018","journal-title":"Journal of Information Technology and Politics"},{"issue":"3","key":"key2019090408142981600_ref006","first-page":"161","article-title":"Privacy concerns arising from internet service personalisation filters","volume":"45","year":"2016","journal-title":"ACM SIGCAS Computers and Society"},{"key":"key2019090408142981600_ref007","doi-asserted-by":"publisher","first-page":"391","DOI":"10.1145\/3091478.3098864","article-title":"Algorithmic fairness in online information mediating systems","volume-title":"Proceedings of the 2017 ACM on Web Science Conference","year":"2017"},{"issue":"2128","key":"key2019090408142981600_ref008","doi-asserted-by":"publisher","DOI":"10.1098\/rsta.2017.0359","article-title":"Algorithm-assisted decision-making in the public sector: framing the issues using administrative law rules governing discretionary 
power","volume":"376","year":"2018","journal-title":"Philosophical Transactions of the Royal Society of London A: Mathematical, Physical and Engineering Sciences"},{"issue":"6","key":"key2019090408142981600_ref009","doi-asserted-by":"crossref","first-page":"751","DOI":"10.1093\/scipol\/scs093","article-title":"Responsible research and innovation: from science in society to science for society, with society","volume":"39","year":"2012","journal-title":"Science and Public Policy"},{"key":"key2019090408142981600_ref010","volume-title":"The Black Box Society: The Secret Algorithms That Control Money and Information","year":"2015"},{"issue":"1","key":"key2019090408142981600_ref011","doi-asserted-by":"crossref","first-page":"5","DOI":"10.1007\/s10676-017-9430-8","article-title":"Society-in-the-loop: programming the algorithmic social contract","volume":"20","year":"2018","journal-title":"Ethics and Information Technology"},{"key":"key2019090408142981600_ref012","volume-title":"Qualitative Research Practice: A Guide for Social Science Students and Researchers","year":"2003"},{"key":"key2019090408142981600_ref013","unstructured":"SCOTUSblog (2017), \u201cLoomis vs WI\u201d, available at: www.scotusblog.com\/case-files\/cases\/loomis-v-wisconsin\/ (accessed 12\/03\/2018)."},{"key":"key2019090408142981600_ref014","volume-title":"Interpreting Qualitative Data: Methods for Analysing Talk, Text and Interaction","year":"2001"},{"issue":"3","key":"key2019090408142981600_ref015","doi-asserted-by":"crossref","first-page":"309","DOI":"10.1080\/713651562","article-title":"The tyranny of transparency","volume":"26","year":"2000","journal-title":"British Educational Research Journal"},{"issue":"3","key":"key2019090408142981600_ref016","article-title":"Discrimination in online ad delivery","volume":"11","year":"2013","journal-title":"ACM Queue"},{"key":"key2019090408142981600_ref017","volume-title":"Understanding Qualitative Research and 
Ethnomethodology","year":"2004"}],"container-title":["Journal of Information, Communication and Ethics in Society"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.emerald.com\/insight\/content\/doi\/10.1108\/JICES-11-2018-0092\/full\/xml","content-type":"application\/xml","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/www.emerald.com\/insight\/content\/doi\/10.1108\/JICES-11-2018-0092\/full\/html","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2019,11,7]],"date-time":"2019-11-07T22:56:43Z","timestamp":1573167403000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.emerald.com\/insight\/content\/doi\/10.1108\/JICES-11-2018-0092\/full\/html"}},"subtitle":["Opening up algorithmic interpretability and transparency"],"short-title":[],"issued":{"date-parts":[[2019,5,13]]},"references-count":17,"journal-issue":{"issue":"2","published-print":{"date-parts":[[2019,5,13]]}},"alternative-id":["10.1108\/JICES-11-2018-0092"],"URL":"https:\/\/doi.org\/10.1108\/jices-11-2018-0092","relation":{},"ISSN":["1477-996X","1477-996X"],"issn-type":[{"value":"1477-996X","type":"print"},{"value":"1477-996X","type":"print"}],"subject":[],"published":{"date-parts":[[2019,5,13]]}}}