Abstract
[Motivation] Artificial intelligence (AI) creates many opportunities for public institutions, but the unethical use of AI in public services can reduce citizens’ trust. [Question] The aim of this study was to identify what requirements citizens have for trustworthy AI services in the public sector. The study consisted of 21 citizen interviews and a design workshop on four public AI services. [Results] The main finding was that all the participants wanted public AI services to be transparent. This transparency requirement covers a number of questions that trustworthy AI services must answer, such as what their purpose is. The participants also wanted to know what data the AI services use and from which sources the data were collected. They pointed out that AI must provide explanations that are easy to understand. We also distinguished two other important requirements: giving citizens control over the use of their personal data and involving humans in AI services. [Contribution] For practitioners, the paper provides a list of questions that trustworthy public AI services should answer. For the research community, it illuminates the transparency requirement of AI systems from the perspective of citizens.
Acknowledgements
We thank the Saidot team from spring 2019 for starting the project and assisting in the data collection of this study, J. Mattila for co-organizing and conducting parts of the interviews, and our participants for sharing their experiences.