Abstract
The ethical or responsible use of Artificial Intelligence (AI) is central to numerous civilian AI governance frameworks and to the scholarly literature. Not so in defence: only a handful of governments have engaged with the ethical questions arising from the development and use of AI in and for defence. This paper fills a critical gap in the AI ethics literature by providing evidence on the perception of ethical AI within a national defence institution. Our qualitative case study analyses how the collective Italian Defence leadership thinks about deploying AI systems and their ethical implications. We interviewed 15 leaders about the impact of AI on the Italian Defence, key ethical challenges, and responsibility for future action. Our findings suggest that Italian Defence leaders are keen to address ethical issues but encounter challenges in developing a system governance approach to implement ethical AI across the organisation. Guidance on risk management and human–machine interaction, applied education, interdisciplinary research, and European Union guidance on AI defence ethics are critical elements for Italian Defence leaders as they adapt their organisational processes to the AI-enabled digital transformation.
Data Availability
The datasets generated and analysed during the current study are not publicly available due to privacy reasons and the Chatham House Rule. Further information on the research and the results can be obtained from the corresponding author upon reasonable request.
Notes
For this paper, we construe ethics broadly on the basis of norms and values, i.e. AI ethics includes questions about what we should or ought to do, but also broader concerns about the social, political, and cultural impact of AI, as well as the risks arising from its use.
The interviewed leaders included 250 CIOs (49%) and CTOs (51%) from Australia, Belgium, Canada, Czech Republic, Denmark, France, Germany, Netherlands, New Zealand, Norway, Poland, the UK, and the US.
The Codice dell’ordinamento militare is a law of the Italian Republic about organisation, functions and norms for the Armed Forces (decreto legislativo 15 marzo 2010, n. 66).
Given the scope of this paper, we do not specifically discuss the question of lethal autonomous weapons systems, which has been covered extensively in the literature (see Bode, 2020).
The seven principles are: Human agency and oversight; Technical robustness and safety; Privacy and data governance; Transparency; Diversity, non-discrimination and fairness; Environmental and societal well-being; and Accountability.
On European digital sovereignty, see i.a. Floridi, L. (2020). The fight for digital sovereignty: What it is, and why it matters, especially for the EU. Philosophy & Technology, 33(3), 369–378. https://doi.org/10.1007/s13347-020-00423-6
The tools include an ethical AI system development checklist, an ethical AI risk matrix, and a guide for contractors to develop a formal Legal, Ethical and Assurance Programme Plan for AI programmes where an ethical risk assessment is above a certain threshold.
AI strategy for the German Armed Forces: https://www.bundeswehr.de/resource/blob/156024/d6ac452e72f77f3cc071184ae34dbf0e/download-positionspapier-deutsche-version-data.pdf; AI Strategy for the French Defence: https://www.defense.gouv.fr/sites/default/files/aid/Report%20of%20the%20AI%20Task%20Force%20September%202019.pdf
Centro Alti Studi per la Difesa: Master in Strategic Leadership & Digital Transformation. (n.d.). Retrieved 28 February 2022, from https://www.difesa.it/SMD_/CASD/Pagine/default.aspx
CASD: Executive Master in Strategic Leadership & Digital Transformation. (n.d.). Retrieved 28 February 2022, from https://www.difesa.it/SMD_/Comunicati/Pagine/CASD_Executive_Master_in_Strategic_Leadership_and_Digital_Transformation.aspx
Because participants were not familiar with the academic literature on the topic, two examples of AI ethics frameworks were added to the annex of the interview questionnaire: the EU AI High-Level Expert Group guidelines and the general guidelines for the use of AI for national defence purposes by Taddeo et al. (2021), reported in the Appendix.
See, i.a., the chapter on expert interviews and elite interviews by Donders and Van Audenhove (2019) in The Palgrave Handbook of Methods for Media Policy Research, edited by Van den Bulck et al.
Original quote, translated from Italian: “At the basis of artificial intelligence there are humans. If a human designs an algorithm, the artificial intelligence has, for example, so to speak, different values, an aspect of cultural context. Let me give an example: if an algorithm is designed in a country with a background of values that are not those of the Western world, the machine could make wrong decisions […].”
Original quote, translated from Italian: “These new trends arise above all from the world of research; we must know and understand them first within our own organisation and then export the debate outside. The military environment alone will never be able to manage the complexity we will have to face, above all because the ethical implications will be decidedly important; there will probably be very strong resistance within society.”
Original quote, translated from Italian: “The European Union […] achieves greater integration between military and civilian missions, something that NATO does not have. Normally, NATO plays an important role, but in this case, when ethical issues are discussed, it could be exactly the opposite; this is where the European Union can play a leading role.”
Original quote, translated from Italian: “We must act quickly, because if we do not know how to handle these tools, the tool, so to speak, will overwhelm us.”
Original quote, translated from Italian: “We must begin to train the technicians, assuming they do not already have, by their nature, that kind of grounding in philosophy, that is, in moral aspects. […] What is needed here is not the nerd, but the philosopher nerd.”
Original quote, translated from Italian: “[…] whoever moves first can probably catalyse the other states. That is, if the European Union arrived at a European synthesis document before the others, it could probably also be an element of attraction for the others […].”
References
Aghemo, R. (2021, February 26). Report on the responsible use of Artificial Intelligence in Defence (III). Medium. https://medium.datadriveninvestor.com/report-on-the-responsible-use-of-artificial-intelligence-in-defence-iii-6712d8a03a9
Azafrani, R. & Gupta, A. (2023). Bridging the civilian-military divide in responsible AI principles and practices. In: Responsible design & use of AI in the military domain. TU Delft. Retrieved 19 February 2023, from https://www.tudelft.nl/en/2023/tbm/digital-ethics-centre/digital-ethics-centre-organises-academic-forum-reaim-2023
Bode, I. (2020). Weaponised artificial intelligence and use of force norms. The Project Repository Journal, 6(July), 140–143.
De Cremer, D. (2021). With AI entering organizations, responsible leadership may slip! AI and Ethics. https://doi.org/10.1007/s43681-021-00094-9
Defence Science and Technology Group. (2021, January 27). A method for ethical AI in defence. https://www.dst.defence.gov.au/publication/ethical-ai
Devitt, S. K., & Copeland, D. (2021). Australia’s approach to AI governance in security and defence. ArXiv:2112.01252 [Cs]. http://arxiv.org/abs/2112.01252
Donders, K., & Van Audenhove, L. (2019). Talking to people III: Expert interviews and elite interviews. In H. Van den Bulck, M. Puppis, K. Donders, & L. Van Audenhove (Eds.), The Palgrave handbook of methods for media policy research. Springer International Publishing. https://doi.org/10.1007/978-3-030-16065-4
European Commission. (2020, July 17). Assessment List for Trustworthy Artificial Intelligence (ALTAI) for self-assessment. https://digital-strategy.ec.europa.eu/en/library/assessment-list-trustworthy-artificial-intelligence-altai-self-assessment
European Commission. (2021). Proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union legislative acts. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A52021PC0206
European Parliament. (2018). European parliament resolution of 12 September 2018 on autonomous weapon systems (2018/2752(RSP)). https://www.europarl.europa.eu/doceo/document/TA-8-2018-0341_EN.html
European Parliament. (2021). European Parliament resolution of 20 January 2021 on Artificial Intelligence: Questions of interpretation and application of international law in so far as the EU is affected in the areas of civil and military uses and of state authority outside the scope of criminal justice (2020/2013(INI)). https://www.europarl.europa.eu/doceo/document/TA-9-2021-0009_EN.html
Fanni, R., Steinkogler, V. E., Zampedri, G., & Pierson, J. (2022). Enhancing human agency through redress in Artificial Intelligence Systems. AI & SOCIETY. https://doi.org/10.1007/s00146-022-01454-7
Finland EU Presidency. (2019). Digitalization and Artificial Intelligence in Defence (p. 2). Retrieved 28 February 2022, from https://eu2019.fi/documents/11707387/12748699/Digitalization+and+AI+in+Defence.pdf/151e10fd-c004-c0ca-d86b-07c35b55b9cc/Digitalization+and+AI+in+Defence.pdf
Floridi, L. (2019). Translating principles into practices of digital ethics: Five risks of being unethical. Philosophy & Technology, 32(2), 185–193. https://doi.org/10.1007/s13347-019-00354-x
Floridi, L., & Cowls, J. (2019). A unified framework of five principles for AI in society. Harvard Data Science Review, 1(1). https://doi.org/10.1162/99608f92.8cd550d1
Green, B. (2021). The contestation of tech ethics: A sociotechnical approach to technology ethics in practice. Journal of Social Computing, 2(3), 209–225. https://doi.org/10.23919/JSC.2021.0018
Hagendorff, T. (2020). The ethics of AI ethics: An evaluation of guidelines. Minds and Machines. https://doi.org/10.1007/s11023-020-09517-8
Hsieh, H.-F., & Shannon, S. E. (2005). Three approaches to qualitative content analysis. Qualitative Health Research, 15(9), 1277–1288. https://doi.org/10.1177/1049732305276687
Independent High-Level Expert Group on AI (AI HLEG). (2019). Ethics guidelines for trustworthy AI. European Commission. https://ec.europa.eu/futurium/en/ai-alliance-consultation.1.html
Italian Defence General Staff. (2021). Future scenarios concept: Trends and implications for security and defence. https://www.difesa.it/SMD_/Staff/Sottocapo/UGID/Dottrina/Documents/Future_Scenarios_Concept.pdf
Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399. https://doi.org/10.1038/s42256-019-0088-2
Johnson, J. (2023). The AI commander problem: Ethical, political, and psychological dilemmas of human-machine interactions in AI-enabled warfare. Journal of Military Ethics, 0(0), 1–26. https://doi.org/10.1080/15027570.2023.2175887
Kuckartz, U. (2019). Qualitative text analysis: A systematic approach. In G. Kaiser & N. Presmeg (Eds.), Compendium for Early Career Researchers in Mathematics Education (pp. 181–197). Springer International Publishing. https://doi.org/10.1007/978-3-030-15636-7_8
Littig, B., & Pöchhacker, F. (2014). Socio-translational collaboration in qualitative inquiry: The case of expert interviews. Qualitative Inquiry, 20(9), 1085–1095. https://doi.org/10.1177/1077800414543696
Macchiarini Crosson, D. & Blockmans, S. (2022). The Five ‘I’s of EU defence. Center for European Policy Studies. https://www.ceps.eu/ceps-publications/the-five-is-of-eu-defence/
Ministry of Defence. (n.d.). Defence Artificial Intelligence Strategy. GOV.UK. Retrieved 20 November 2022, from https://www.gov.uk/government/publications/defence-artificial-intelligence-strategy
Mittelstadt, B. (2019). Principles alone cannot guarantee ethical AI. Nature Machine Intelligence, 1(11), Article 11. https://doi.org/10.1038/s42256-019-0114-4
Morgan, F. E., Boudreaux, B., Lohn, A. J., Ashby, M., Curriden, C., Klima, K., & Grossman, D. (2020). Military applications of Artificial Intelligence: Ethical concerns in an uncertain world. Santa Monica, CA: RAND Corporation.
Muti, K. (2021). Stronger together—Italy: A lame workhorse in the European security and defense race. Institut Montaigne. https://www.institutmontaigne.org/en/blog/stronger-together-italy-lame-workhorse-european-security-and-defense-race
NATO Review. (2021, October 25). An Artificial Intelligence strategy for NATO. https://www.nato.int/docu/review/articles/2021/10/25/an-artificial-intelligence-strategy-for-nato/index.html
O’Connor, C., & Joffe, H. (2020). Intercoder reliability in qualitative research: Debates and practical guidelines. International Journal of Qualitative Methods, 19, 1609406919899220. https://doi.org/10.1177/1609406919899220
Petit, N., & De Cooman, J. (2020). Models of law and regulation for AI (SSRN Scholarly Paper ID 3706771). Social Science Research Network. https://doi.org/10.2139/ssrn.3706771
Peukert, C., & Kloker, S. (2020). Trustworthy AI: How ethics washing undermines consumer trust. In WI2020 Zentrale Tracks (pp. 1100–1115). GITO Verlag. https://doi.org/10.30844/wi_2020_j11-peukert
Roberts, H., Cowls, J., Hine, E., Mazzi, F., Tsamados, A., Taddeo, M., & Floridi, L. (2021). Achieving a ‘good AI society’: Comparing the aims and progress of the EU and the US. Science and Engineering Ethics, 27(6), 68. https://doi.org/10.1007/s11948-021-00340-7
Ryan, M. (2020). In AI we trust: Ethics, Artificial Intelligence, and reliability. Science and Engineering Ethics, 26(5), 2749–2767. https://doi.org/10.1007/s11948-020-00228-y
Sanderson, C., Douglas, D., Lu, Q., Schleiger, E., Whittle, J., Lacey, J., Newnham, G., Hajkowicz, S., Robinson, C., & Hansen, D. (2022). AI ethics principles in practice: Perspectives of designers and developers (arXiv:2112.07467). arXiv. http://arxiv.org/abs/2112.07467
Seawright, J., & Gerring, J. (2008). Case selection techniques in case study research: A menu of qualitative and quantitative options. Political Research Quarterly, 61(2), 294–308. https://doi.org/10.1177/1065912907313077
Schmid, S., Riebe, T., & Reuter, C. (2022). Dual-Use and Trustworthy? A mixed methods analysis of AI diffusion between civilian and defense R&D. Science and Engineering Ethics, 28(2), 12. https://doi.org/10.1007/s11948-022-00364-7
World Bank. (2021). Military expenditure (current USD)—European Union [Data set, based on the Stockholm International Peace Research Institute (SIPRI) Yearbook: Armaments, disarmament and international security]. https://data.worldbank.org/indicator/MS.MIL.XPND.CD?locations=EU
Taddeo, M., McNeish, D., Blanchard, A., & Edgar, E. (2021). Ethical principles for Artificial Intelligence in national defence. Philosophy & Technology, 34(4), 1707–1729. https://doi.org/10.1007/s13347-021-00482-3
Taylor, T. (2019). Artificial Intelligence in defence. The RUSI Journal, 164(5–6), 72–81. https://doi.org/10.1080/03071847.2019.1694229
Ufficio Generale Innovazione Difesa. (2022). L’impatto delle Emerging & Disruptive Technologies (EDTs) sulla Difesa. https://www.difesa.it/SMD_/Staff/Sottocapo/UGID/Pagine/Centro_Innovazione_Difesa.aspx
U.S. Department of Defense. (2020). United States: DOD adopts ethical principles for Artificial Intelligence. Retrieved 28 February 2022, from https://www.defense.gov/News/Releases/Release/Article/2091996/dod-adopts-ethical-principles-for-artificial-intelligence/
Wasilow, S., & Thorpe, J. B. (2019). Artificial Intelligence, robotics, ethics, and the military: A Canadian perspective. AI Magazine, 40(1), 37–48.
Winfield, A. F. T., & Jirotka, M. (2018). Ethical governance is essential to building trust in robotics and Artificial Intelligence systems. Philosophical Transactions of the Royal Society a: Mathematical, Physical and Engineering Sciences, 376(2133), 20180085. https://doi.org/10.1098/rsta.2018.0085
Yin, R. K. (2015). Qualitative research from start to finish. Guilford Publications.
Acknowledgements
The authors thank the interviewees for their time and commitment to participating in the study, the Centro Alti Studi per la Difesa for its administrative support, and the five anonymous reviewers for their intellectual and editorial contributions.
Author information
Contributions
Both authors contributed equally to this work.
Ethics declarations
Conflict of Interest
Fernando Giancotti is a former President of the Centro Alti Studi per la Difesa (CASD) and is a member of the CASD PhD board; he receives no compensation as a member of the board.
Supplementary Information
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Fanni, R., Giancotti, F. Ethical Artificial Intelligence in the Italian Defence: a Case Study. DISO 2, 29 (2023). https://doi.org/10.1007/s44206-023-00056-0