AI systems for the public interest
Abstract
AI systems promise to reduce CO2 emissions, monitor biodiversity, support accessibility, and help analyse human rights violations. They are often seen as a crucial part of the solutions our times demand, from addressing the climate crisis and public health to improving social services and urban planning. References to artificial intelligence (AI) appear in many policy documents and debates, assigning it a strong potential to contribute to all these domains. AI for the public interest, and its close relatives, AI for (common or social) good, have become a common theme not only for tech companies, but also for political actors in the EU and, for instance, international NGO networks. However, the definition of the public interest is most often limited, at best, to references to AI ethics. Yet what a good use of AI and a purpose “for good” entail in development and implementation remains unclear. What is often missing is an understanding that spells out in practice what it means for the development and deployment of AI systems to serve the public interest, let alone a holistic view of the conditions under which AI can best serve collective well-being.
Papers in this special issue
-
Introduction to the special issue on AI systems for the public interest
Theresa Züger, Alexander von Humboldt Institute for Internet and Society, Berlin, Germany, theresa.zueger@hiig.de
Hadi Asghari, Alexander von Humboldt Institute for Internet and Society, Berlin, Germany, hadi.asghari@hiig.de
-
Contesting the public interest in AI governance
Tegan Cohen, Queensland University of Technology (QUT)
Nicolas P. Suzor, Queensland University of Technology (QUT)
-
Balancing public interest, fundamental rights, and innovation: The EU’s governance model for non-high-risk AI systems
Michael Gille, Hamburg University of Applied Sciences
Marina Tropmann-Frick, Hamburg University of Applied Sciences
Thorben Schomacker, Hamburg University of Applied Sciences
-
Restricting access to AI decision-making in the public interest: The justificatory role of proportionality and its balancing factors
Margaret Warthon, University of Groningen
-
Navigating data governance risks: Facial recognition in law enforcement under EU legislation
Gizem Gültekin-Várkonyi, University of Szeged
-
Public value in the making of automated and datafied welfare futures
Doris Allhutter, Austrian Academy of Sciences
Anila Alushi, University of Leipzig
Rafaela Cavalcanti de Alcântara, Austrian Academy of Sciences
Maris Männiste, Södertörn University
Christian Pentzold, University of Leipzig
Sebastian Sosnowski, Polish Academy of Sciences
-
Balancing efficiency and public interest: The impact of AI automation on social benefit provision in Brazil
Maria Alejandra Nicolás, Federal University of Latin American Integration
Rafael Cardoso Sampaio, Federal University of Paraná
-
Misguided: AI regulation needs a shift in focus
Agathe Balayn, Delft University of Technology (TU Delft)
Seda Gürses, Delft University of Technology (TU Delft)
-
On the (im)possibility of sustainable artificial intelligence
Rainer Rehak, Weizenbaum Institute for the Networked Society
-
Interview with Katharina Meyer: On the tension between public interest and profit maximisation in public interest tech
Theresa Züger, Alexander von Humboldt Institute for Internet and Society
-
Interview with Friederike Rohde: The environmental impact of AI as a public interest concern
Theresa Züger, Alexander von Humboldt Institute for Internet and Society
-
Interview with Ulrike Klinger and Philipp Hacker: Why the public interest gets lost in the AI gold rush
Theresa Züger, Alexander von Humboldt Institute for Internet and Society
Introduction to the special issue on AI systems for the public interest
AI systems promise to reduce CO2 emissions, monitor biodiversity, support accessibility, and help analyse human rights violations. They are often seen as a crucial part of the solutions our times demand, from addressing the climate crisis and public health to improving social services and urban planning. References to artificial intelligence (AI) appear in many policy documents and debates, assigning it a strong potential to contribute to all these domains. AI for the public interest, and its close relatives, AI for (common or social) good, have become a common theme not only for tech companies, but also for political actors in the EU and, for instance, international NGO networks. However, the definition of the public interest is most often limited, at best, to references to AI ethics. Yet what a good use of AI and a purpose “for good” entail in development and implementation remains unclear. What is often missing is an understanding that spells out in practice what it means for the development and deployment of AI systems to serve the public interest, let alone a holistic view of the conditions under which AI can best serve collective well-being.
In our research over the past four years in the interdisciplinary research group “Public Interest AI” at the Alexander von Humboldt Institute for Internet and Society, we have focussed our work on defining this concept in more detail, grounding it in existing political and legal theory and specifically addressing the democratic governance of, and conditions for, the development of AI in the public interest. The framework we propose consists of five plus one criteria, namely (1) public justification for the AI system, (2) an emphasis on equality, (3) a deliberation/co-design process, (4) technical safeguards, and (5) openness to validation (see Züger & Asghari, 2023). In the course of our work, we decided to include sustainability as a sixth criterion, as it is deeply connected to the long-term survival and well-being of humans. We developed these criteria by exploring the existing principles for the public interest in general and then transferring them to the process of AI development and use.1
Starting from a basic, widespread agreement, the concept of the “public” interest often appears as the counterpart of “individual”, “private”, or “group” interests. It often relates to goals and virtues other than profit and market activity, such as “the outcomes best serving the long-run survival and well-being of a social collective construed as a ‘public’” (Bozeman, 2007, p. 12). Different approaches to determining the public interest exist, which have been described as the “normative”, “consensualist”, and “process-based” approaches. Bozeman, like other political and legal theorists, has concluded that the public interest can never be determined universally, but is rather situation-dependent and dynamic: “What is ‘in the public interest’ changes not only case-by-case, but also within the same case as time advances and conditions change” (Bozeman, 2007, p. 13). This also holds true for public interest AI, where a judgement needs to be made based on changing conditions. Following Bozeman, this judgement requires entry points for deliberation, such as meaningful transparency. Since the public interest requires citizens to negotiate not as private individuals but as part of a collective public, exchanging views and experiences to collaboratively determine which solutions serve the public best, the process of AI development needs such entry points and must start from actual problems. It also needs to allow contestation and to improve development through an inclusive approach that caters to the needs of those affected.
As editors of this special issue, we called for contributions to the debate around AI systems in the public interest. We invited interdisciplinary work that connects public interest theory (Bozeman, 2007; Feintuck, 2004; Held, 1970; Wikimedia Deutschland, 2023; The Public AI Network, 2024) to the discourse about AI projects, and also work that reflects on the outcomes of projects and systems that are using AI to serve the public interest. Connecting to the focus of the Internet Policy Review journal, we were particularly interested in exploring how regulation and policies affect the conditions for public interest AI to be developed and maintained, and how such systems can be evaluated.
The debate on public interest AI is still a young and emerging one. It cannot simply refer back to a shared corpus of foundational texts or agreed-upon definitions of what the public interest is. Instead, we see this special issue as a way to help establish this field and its community by bringing together positions and approaches complementary to the perspective defined above. Throughout the process of assembling the contributions for this special issue, we made some observations about the state of this young discourse, which we want to share before introducing the contributions individually.
Observations on the state of the discourse
We see the aim of the discourse about public interest AI as strengthening the connection between theory and practice: helping to bring norms into practice, and connecting the increasing number of AI projects with societal goals to the theoretical background that has existed for many years. However, in our research project we have observed several obstacles to creating this connection.
First, many existing research projects and proclaimed “AI for good” systems lack reflection on public interest theory and offer no explicit explanation of why the project in question is assumed to be in the public interest. Often, a “gut feeling” of making some kind of societal contribution seems to have been considered sufficient by project developers. The difficulty of providing a universally shared definition is frequently used as a reason not to give any definition or criteria at all. From a societal perspective, that is a problematic strategy, since it leaves room for any project or system to claim to be in the public interest without giving reasons for this claim, thereby hindering serious discussion or proper contestation.
Another obstacle stems from the fact that many AI projects that aim to be in the public interest remain at an early research (or prototype) stage. While such projects are valuable contributions in and of themselves, the question of how to bring them into mainstream, practical use remains challenging. The incentives and organisation of scientific funding add to the challenge, with public funds often limited to a few years, after which the market is expected to take over.
Furthermore, research on the ethics of AI elaborates important values but mostly leaves a gap in implementation, and it often focuses on individual action rather than on the collective or public interest. We believe that a discourse focussed on the public interest can contribute to better intertwining ethics with actual AI development and implementation.
A last observation is that the discourse around public interest AI suffers from a disciplinary disconnect between computer science, law, and political science, and finally the application space, represented for instance by academic research projects, NGO practitioners, or public administration. By disciplinary disconnect, we mean that relevant work is done in all these disciplines, but there is no shared discourse allowing others to add their perspective. Each discipline sets different priorities in its approach, which at least partly ignores the work that other disciplines contribute.
While bridging these gaps in the debate will require more effort, we see our special issue as a contribution to the field, and more particularly to the policy field. As Internet Policy Review is a policy-focussed publication, most papers address public interest AI from a policy and governance angle. Taken together, the special issue thus presents an interdisciplinary collection of six research papers, three expert interviews, and two invited opinion pieces. These different formats fit the explorative character of the current state of play in this field.
On the contributions to this special issue
Focussing on public interest AI from a policy angle, the selection in this special issue shows that this debate is multi-sided: it can focus on applications that are at their core designed for a public interest purpose (which is a niche in AI development), or it can address the mainstream of AI development and ask how its implementation and governance affect the public interest, discussing public policies or legal aspects. While our own research focuses more on the first, this special issue mainly presents the second side of the debate, reflecting the contributions we received. Several of the articles reflect on how the AI Act and parts of the General Data Protection Regulation (GDPR), as key regulatory instruments related to AI governance, interact with the public interest. Other contributions examine two contentious applications of AI in public administration, namely the use of facial recognition in law enforcement and the automation of welfare. They show that it cannot be taken for granted that the way such systems are implemented supports a public interest mission.
The paper by Tegan Cohen and Nicolas P. Suzor titled “Contesting the public interest in AI governance” addresses how the governance of AI needs to be aligned more thoroughly with the public interest, arguing that public contestability is a critical attribute of such governance arrangements. The authors propose mechanisms to collectively contest decisions which do not track public interests as an important guardrail against erroneous, exclusionary, and arbitrary decision-making. This, however, requires capabilities for public contestation outside aggregative and deliberative processes. Drawing from democratic and regulatory theory, the piece explores three underlying requirements for public contestability in AI governance: (1) capabilities to organise; (2) separation of powers; and (3) access to alternative and independent information.
The paper by Michael Gille, Marina Tropmann-Frick, and Thorben Schomacker called “Balancing public interest, fundamental rights, and innovation: The EU’s governance model for non-high-risk AI systems” examines the European Union's approach to regulating non-high-risk AI systems established by the AI Act. Based on a doctrinal legal reconstruction of the rules for codes of conduct, and considering the EU's stated goal of achieving a market-oriented balance between innovation, fundamental rights, and the public interest, the paper explores three different perspectives. It starts with an analysis of specific regulatory components of the governance mechanism, followed by a reflection on the ethics and trustworthiness implications of the EU’s approach, and concludes with a case study analysis of AI applications for assistive purposes.
The contribution by Margaret Warthon titled “Restricting access to AI decision-making in the public interest: The justificatory role of proportionality and its balancing factors” addresses another important issue: the transparency of AI, which allows citizens to understand the “logic involved” behind decision-making processes using AI in public administration or services, as the General Data Protection Regulation (GDPR) requires. While this transparency is clearly in the public interest, public officials may justify restricting such access by invoking other public interest reasons, such as preventing system manipulation and maintaining government efficiency. The author suggests that the principle of proportionality can serve as a mechanism to address this tension, providing measures to mitigate and justify the burden imposed by public interest restrictions on data subjects.
The paper “Navigating data governance risks: Facial recognition in law enforcement under EU legislation” by Gizem Gültekin-Várkonyi reflects on the risks and potential of facial recognition for law enforcement in relation to the public interest. The author identifies four specific risks associated with the use of facial recognition technology by law enforcement agencies for public security within the frameworks of the GDPR and the AI Act. These risks concern compliance with fundamental data protection principles, namely data minimisation, purpose limitation, and data and system accuracy, as well as administrative challenges. The contribution argues for measures to broaden privacy impact assessments and enhance transparency, thus ensuring this AI-driven technology is used for public security in a manner that serves the public interest.
The paper by Doris Allhutter, Anila Alushi, Rafaela Cavalcanti de Alcântara, Maris Männiste, Christian Pentzold, and Sebastian Sosnowski titled “Public value in the making of automated and datafied welfare futures” looks at initiatives within the public sector across Europe that implement data-driven decision-making to enhance the administration of welfare. The paper recognises the growing body of research highlighting the impact and harms these systems have on citizens’ lives, which raises questions about the guiding ideas, values, and norms at the heart of current transformations in welfare. The authors explore viable concepts that can help draw normative conclusions about automated and datafied welfare. In particular, they take the “capability approach” and “buen vivir” as sources of inspiration to explore the conditions and procedures established by emerging welfare infrastructures meant to serve citizens while also fostering the overall well-being and prosperity of society.
The final paper looks at the same issue of AI and the welfare system, but from a Brazilian perspective. The paper called “Balancing efficiency and public interest: The impact of AI automation on social benefit provision in Brazil” by Maria Alejandra Nicolás and Rafael Cardoso Sampaio examines the implementation of AI systems by Brazil's National Social Security Institute (INSS) to automate the granting of social benefits. Using audit reports from government agencies, the paper explores the effects of this implementation: efficiency improvements as well as unintended consequences of the automation. Automatic denials and the creation of barriers for less digitally literate users, disproportionately affecting the most vulnerable populations, are discussed. Drawing from public interest considerations, the authors point to the need for transparency, public justification, adequate risk monitoring tools, governance design, and participation in the implementation of these systems to ensure that they serve the public interest and promote equity. The paper argues that without proper regulation and consideration of ethical principles, AI automation could exacerbate inequalities and undermine trust in public services.
Opinion pieces
The debate about public interest AI is not just theoretical. It can have an impact on outcomes, political strategies, and industrial developments. Thus, our special issue includes two invited opinion pieces focussing on two critical political questions around AI for the public interest: whether the focus of current AI regulation is appropriate considering the public interest, and whether sustainable AI is possible.
The op-ed titled “Misguided: AI regulation needs a shift in focus” written by Agathe Balayn and Seda Gürses argues that to serve the public interest, AI development as well as regulatory frameworks need to shift course immensely. In the opinion of the authors, the conditions of AI development are most crucial, and right now these conditions hinder the public interest. AI-based services are produced in agile production environments that are decades in the making and concentrated in the hands of a few companies. The article gives an overview of the socio-technical as well as political-economic concerns these environments raise, and argues why they may be a better target for policy and regulatory interventions.
The article by Rainer Rehak titled “On the (im)possibility of sustainable artificial intelligence” questions the status quo and advocates for a change of course regarding the production and use of AI to align it better with the public interest. The author raises the provocative hypothesis that sustainable AI cannot exist under today's conditions. His argument points to AI development being a problem rather than a solution to the climate crisis, contrary to what mainstream narratives around AI promise.
The two opinion pieces take a strong stance in the debate, and we support their effort in developing new ideas. As editors of this special issue, we are very happy to include these opinions, since they represent a helpful and legitimate position in the debate, although we may personally be somewhat more optimistic about positive use cases of AI in the public interest. We consciously invited these critical views, as they enlarge the horizon of viewpoints necessary for a diverse exchange on this topic.
Expert interviews
Accompanying the previous eight contributions are three interviews which we conducted with experts, each representing a valuable perspective that further widens the horizon. We approached these experts with overarching questions regarding three issues which we believe are omnipresent in the debate on AI in the public interest.
We spoke with Katharina Meyer, who is by training a historian of technology and science, and the Director of the Digital Infrastructure Insights Fund, a global initiative that provides a platform for academics and practitioners to better understand how open digital infrastructures are built and deployed. In her work, she has gained much experience regarding the tension between public interest and profit maximisation in public interest technology, which was the topic of our conversation.
We spoke with Friederike Rohde about the environmental impact of AI as a public interest concern. Rohde works at the Institute for Ecological Economy Research (IÖW) as a sociologist with a specialisation in technical sciences and has many years of experience with the ecological impacts of computing technology. Part of her efforts was the SustAIn project, through which she and her team developed a sustainability index for AI.
Lastly, we spoke with Ulrike Klinger and Philipp Hacker who shared their perspective on discerning the public interest in the AI gold rush. Ulrike Klinger is Professor for Political Theory and Digital Democracy at the European New School for Digital Studies (ENS) in Frankfurt (Oder), while Philipp Hacker is a Professor for Law and Ethics of the Digital Society at the European University Viadrina in Frankfurt (Oder). Klinger and Hacker highlight the risk of public interest AI simply becoming a marketing label, despite its potential, due to the inherent misalignment between for-profit goals and public interest aspirations. They are optimistic that the EU's AI Act is a step in the right direction, and suggest a number of additional steps to be considered.
Conclusions
We started this editorial by naming six criteria for public interest AI from our recent research project, and additionally identifying several gaps in the current discourse on the topic. The research contributions to this special issue expanded on our earlier research in two major ways, first by engaging more directly with the legal debate, and second by elaborating the justification and validation criteria in some of the most contested cases.
The opinion pieces treated the issues of justification and sustainability very critically; their authors hold that the current practice of AI is not in the public interest. The expert interviewees, while somewhat more positive, provided additional considerations regarding funding and democracy.
Overall, the arguments show that in the current state of affairs the public interest is well reflected neither in the regulation of AI nor in the implementation of AI in public services. While the public interest is often implicitly considered and cared for, in what might be seen as a renaissance of the concept, it is usually not explicitly discussed. This might in part be due to a fear of sharing transparently and clearly the internal thinking around technological decisions in the public sector, which might in turn be related to the broader risk-aversion among public officials (Mazzucato & Collington, 2023). This is a problem because society could learn not only from successful projects and their stories, but also from failures, missteps, and, probably most of all, from the myriad considerations and trade-offs between intent and outcome. Building public interest AI is a societal learning process, and without opening up this (heuristic) process for discussion and reflection, actual collective learning cannot take place.
We would like to end by thanking all the authors and contributors, as well as the numerous reviewers, and the Internet Policy Review’s editorial assistant Martha Crowe and its managing editor, Frédéric Dubois, for helping shape this special issue and contributing their insights to this important topic of our times.
References
Allhutter, D., Alushi, A., Cavalcanti de Alcântara, R., Männiste, M., Pentzold, C., & Sosnowski, S. (2024). Public value in the making of automated and datafied welfare futures. Internet Policy Review, 13(3). https://doi.org/10.14763/2024.3.1803
Balayn, A., & Gürses, S. (2024). Misguided: AI regulation needs a shift in focus. Internet Policy Review, 13(3). https://policyreview.info/articles/news/misguided-ai-regulation-needs-shift/1796
Bozeman, B. (2007). Public values and public interest: Counterbalancing economic individualism. Georgetown University Press. http://www.jstor.org/stable/j.ctt2tt37c
Cohen, T., & Suzor, N. P. (2024). Contesting the public interest in AI governance. Internet Policy Review, 13(3). https://doi.org/10.14763/2024.3.1794
Feintuck, M. (2004). ‘The public interest’ in regulation. Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199269020.001.0001
Gille, M., Tropmann-Frick, M., & Schomacker, T. (2024). Balancing public interest, fundamental rights, and innovation: The EU’s governance model for non-high-risk AI systems. Internet Policy Review, 13(3). https://doi.org/10.14763/2024.3.1797
Gültekin-Várkonyi, G. (2024). Navigating data governance risks: Facial recognition in law enforcement under EU legislation. Internet Policy Review, 13(3). https://doi.org/10.14763/2024.3.1798
Held, V. (1970). The public interest and individual interests. Basic Books.
Mazzucato, M., & Collington, R. (2023). The big con: How the consulting industry weakens our businesses, infantilizes our governments, and warps our economies. Penguin Publishing Group.
Nicolás, M. A., & Sampaio, R. C. (2024). Balancing efficiency and public interest: The impact of AI automation on social benefit provision in Brazil. Internet Policy Review, 13(3).
Rehak, R. (2024). On the (im)possibility of sustainable artificial intelligence. Internet Policy Review, 13(3).
The Public AI Network. (2024). Public AI. A new approach to public-interest AI investment [White paper]. https://bit.ly/publicAIpaper
Warthon, M. (2024). Restricting access to AI decision-making in the public interest: The justificatory role of proportionality and its balancing factors. Internet Policy Review, 13(3). https://doi.org/10.14763/2024.3.1801
Wikimedia Deutschland. (2023). Eight requirements: Making digital policy serve the public interest [Policy paper]. https://upload.wikimedia.org/wikipedia/commons/a/a8/Brochure_Eight_requirements._Making_digital_policy_serve_the_public_interest.pdf
Züger, T., & Asghari, H. (2023). AI for the public. How public interest theory shifts the discourse on AI. AI & Society, 38(2), 815–828. https://doi.org/10.1007/s00146-022-01480-5
Züger, T., Kuper, F., Fassbender, J., Katzy-Reinshagen, A., & Kühnlein, I. (2023). Handling the hype: Implications of AI hype for public interest tech projects. TATuP - Zeitschrift Für Technikfolgenabschätzung in Theorie Und Praxis, 32(3), 34–40. https://doi.org/10.14512/tatup.32.3.34
Footnotes
1. Our work encompasses theoretical contributions (Züger & Asghari, 2023; Züger et al., 2023) as well as a practitioner’s perspective, since we have carried out natural language processing (NLP) projects ourselves (see www.publicinterest.ai).