
Ethics of AI Use in the Public Sector

The ethics of AI in the public sector covers a broad range of issues, including privacy and security, transparency, legitimacy and fairness. As AI and automated decision-making are implemented as part of decision-making processes at the local level, interest in the consequences of such new systems has grown.

Date: June 9, 2021, on Zoom

Time: 09.00-16.00, schedule (pdf 182 kB)  


This workshop will feature talks on the governance of artificial intelligence, the political implications of ethics in the discussion of AI use, an overview of AI’s adoption in the public sector, the ethics of automated criminal sentencing, conflicting norms and values in the case of Project Maven, and corporeal ethics in interaction design practice, among other topics. The workshop has been co-funded by the KTH Digitalization platform.

Registration

Register here

The workshop is free and open to all. A Zoom link will be sent out a couple of days prior to the workshop.


Abstracts

Emma Engström & Jennifer Viberg Johansson

An overview of AI’s adoption in the public sector in Europe so far and some of the ethical questions it raises

This presentation relates to preliminary findings from the project Predicting the diffusion of AI-applications (WASP-HS), which seeks to identify the key mechanisms underlying AI’s spread across different demographics and sectors. The project examines whether it is possible and useful to describe the adoption of AI applications in terms of key characteristics identified in previous research on the diffusion of innovations. Further, it investigates the features of the AI applications that have spread particularly fast. Here, we consider AI’s adoption in the public sector in Europe thus far, based on a review of the latest literature, as well as the ethically relevant questions it raises, such as: Who decides whether to adopt AI? Who controls the algorithms? Do different stakeholders know when they are exposed to AI-driven decision environments? We also examine whether previous studies of AI’s spread in the consumer domain can help identify ethical questions related to its adoption in the public sector.

Barbro Fröding

Ethical and social impacts of disruptive technologies in the Public Sector

I will talk about an EU project, ETAPAS, that I am involved in, and introduce two of the ethics deliverables that we are currently working on. The ETAPAS project explores the ethical, social and legal impact of the use of Artificial Intelligence, Robotics and Big Data in the Public Sector. The increasing use of so-called disruptive technologies, coupled with the particular responsibilities and demands placed on the public sector, makes this subject both pressing and interesting. After briefly sketching the background, I will present the generic code of conduct as well as the main insights from the literature review that we are working on.

Project website: www.etapasproject.eu/about/overview/

Markus Furendal

The Global Governance of Artificial Intelligence: Some Normative Concerns

The creation of increasingly complex Artificial Intelligence (AI) systems raises urgent questions about their ethical and social impact on society. Since this impact ultimately depends on political decisions about normative issues, political philosophers can make valuable contributions by addressing such questions. Currently, AI development and application are regulated through non-binding ethics guidelines penned by transnational entities, which suggest that the global governance of AI should ideally be democratically legitimate and fair. This paper sets out three desiderata that an account should satisfy when theorizing about what this means. First, we argue that the analysis of democratic values, political entities, and decision-making should be done in a holistic way. Second, fairness is not only about how AI systems treat individuals, but also about how the benefits and burdens of transformative AI are distributed. Finally, justice requires that governance mechanisms are not limited to AI technology, but are incorporated into a range of basic institutions. Thus, rather than offering a substantive theory of democratic and fair AI governance, our contribution is metatheoretical: we propose a theoretical framework which sets up certain normative boundary conditions for a satisfactory account.

Rachael Garrett

Reflections on a Corporeal Ethics in Interaction Design Practice

The spread of artificial intelligence (AI) within the field of human-computer interaction (HCI) has stoked debate surrounding the ethical implications of these technologies. Ethics is generally regarded as a regulatory concern, often approached as an arbitrary checklist of concerns to be addressed during the design process. Yet these emerging technologies also present a unique means of supporting, or restricting, the capabilities and freedoms of the designers who develop such interactive systems.

We are currently working to define the qualities of a Corporeal Ethics, a conceptualisation of ethics that centres on the lived body as our means to interact with technologies. We seek to understand if and how such an ethics can play a rich, generative role in the process of designing interactions with autonomous systems. We briefly reflect on several themes that have emerged from our ongoing design process: the capabilities, freedoms and restrictions embedded in technologies that support design practice; the socio-digital materiality of the design process; and the emotional work of caring for and attending to each other within the context of design.

Irja Malmio

Ethics as an Enabler and a Constraint - Conflicting Norms and Values through the case of Project Maven

In April 2017, the United States Department of Defense announced that Project Maven, an AI-enabled information technology, was going to be developed by a civilian contractor, namely Google. This initiative was met with massive protests from Google employees, which eventually led to the cancellation of the contract. To defend the initial undertaking of a military project by a civilian contractor, proponents stressed the urgency of developing AI-enabled technology to gather more accurate information, which in turn would enable the military to achieve its objectives while minimizing civilian losses. The civilian community responded to this argument with an ethical discourse centered on the policy of not contributing to technological solutions that could be considered “doing evil”, in which discussions of the ethical implications of AI surfaced in a fateful manner.

How can we understand the schism between a civilian contractor and a military stakeholder in the scenario described above? What were the main objectives behind the contrasting stances taken by each side? The purpose of this presentation is to describe how conflicting logics of ethics function as both a constraint and an enabler, where the overall dilemma turns on how to weigh two specific values against each other: the need for national defense against a deeply felt desire to avoid contributing to war. A further issue is how to monitor and regulate emerging technologies in a way that adheres to a morally acceptable consensus, whether such a consensus is feasible to achieve, and, above all, what a normative baseline for conflicting values should be based upon.

Maria Nordström

On the legitimacy of AI for public use: why it is not guaranteed and why it matters

AI systems can be, and have been, introduced as decision-making procedures by public institutions. Such AI systems will most likely be developed by private firms and thus adhere to general AI regulation. However, once a public entity implements an AI solution, I argue that the public entity becomes responsible for the consequences of said implementation. It is therefore warranted to consider the issue of legitimacy in these cases. Political legitimacy can be understood as the circumstances or conditions that entitle institutions to rule and exercise coercion or political power. There are various accounts of the source of political legitimacy, such as accounts that build on the notion of consent, acceptance by its subjects, the social contract, or the idea of public reason. On a procedural view, a decision is legitimate if it comes about through a process of democratic decision-making; such a decision may nonetheless not be fully just. Similarly, just decisions might not be procedurally legitimate. Additionally, on most accounts, beneficial consequences do not in themselves constitute a source of legitimacy. Therefore, I argue that the commonly assumed expectations that AI be transparent, explainable and fair do not guarantee the legitimacy of decisions taken by AI systems. Moreover, it is unclear to what extent transparency and explainability (assuming they are achievable) ensure procedural legitimacy, if at all. Concerns regarding legitimacy therefore need to be further discussed and highlighted in the discourse on AI policies. If there are no policies in place that ensure the legitimacy of decisions made by AI systems, the practice of algorithmic decision-making risks undermining political institutions.

Malin Rönnblom

The political implications of ethics in the discussion of AI use in the public sector

Over the past 30 years, the Swedish public sector has become increasingly marketized, and today a large number of welfare services are provided by private enterprises. This shift not only includes the outsourcing of public sector responsibilities and tasks to private providers, but also a transformation of the governing practices of the public sector, introducing market rationalities like competition and efficiency. Market logics have also taken ‘the political out of politics’, to use the argument of Chantal Mouffe (2013), meaning that political conflicts between different interests have to an increasing extent been replaced by the administration of ‘best practices’. Parallel to this neoliberal restructuring of the welfare state, digitalisation and IT systems have been applied in the public sector to an increasing degree, and with accelerating speed since the introduction of artificial intelligence and automation systems.

There is an ongoing discussion regarding the ethical challenges and dilemmas of the implementation of AI systems and automation in society at large, in particular in relation to the practices of the public sector. Researchers and developers are called on to safeguard democracy, exercise responsibility and make fundamental human values the basis of their design and implementation of AI systems (Dignum 2019). In this paper, I discuss whether this focus on ethics risks enhancing the process of depoliticization that already prevails in the public sector, and what this could mean for the possibilities of raising questions around justice, power and privilege when implementing AI systems in the public sector.

References

Dignum, Virginia (2019) Responsible Artificial Intelligence: How to Develop and Use AI in a Responsible Way. Springer.

Mouffe, Chantal (2013) Agonistics: Thinking the World Politically. London: Verso.

Isaac Taylor

Justice by Algorithm: The Ethics of Automated Criminal Sentencing

Criminal justice systems have traditionally relied heavily on human decision-making. However, new technologies are increasingly supplementing the human role in this sector. Algorithms that calculate a defendant’s risk of re-offending are consulted by judges when they make sentencing and parole decisions. While concerns about the accuracy and bias of existing software of this sort have been raised, it might be thought that, as these technologies improve and surpass human capacities, their use will become morally permissible (or even morally required). I urge caution in accepting this conclusion. Criminal sentencing, I suggest, should ideally have an expressive function: it should involve a form of condemnation. When algorithms, even the best ones imaginable, are appealed to, this function may be compromised. This point does not tell against the use of algorithms tout court. It does, however, suggest that significant limits need to be placed on the way in which algorithms are developed and deployed. In particular, it may have implications for the role that the private sector should play in designing risk-assessment software.

Content owner: filosofi@abe.kth.se
Belongs to: Division of Philosophy
Last modified: 2021-06-23