Four Responsibility Gaps with Artificial Intelligence: Why they Matter and How to Address them
Philosophy & Technology, volume 34, pages 1057–1084 (2021)
Abstract
The notion of a “responsibility gap” with artificial intelligence (AI) was originally introduced in the philosophical debate to indicate the concern that “learning automata” may make it more difficult or impossible to attribute moral culpability to persons for untoward events. Building on literature in moral and legal philosophy and the ethics of technology, the paper proposes a broader and more comprehensive analysis of the responsibility gap. The responsibility gap, it is argued, is not one problem but a set of at least four interconnected problems – gaps in culpability, moral accountability, public accountability, and active responsibility – caused by different sources, some technical, others organisational, legal, ethical, and societal. Responsibility gaps may also arise with non-learning systems. The paper clarifies which aspects of AI may cause which gap in which form of responsibility, and why each of these gaps matters. It offers a critical review of partial and unsatisfactory attempts to address the responsibility gap: those which present it as a new and intractable problem (“fatalism”), those which dismiss it as a false problem (“deflationism”), and those which reduce it to only one of its dimensions or sources and/or present it as a problem that can be solved by simply introducing new technical and/or legal tools (“solutionism”). The paper also outlines a more comprehensive approach to addressing the responsibility gaps with AI in their entirety, based on the idea of designing socio-technical systems for “meaningful human control”, that is, systems aligned with the relevant human reasons and capacities.
Introduction
In 2004, Andreas Matthias introduced what he called the problem of the “responsibility gap” with “learning automata” (Matthias, 2004). In a nutshell, intelligent systems equipped with the ability to learn from interaction with other agents and the environment will make human control over, and prediction of, their behaviour very difficult if not impossible; yet human responsibility requires knowledge and control. Therefore, we as humanity face a dilemma: either we go on with the design and use of learning systems, thereby giving up on the possibility of holding human persons responsible for their behaviour, or we preserve human responsibility, and thereby give up on the introduction of learning systems in society. Matthias’ formulation of the responsibility gap has been quite influential, especially in relation to the development of autonomous weapon systems (Sparrow, 2007; Human Rights Watch, 2015).
More recently, the concern with “responsibility gaps” has been raised more generally in relation to artificial intelligence (AI), that is, any technique designed to solve problems traditionally assigned to human intelligence (Amoroso & Tamburrini, 2019). Risks of gaps have been identified in relation not only to the learning capacities of AI but first and foremost to the opacity, complexity, and unpredictability that these systems generally display (Mittelstadt et al., 2016). In fact, the question of the extent to which persons can or should maintain responsibility for the behaviour of AI has become one of the most discussed questions, if not the most discussed, in the growing field of the so-called ethics of AI (Braun et al., 2020; Coeckelbergh, 2019; Nyholm, 2018).
However, whereas “responsibility” is known in philosophy and law to be an ambiguous and polysemantic term (Hart, 1968; Feinberg, 1970), this complexity is rarely reflected in the debates on responsibility for the behaviour of systems that include AI. Discussions about “responsibility” or “accountability gaps” are therefore sometimes partial, as they usually appeal to an insufficiently specified notion of responsibility. Moreover, the focus on learning automata or “autonomous systems” may be too narrow. Responsibility gaps are due to a multiplicity of factors and are sometimes only aggravated by the presence of machines that learn and act on their own. Footnote 1 In fact, sufficiently interconnected socio-technical systems with limited artificial intelligence and capacity to learn, but relying on a complex texture of human agents and technical systems, such as bureaucracies or corporations, might also generate responsibility gaps. Considering the different causes of responsibility gaps related to AI and automation beyond “autonomy” and “learning” will help carve the problem at its joints and provide better insights towards possible solutions.
The first goal of this paper is thus to reframe the responsibility gap discussion in terms that are better aligned with the categorisation of responsibility concepts in moral and legal philosophy. By doing so, we will be able to see more clearly what kind of responsibility is threatened by which aspect of automation and why this matters. Footnote 2 We take some concepts and distinctions from philosophical, legal, and sociological theories of responsibility (Bovens, 1998; Collingridge, 1980; Hart, 1968; Pesch, 2015; van de Poel & Sand, 2018), and we use them to identify four different kinds of responsibility gaps: the culpability gap, the moral accountability gap, the public accountability gap, and the active responsibility gap. We will also identify, for each of these, different possible causes, integrating Matthias’ classic analysis, which identified the learning capacities of automata as the main source of responsibility gaps generically conceived. On closer inspection, Matthias addressed in his work what we will call the culpability gap – the risk that no human agent might be legitimately blamed or held culpable for the unwanted outcomes of actions mediated by AI systems. Gaps in this kind of responsibility have already received some attention, both from a moral (Matthias, 2004; Sparrow, 2007) and a legal perspective (Calo, 2015; Pagallo, 2013). Attention has also been paid to the “accountability gap” in relation to autonomous weapon systems (AWS) (Heyns, 2013; Meloni, 2016), and more generally within the discussion on the explainability of algorithms and AI (Mittelstadt et al., 2016; Doran et al., 2017; Pasquale, 2016). However, we propose to distinguish two forms of the accountability gap: the “public accountability gap”, i.e. citizens not being able to get an explanation for decisions taken by public agencies, and a broader “moral accountability gap”, i.e. the reduction of human agents’ capacity to make sense of – and explain to each other – the “logic” of their behaviour, due to the mediation of opaque, unexplainable algorithms and complex autonomous systems and/or the lack of appropriate psychological or social incentives or institutional spaces that promote these explanations. One particularly important form of the moral accountability gap concerns the difficulty for engineers and other agents involved in the process of technological development to systematically discuss with one another their understanding of the goal and meaning of this process. Finally, the “active responsibility gap” has not, to our knowledge, been addressed as such so far. This gap consists in the risk that persons designing, using, and interacting with AI may not be sufficiently aware, capable, and motivated to see and act according to their moral obligations regarding the behaviour of the systems they design, control, or use. In particular, this gap concerns the obligation to ensure that these systems do not impact negatively on the rights and interests of other persons and, ideally, that they positively contribute to their well-being instead. Distinguishing the four responsibility gaps and their various sources is the focus of the first part of this paper.
In the second part, we present some of the common approaches that have so far been taken towards addressing responsibility gaps. We show how these approaches offer a partial and limited understanding of the responsibility gap and thus offer solutions that apply only to specific aspects of it. Those who will be called here “fatalists” (Matthias, 2004; Sparrow, 2007) tend to focus on too limited an understanding of the responsibility gap, that is, a gap in culpability for the behaviour of learning technological automata. Those who will be called here “deflationists” (Simpson & Müller, 2016) underestimate the novelty of the AI revolution and its implications for culpability attributions in morality (blameworthiness) and the law (liability); they also seem to underestimate the risks of gaps in the moral and political accountability of system designers as well as gaps in their and other agents’ active responsibility for the behaviour of artificial intelligence. Promoters of “explainable AI” and other scientific and technological improvements tend to ignore the psychological, social, and political dimensions of the interaction with AI, thereby running the risk of embracing some form of “technical solutionism” (Stilgoe, 2017), by which all the moral and social problems of human responsibility for the behaviour of artificial intelligence can be fixed simply by improving the workings of AI techniques. Lawyers and policy-makers proposing the revision of current legal liability regimes (including the extension of strict and product liability regimes, and “electronic personhood”) may either underestimate the importance of maintaining some form of human moral responsibility for the behaviour of artificial intelligence or recognise this need without saying how moral and social practices – and not only legal rules – should change in order to govern a responsible transition to the use of AI. We call this the risk of “legal solutionism”. One result of this critical review is the recognition that the different notions of responsibility, though distinct, are also interconnected, and that addressing one kind of gap often requires attention to one or more of the others. This suggests the necessity of a more integrated and comprehensive approach.
We conclude the paper by sketching such a more encompassing approach which, as we argue, can contribute to addressing a larger number of gaps in their interconnections. We will suggest that one recent approach to “meaningful human control” (MHC) (Mecacci & Santoni de Sio, 2020; Santoni de Sio & van den Hoven, 2018) might be suitable to frame the several responsibility gaps within a broader scheme and to offer principles for addressing them transversally. Future research will develop this proposal in more detail.
Varieties of Responsibility Gaps
The term “responsibility” has different meanings. H.L.A. Hart’s classical account (Hart, 1968) lists four senses (role-responsibility, causal responsibility, capacity-responsibility, liability-responsibility). Based among others on the work of John Gardner (2007), Mark Bovens (1998), and Ibo van de Poel (2015), we work with a revised and modified list of four forms of responsibility that are particularly relevant to – yet not limited to – the context of automation and artificial intelligence: culpability, moral accountability, public accountability, and active responsibility. The next four sub-sections present these four forms of responsibility and identify the specific related challenges presented, or furthered, by the introduction of artificial intelligence (Table 1). Some examples will serve as illustration.
Table 1 Types of responsibility gaps
Culpability Gaps
When things go wrong and important interests or rights such as physical integrity or life are infringed, we, as victims and as society, not only want to understand what happened and why. We also want to know whether the harm was the result of someone’s wrong behaviour, and if it turns out that the wrong behaviour is one for which there is no justification or excuse, we want the author to be condemned, sanctioned, or even punished for their behaviour. This, in a nutshell, is culpability or blameworthiness. Whether and to what extent attributions of culpability make sense has been the main subject of the centuries-long philosophical debate on free will, typically in the light of causal determinism (is it fair to blame each other if all our actions are necessarily caused by previous physical and mental events?) or, more generally, in the light of a view of human behaviour shaped by (neuro)scientific knowledge (if human action can be fully explained by behavioural/social/neuroscience, what is left of moral culpability?) (for a general discussion along these lines, see, e.g. Pereboom, 2006). However, many philosophers and most lawyers and laypersons do believe that (fair) social and legal practices of attribution of culpability should be maintained and promoted (as opposed to just relying on some form of social and psychological education or therapy) (Morse, 2006), at least to the extent that they are the legitimate expression of appropriate moral sentiments by the wronged individuals and society at large (Strawson, 1962), that they reinforce the social commitment to shared norms (Sie, 2005), and, possibly most importantly, that they contribute to controlling and reducing undesirable behaviour. Similarly, state-administered punishment for serious criminal behaviour may be morally defensible and even desirable insofar as it gives effectiveness to the expression of public condemnation (Feinberg, 1965) and serves the utilitarian goals of discouraging similar behaviour by the defendants themselves in the future and by other citizens more generally. Finally, (public) attributions of culpability serve to compensate the victims – symbolically, or even materially, typically in the case of compensation to plaintiffs in civil litigation.
AI-based systems may put culpability practices under stress in different ways, preventing the realisation of one or more of their goals. Consider, as an example, automated driving systems (ADS). First, ADS may make the network of agents involved in driving more complex, simply because more agents are involved and/or new forms of interaction are created. For instance, a vehicle may be operated by a driver D1, with the assistance of the automated driving system AS, produced by the car manufacturer X, powered by digital systems developed by the company Y, possibly including some form of machine learning developed by the company Z, and enriched by data coming from different sources, including the driving experience of drivers D2, D3…Dn; vehicles in this system are in principle subject to a standardisation process carried out by the agency S, the traffic is regulated by the governmental agency G, drivers are trained and licensed by the agency L, etc. Second, some specific features of present-day learning AI systems may make this interaction particularly unpredictable – typically when the vehicles’ performance is potentially re-designed second by second on the basis of new data acquisition and processing – and opaque, if the reasoning scheme underlying the system’s actions is not easily accessible to its controllers, regulators, or even its designers.
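To make this distribution of roles concrete, the following Python sketch models the chain of agents described above as a simple data structure. It is purely illustrative: the actor labels (D1, X, Y, Z, S, G, L) come from the example in the text, while the roles and contributions are our own hypothetical glosses. Listing the contributors to a single untoward outcome shows why tracing it back to one agent with full knowledge and control is far from straightforward.

```python
from dataclasses import dataclass

@dataclass
class Actor:
    name: str          # e.g. "driver D1", "manufacturer X"
    role: str          # what the actor contributes to the driving task
    contribution: str  # the specific (hypothetical) input that shaped the outcome

# Hypothetical reconstruction of the socio-technical chain described in the text.
chain = [
    Actor("driver D1", "operator", "supervised the trip and set the destination"),
    Actor("manufacturer X", "OEM", "integrated the automated driving system AS"),
    Actor("company Y", "software supplier", "developed the digital driving stack"),
    Actor("company Z", "ML supplier", "trained the learning components"),
    Actor("drivers D2...Dn", "data sources", "provided driving data used for retraining"),
    Actor("agency S", "standardisation body", "certified the vehicle type"),
    Actor("agency G", "traffic regulator", "set the rules the system must follow"),
    Actor("agency L", "licensing body", "trained and licensed the driver"),
]

def contributors(outcome: str) -> None:
    """Print every actor whose contribution is entangled with the outcome."""
    print(f"Outcome: {outcome}")
    for actor in chain:
        print(f" - {actor.name} ({actor.role}): {actor.contribution}")

contributors("avoidable road crash at an intersection")
```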
Agents operating in such a socio-technical system (designers, programmers, drivers, regulators, bystanders, etc.) may more easily find themselves acting wrongly, for instance, causing an avoidable road crash, while at the same time having a legitimate excuse: nobody, and certainly not they, could have reasonably predicted the relevant circumstances or reasonably avoided the outcome, and they are therefore not open to legitimate blame (Matthias, 2004; Sparrow, 2007). We call this the “culpability gap”.
The culpability gap was not created by the introduction of “learning automata” (machine learning) and their inherent unpredictability, as it has been framed by some authors (e.g. Matthias, 2004). As a matter of fact, other intelligent, autonomous entities with “no soul to blame and no body to kick” (Asaro, 2012), such as bureaucracies and corporations, may in themselves generate gaps in culpability. Footnote 3 This has classically been identified as “the problem of many hands” (Bovens, 1998; Thompson, 1980). Artificial intelligence plays, however, an important role by contributing to creating new versions of the phenomenon and thus making it more visible. Also, the use of artificial intelligence and data-driven machine learning in decision-making introduces a new element of technical opacity and lack of explainability that makes it more difficult for individual persons to satisfy the traditional conditions for moral and legal culpability: intention, foreseeability, and control. Footnote 4
Culpability gaps are concerning insofar as the more the persons designing, regulating, and operating the system can legitimately (and possibly systematically) avoid blame for their wrong behaviour, the less these agents will be incentivised to prevent such behaviour. In fact, they will arguably have fewer incentives to strive for a high(er) level of safety, awareness, attention, motivation, and skilfulness. Also, victims of unjust harm will be less likely to receive compensation. Finally, it might become more difficult for persons more generally to make sense of their moral sentiments in relation to wrongs and accidents and to direct their reactive attitudes towards a legitimate target. This may impoverish the human capacity to express moral judgement and may feed helplessness and moral scepticism towards the possibility of understanding and rectifying wrongdoing. As noted by Danaher (2016), it may also fuel the desire to find a scapegoat to satisfy these feelings.
Moral Accountability Gaps
Culpability is a particularly serious form of (moral) responsibility, but it is not the only one. Individual persons are often called to respond for their choices or actions in less threatening ways, for instance, when family, friends, or acquaintances ask them why-questions, not necessarily with the intention of judging or blaming them but possibly just to engage in a conversation and to better understand each other’s reasons and expectations. Why were you late for the appointment, why did you start taking guitar classes, why did you turn down that job offer…? We will call this expectation to answer (at least some) why-questions from other persons “moral accountability”, to distinguish it from the “public accountability” discussed below. Moral accountability has been presented in the philosophical literature on moral responsibility as a key aspect for the justification and understanding of moral responsibility practices (Wolf, 1990; McKenna, 2012). The legal philosopher John Gardner calls it “basic responsibility” insofar as he sees it as the core of what it means to be a reflective person in society (Gardner, 2007). In this sense, being accountable, unlike being culpable, is something to be desired rather than avoided, insofar as it is a constitutive part of being able to reflect on one’s actions and to participate in meaningful social relations. It also helps persons see events in the world as connected to their rational capacities, thereby supporting their sense of agency and responsibility (Honoré, 1999). It is a classic view of human responsibility, which can be traced back to the old Socratic motto “know thyself”. In fact, Gardner calls this the “Aristotelian story” about responsibility, insofar as it focuses on persons’ capacity to make sense of their own and others’ actions and choices as something connected to their abstract reasons (as opposed, for instance, to just physical or biological events).
Moral accountability also has an instrumental value. The process of exchanging questions and reasons helps find explanations for things that have happened and reinforces trust and social connections between agents; by exposing persons to potential requests for explanation and justification, it also tends to reduce undesired behaviour, by pushing persons to become more clearly aware of the impact of their actions on others and therefore motivated to prevent unwanted outcomes (and potential blameworthiness). In relation to engineering practices, Genus and Stirling (2018) have recently stressed the importance of Collingridge’s proposal to “engage more strongly with accountability in debates bearing on key elements of responsible innovation” (Collingridge, 1980, p. 62). In their view, also inspired by Lindblom (1990), accountability is a key tool for enhancing the reflexivity of the agents involved in the design and development of (new) technologies, and for promoting responsiveness between them and those who will be affected by their creations. In the literature on “responsible research and innovation”, the importance of accountability practices has recently been emphasised and categorised under the label of “responsiveness between stakeholders” (Stilgoe et al., 2013).
Moral accountability may be blurred in different ways by the introduction of artificial intelligence. First, in general, similar to what was observed above about culpability, by contributing to a more complex chain of decision-making and action, AI may make it more difficult for individual agents to make sense of the reasons why a certain decision was taken, what exactly their role in the operation was, and, in general, whose reasons and what reasoning governed the system they are part of. Moreover, data-driven machine (deep) learning, due to its intrinsic opacity, may make a system’s behaviour extra hard to understand and explain. In addition, the whole process of technology development and production is arguably pervaded by an increasing pressure towards deploying proprietary technologies that, even when working through mechanisms accessible to their developers and programmers, are designed to be inaccessible to public scrutiny and to the users themselves (Pasquale, 2015). Technology developers, driven both by the desire to minimise industrial espionage and to maximise customer loyalty (by, e.g. binding customers to their assistance network), will usually avoid sharing data and engineering insights.
An example of AI affecting moral accountability specifically due to opacity and complexity is that of a medical doctor using an AI-driven diagnostic system. Such systems are usually based on deep learning techniques that require thorough training on a dataset whose nature is well known and clear: the system trains on a set of well-known, well-established cases before being applied to new and unknown ones. The way knowledge is represented in the machine and the exact way the machine distinguishes between a positive and a negative diagnosis are not only inaccessible to the doctor who uses the system but also, to an important extent, to those who designed it (Castelvecchi, 2016). Therefore, the capacity of the different agents, including the users, to make sense of the “logic” of the system’s behaviour may be weakened and sometimes lost, together with their capacity and willingness to engage in a meaningful conversation about their role and the responsibility that comes with it. This may create different kinds of problems, depending on the (professional) context and on the roles, responsibilities, and human and social relations pertaining to it. The general concern is that AI may make individual persons less able to understand, explain, and reflect upon their own and other agents’ behaviour. Let us call this the moral accountability gap.
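As a minimal, hypothetical illustration of the opacity at stake (the data, model size, and library choice are ours, not taken from the paper or from any real diagnostic product), the sketch below trains a small neural network on synthetic “diagnostic” data. The prediction it returns is fully determined by its learned weight matrices, yet nothing in those matrices reads as a clinical rationale that the doctor, or even the developer, could inspect and narrate.

```python
# A minimal sketch of the opacity problem, assuming scikit-learn and NumPy are available.
# The "diagnostic" data are synthetic and purely illustrative.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))              # 500 hypothetical patients, 20 features each
y = (X[:, :5].sum(axis=1) > 0).astype(int)  # a hidden rule standing in for "ground truth"

model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
model.fit(X, y)

new_patient = rng.normal(size=(1, 20))
print("Predicted probability of positive diagnosis:",
      model.predict_proba(new_patient)[0, 1])

# The model's entire "knowledge" is a stack of numeric weight matrices.
# They determine the answer, but they do not explain it in clinical terms.
for i, w in enumerate(model.coefs_):
    print(f"Layer {i} weight matrix shape: {w.shape}")
```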
Public Accountability Gaps
One specific form of accountability attaches to politicians, civil servants, and other agents invested with a public function: public accountability. Public accountability, Footnote 5 in Mark Bovens’ definition, is a “relationship between an actor and a forum, in which the actor has an obligation to explain and to justify his or her conduct, the forum can pose questions and pass judgement, and the actor may face consequences” (Bovens, 2007). According to Bovens, effective mechanisms of accountability may enhance both the effectiveness of a complex public decision-making system and its compliance with liberal-democratic values (Bovens, 1998, 2007). Accountability promotes democratic control (transparency) and limits abuses of power (corruption), but it also makes institutions more effective: providing administrators with information about their own functioning and forcing them to reflect on their successes and failures will eventually allow and encourage them and others to improve their future performance.
There has recently been a lively legal and political debate on the extent to which the introduction of AI-based automated (administrative) decision-making is desirable and legitimate (European Commission, 2019; Hildebrandt, 2019; Noto La Diega, 2018). It has also been doubted that the GDPR provisions guarantee sufficient transparency and accessibility of these mechanisms for those who are subject to them (Edwards & Veale, 2017; Wachter et al., 2017). Most of these discussions point to the fact that algorithmic decision-making is often difficult to understand for – and explain to – human agents, because AI operates in ways that differ from human reasoning and are sometimes inscrutable to persons: the so-called “black box” problem (Castelvecchi, 2016). However, Noto La Diega (2018) correctly notices that issues of explainability may arise not only from technical black boxes but also from what he calls organisational and legal black boxes, created or aggravated by the introduction of AI in public administration.
Zouridis et al. (2019) have further explained the sources of these organisational and legal black boxes and their relevance to public accountability. Traditionally, public agencies were organised as “street-level” bureaucracies. Processes were managed by individual case managers who had direct contact with individual citizens and substantial discretionary powers (Bovens & Zouridis, 2002; Lipsky, 1980). With the introduction of digital decision-making systems, the discretionary powers of street-level professionals have been disciplined. However, this has also greatly shifted the locus of administrative discretion from individual public officers to IT experts, responsible for programming the decision-making process and translating legislation into software, and to data analysts, who are responsible for the acquisition and analysis of data (Zouridis et al., 2019). Moreover, these “system-level” bureaucracies are part of larger networks and chains of delegation in which data are exchanged and reused (Van Eck, 2018). This shift raises three challenges for public accountability. First, development in information technologies is often outsourced to private parties or tech giants, such as Google, which are not politically accountable and may not be willing to disclose critical information about the functioning of their systems (Pasquale, 2015). For example, in her book Automating Inequality, Virginia Eubanks (2018) tells the stories of private contractors not being able and/or willing to disclose the reasons for the failures and mistakes in the digital systems used by some US states for welfare benefit allocation procedures. Second, more generally, the work of software engineers and data professionals in public organisations is usually neither visible to nor subject to public and legal scrutiny. Finally, far more data are exchanged between many different (public) organisations than in the past. In this way, the introduction of AI makes the “problem of many hands” (Bovens, 1998; Thompson, 1980) more acute: data coming from different sources are introduced and enriched at different points in the data chain. Individual citizens may have a hard time finding out whom they should turn to if data are incorrect, corrupted, or biased as the collective outcome of a series of minor contributions. Technical, organisational, and legal black boxes are the sources of what we call the public accountability gap with artificial intelligence.
Active Responsibility Gap
The philosophical literature on the professional responsibility of engineers usually distinguishes between “active” and “passive” responsibility (Bovens, 1998). In a nutshell, active responsibility is forward-looking and concerns the goals, values, and (legal) norms that professionals such as engineers are supposed to promote and comply with, as well as the consequences they need to prevent and avoid. Footnote 6 Passive responsibility is backward-looking and concerns the moral and legal consequences engineers must face in case something goes wrong. The three forms of moral responsibility discussed above – culpability, moral accountability, and public accountability – are all forms of passive responsibility. The legal duty to provide high standards of safety and the so-called corporate social responsibility of companies are typical examples of active responsibility.
One well-known problem with the active responsibility of engineers is that while engineers arguably have an individual active responsibility to promote societal goods, their work is most often embedded in networks of different agents and institutions (Swierstra & Jelsma, 2006). They may, for instance, be involved in projects connecting scientists and their academic institutions with technological companies operating on the market. As seen above, this may create problems for the attribution of passive responsibility: in case something goes wrong, it may be easy (and sometimes legitimate) for individual engineers to shift responsibility to other agents or institutions in their network, and it may sometimes even be the case that nobody can legitimately be held responsible for one specific unwanted event (Van de Poel et al., 2015). What is often overlooked is that the networked nature of engineering work may also create issues with the attribution of active responsibility. As noted by Pesch (2015), engineers may not have a clear and consistent representation of what their (social) role is – are they scientists, technicians, business persons? What are the goals and values they should strive for: truth, innovation, market share? They may not even have clear and shared systems of principles, norms, and rules to follow in their profession, and/or the capacity or motivation to reflect upon and interpret these rules in concrete cases. Footnote 7
Based on the general framework above, we propose that the introduction of AI may create two different but related sets of issues. First, engineers and other agents involved in the development and use of technology may not be (fully) aware of their respective moral and social obligations towards other agents. Think, as an example, of a manager of an IT company who, as a result of her personal education or the engineering and business culture in which she has been raised, is genuinely and sincerely convinced that: (a) she is benefitting the public by providing them with more comfort through the use of the company’s new products and that (b) it is not her responsibility to try to minimise the possible negative impact of the use of these products on people’s well-being, privacy, or political freedom. Footnote 8 In van de Poel and Sand’s (2018) classification of active responsibility, this is a gap in “obligation”. Second, engineers and other agents involved in the development and use of technology may not be sufficiently able or motivated to fulfil an obligation they may be well aware of. Think, as an example, of military personnel using a new AI-based weapon system: while being perfectly aware of their general obligation to use the system in compliance with the requirements of international law, they may end up making illegal use of the system, due to insufficient technical training and/or not (yet) having been able to develop a sufficient capacity to resist the pressure, coming from superiors and their environment more generally, to use the technology in a certain way. In van de Poel and Sand’s (2018) classification of active responsibility, this is a gap in “virtue”, i.e. the concrete capacity and inclination to perform according to certain norms and principles. Let us call these two issues, taken together, the active responsibility gap.
Partial Answers to Responsibility Gaps: “Fatalism”, “Deflationism”, and the Risks of “Solutionism”
In the previous section, we have seen how considering different senses of responsibility allows us to highlight the existence of four different kinds of responsibility gaps with AI (see Table 1). The problem of the responsibility gap with AI, as it turns out, is not one problem but a set of at least four interconnected problems – gaps in culpability, moral accountability, public accountability, and active responsibility. Moreover, these gaps are caused by different sources, some of which are old, i.e. the complexity of social and technical systems; some new, i.e. the data-driven learning features of present-day AI; some more technical, i.e. the intrinsic opacity of algorithmic decision-making; some more political and economic, i.e. the implicit privatisation of public administration; and some more societal, i.e. engineers’ and other actors’ lack of awareness of and/or capacity to comply with their (new) moral, legal, and societal obligations. Sufficient awareness of this complexity has been missing in the debate so far. Current debates have tended to emphasise mainly one of these problems and one or two of their sources at a time, thereby often not giving sufficient attention to the broader picture. In this second part of the paper, we offer a critical review of a representative sample of the literature on the responsibility gap, with a twofold goal: first, to show the extent to which this literature misses out on the complexity of the responsibility gap, and second, to suggest that taking such a partial approach to the responsibility gap brings the risk of offering not only an incomplete picture of the problem but also a distorted one. We present three possible distortions in the presentation of the responsibility gap with AI, which we call, respectively: “fatalism”, i.e. the idea that the responsibility gap is a new and intractable problem; “deflationism”, i.e. the idea that the responsibility gap is neither new nor a problem; and “solutionism”, i.e. the idea that the responsibility gap is a problem that can be solved by simply introducing new technical and/or legal tools (Fig. 1). To be clear, we are not claiming that every author discussing one aspect of the responsibility gap necessarily also falls victim to one of these distortions: as a matter of fact, many authors are well aware that they are addressing only one aspect of a more complex problem. What we are suggesting is that, in the long run, focusing only on these partial analyses may provide a misleading picture of the responsibility gap as well as hinder the creation of an appropriate response. In the last part, we will preliminarily consider how a more encompassing approach, such as designing for “meaningful human control”, might represent such an alternative, more appropriate answer.
Fig. 1
Fatalism and Deflationism
At the two extremes of the debate on responsibility gaps are those whom we will call, respectively, fatalists and deflationists. The fatalist approach is best captured by Matthias’ (2004) paper on the responsibility gap for learning automata. According to Matthias, the introduction of learning automata in society poses an unprecedented challenge to moral responsibility (culpability) and presents us with a moral dilemma. Culpability requires knowledge; learning systems make knowledge (i.e. prediction of outcomes) impossible; therefore, learning automata make it impossible to (legitimately) attribute culpability to human persons for the actions mediated by learning systems. We as a society are then facing a dilemma: either we introduce learning systems and give up on culpability, or we maintain culpability and give up on the introduction of learning systems in society (Matthias, 2004). Matthias’ formulation of the responsibility gap has been quite influential, especially in relation to the development of autonomous weapon systems (Sparrow, 2007). In this perspective, the responsibility gap created by learning automata (AI) is a new, serious, and intractable problem. We have argued above that the culpability gap is not completely new, and we will suggest below how it could potentially be addressed by designing socio-technical systems for a new notion of human control.
At the other extreme of the debate are those who believe that the culpability gap for learning automata, and AI more generally, is not a problem after all, and in any case not a new one, so that old technical, moral, and legal recipes will suffice. We will call these “deflationists”. Some deflationists simply embrace the first horn of Matthias’ dilemma. If we have reasons to believe that the introduction of learning automata will bring significant societal benefits, for instance, in terms of efficiency, effectiveness, or overall safety of the processes, we may and should introduce them in society, even if this will lead to some erosion of human moral responsibility (in any possible sense) for the (ex hypothesi fewer) accidents. For instance, many believe that, if the introduction of AI in critical tasks such as medicine, transport, and warfare is likely to reduce the overall number of deaths or injuries by reducing the impact of human error, then we should not care too much about the risk of gaps in moral responsibility. This is too simple a utilitarian approach, which tends to obfuscate the moral relevance of individual moral and legal duties and rights (Santoni de Sio, 2017). It also ignores the moral relevance of fairness and distributive justice in assessing the moral risks of technology (Hayenhjelm & Wolff, 2012). However, Simpson and Müller (2016) have proposed a more sophisticated version of deflationism, one which embraces the first horn of Matthias’ dilemma (accepting learning systems and some gaps in moral culpability) while trying to pay due respect to individual rights and distributive justice. Simpson and Müller admit that AI may create some culpability gaps, but, so they claim, this is not new. Non-intelligent, non-learning systems like bridges and buildings also sometimes collapse (fail) without any human person being culpable for that, and we as a society find this acceptable, insofar as all reasonable precautions had been taken and nobody could have reasonably prevented the accident. The same can be said about AI. In contrast to utilitarian approaches, Simpson and Müller take a “non-aggregative” and “contractarian” approach to the ethics of risk and claim that we as a society have two obligations: to reduce the aggregate risks involved in the use of technology, and to minimise the risk of harm to each of the persons involved. If these two goals are achieved by the introduction of a given technology, and a “tolerance level” for failure is set that is “as low as technologically feasible”, Footnote 9 then accidents will happen for which no one will be culpable (there will be a culpability gap), but this will not be a moral problem, because all relevant considerations in terms of safety, justice, and responsibility will have been respected.
We agree that culpability should not be pursued always and at all costs, and that some culpability gaps are unavoidable and morally acceptable. However, we doubt that Simpson and Müller have identified a fair system for deciding which culpability gaps with AI systems are morally acceptable. Indeed, justice and rights can be preserved by setting reasonable standards of care, which may also allow for fair culpability attribution and the prevention of unwanted responsibility gaps. But this is a statement of the problem, not its solution: it is close to question-begging. To what extent it is possible to establish standards of reasonable care for the design, use, and regulation of AI in the same way in which we do for buildings and bridges is precisely the question raised in the present-day (legal) debate on responsibility gaps for AI. In relation to US law, Ryan Calo pointed out that culpability gaps with AI may happen precisely because the traditional assumptions about what should count as sufficient intention, knowledge, and foreseeability on the side of the defendant (criminal law) may not apply, due to the emergent and unpredictable behaviour of AI (Calo, 2015, p. 542). Also, traditional assumptions about designers or managers having “exclusive control” over artefacts can hardly apply to systems whose behaviour is influenced by a chain of different actors (manufacturers, programmers, users). How to achieve a fair and rights-respecting distribution of risk and culpability in this new context is still open to discussion. Moreover, the accountability gap potentially created by the unpredictable, opaque, and interactive behaviour of artificial intelligence may also make it difficult to establish a priori whether the system will be able to comply with the requirement of fairness: designers and users of artificial intelligence may simply not be in the position to know that a system is discriminatory or otherwise unfair in the first place.
A related concern with Simpson and Müller’s proposal is that by stating that the “threshold of safety” eventually determining culpability attribution should be “as low as technologically feasible”, they risk encouraging the belief that this is mainly a technical question, one that is up to engineers and “experts” to solve. However, giving engineers and other “experts” the power to decide on this threshold may result in a technocratic approach sharpening rather than solving the culpability gap: technical experts may (honestly) believe that nobody is to blame for an accident because they have done what could reasonably be expected of them, but this may not match the well-informed judgement of a non-expert and the moral and legal requirements of society. In fact, the recent legal history of the standards of negligence in medical practice shows a trend towards abandoning a system in which courts rely exclusively on expert professional opinion for their assessment of professional malpractice. This is due to the recognition that professional opinion may sometimes be “unreasonable” or “irresponsible”, too conservative, biased, or otherwise reflect the interests of the members of the profession rather than the interest of the public. Also from a normative point of view, courts being “dictated to” by experts was considered a dangerous shift towards technocracy and not in line with a “right-based” society (Mulheron, 2010).
Relatedly, from a broader perspective, the old-style division of labour – engineers give facts to regulators and regulators establish whether the technology is safe enough – does not incentivise mechanisms of moral accountability between engineers and societal stakeholders (Funtowicz & Ravetz, 1990), insofar as it may not sufficiently promote a well-informed public deliberation on what a “reasonable threshold” of safety (and other values) in emerging technology should be. Nor does this approach incentivise engineers to go beyond the current state of the art in technology and look for innovative solutions which may improve the capability of current technology to better satisfy complex and potentially conflicting societal demands (Van den Hoven et al., 2012), such as, for instance, higher levels of safety combined with better predictability. In the terminology introduced above, deflationist strategies like Simpson and Müller’s fail to address gaps in the active responsibility of engineers. They may also fail to address gaps in (public) accountability, insofar as they seem to delegate the setting of a reasonable standard of care to experts.
The Risks of Solutionism
Others recognise the novelty of the responsibility gap but do not believe in its intractability and have tried to offer new technical and legal solutions to address it. Whereas some of these solutions might in principle be part of a comprehensive strategy to address the problem of the responsibility gap in its entirety and complexity, their authors often fall short of providing such a comprehensive plan. Moreover, even when this is not their authors’ intention, these proposals run the risk of fuelling the temptation of “solutionism” (Morozov, 2013; Stilgoe, 2017): the belief that complex socio-technical and political problems can be “solved” (or avoided) by the introduction of new techniques. We distinguish here two main approaches to addressing the responsibility gaps, the technical and the legal, and we suggest that, if taken in abstraction from the broader picture presented above, these run the risk of leading, correspondingly, to “technical solutionism” and “legal solutionism”.
Explainable AI and “Technical Solutionism”
One of the commonly recognised causes of generically defined “responsibility gaps” is, as we have seen in the previous sections, the lack of transparency, explainability, and interpretability of machine-aided decision-making, whether defined as merely algorithmic or as properly AI. Although there are multiple senses of, and extents to which, a system can be said to be explainable (Doran et al., 2017), we will stick to the most basic form, where “a user can not only see, but also study and understand how inputs are mathematically mapped to outputs”, and where transparency of the whole process is granted. Theorists have identified transparency and explainability of algorithms and AI as an important element for safeguarding the “traceability” of responsible human agents and, consequently, a fair attribution of moral responsibility (Mittelstadt et al., 2016). We believe that algorithmic explainability, though constituting one interesting element in a complex strategy to address the responsibility gaps, is neither a sufficient nor a necessary condition for addressing them. Believing the contrary would amount to what we have called “technical solutionism”: the belief that new technological solutions may be sufficient in themselves to address complex socio-technical and political problems.
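For contrast with the opaque classifier sketched earlier, here is our own illustration (not a proposal from the cited authors) of what this most basic form of explainability can look like: in a logistic regression, the mapping from inputs to outputs is a single inspectable formula, sigmoid(w·x + b), so a user can in principle see and study exactly how each input feature moves the output.

```python
# A minimal sketch of a transparent input-to-output mapping, assuming scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 3))                    # three illustrative input features
y = (2 * X[:, 0] - X[:, 1] > 0).astype(int)      # a simple hypothetical decision rule

model = LogisticRegression().fit(X, y)

# The whole decision logic is one formula: p = sigmoid(w . x + b).
# Printing w and b exposes how each input is mathematically mapped to the output.
print("weights:", model.coef_[0])
print("intercept:", model.intercept_[0])

x_new = np.array([[0.5, -1.0, 0.2]])
score = model.coef_[0] @ x_new[0] + model.intercept_[0]
print("manual probability:", 1 / (1 + np.exp(-score)))
print("model probability: ", model.predict_proba(x_new)[0, 1])
```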
One reason why explainability is not sufficient to address the responsibility gaps is that, as seen, such gaps are due to different problems that are not entirely addressed by increasing the transparency and explainability of the algorithmic parts of a system. Mittelstadt et al. (2016, p. 12), echoing Simon (2015), correctly pointed out that an important factor determining traceability, still under-researched, is the fact that responsibility is distributed, or diffused, “across a network of human and algorithmic actors simultaneously”, which is closely related to what we previously referred to as “the problem of many hands” and, more generally, “organisational black boxes”. Relatedly, algorithmic transparency and explainability do not necessarily allow for a fair attribution of moral and legal culpability. Very complex systems might in principle be understandable and open to scrutiny, but perhaps only “after the fact”, by a very select audience, and over a relatively long timeframe. This does not mean that their behaviour can be sufficiently understood and predicted in advance by any of the human agents involved in their design, use, or regulation. Nor does it entail that any of these agents has been given sufficient awareness and capacity to comply with specific obligations, to prevent some outcome, to explain it once it has happened, or both.
Relatedly, explainability might in other cases not be necessary to address culpability, accountability, and active responsibility gaps. Some culpability and accountability gaps with AI can potentially be addressed by providing some agents along the chain of design, development, regulation, and use with sufficient knowledge of the limitations of the technical systems (including their opacity, etc.), and sufficient awareness of their obligation to prevent unwanted results in the deployment of such technologies (Santoni de Sio & van den Hoven, 2018) – that is, by promoting their active responsibility. In the presence of sufficient knowledge and training, then, for instance, a military commander can reasonably be held accountable and culpable for his conscious decision to deploy an unpredictable technical system in a military mission that ends up in the unlawful killing of innocent civilians. Similarly, the manager of a car manufacturing company and/or the chair of a road safety agency can legitimately be held accountable and culpable for their decision to put, or allow, on the public road a vehicle whose behaviour, as they well knew, could not be sufficiently predicted and explained. In a relevant sense, they could and should have known better. Footnote 10
New Liability Regimes and the Risks of “Legal Solutionism”
Some legal scholars and policy-makers have recently recognised that the introduction of AI systems may potentially increase the number of accidents, and/or introduce new kinds of accidents, and/or increase the number of accidents for which victims may not receive compensation, due to the difficulty of applying existing legal liability regimes, typically negligence and product liability, to any of the actors involved in the network of design and use of new technologies: the legal culpability gap (Calo, 2015; Pagallo, 2013). Footnote 11 To address these issues, they have committed themselves to working towards revising existing liability mechanisms or introducing new ones, which may allow for compensating victims of accidents involving AI for which no clear human fault can be attributed. Examples of such regimes are the faultless compensation schemes for damages caused by AI systems discussed by Schellekens (2018) and the introduction of electronic personhood, proposed among others by the European Parliament resolution on Civil Law Rules on Robotics (Delvaux, 2017) and discussed among others by Koops et al. (2010) and, in a critical fashion, by Bryson et al. (2017). However, an exclusive focus on bridging the liability gap may be insufficient and potentially self-defeating from the point of view of the broader plan of bridging the responsibility gaps.
As we have explained in the section on culpability gaps above, there are several reasons why we might want to preserve fair practices of attribution of moral blame and culpability, as well as “active responsibility” practices, in addition to fair and effective practices of legal liability and compensation schemes. Answering the questions “who should be legally punished” or “who pays” (Pagallo, 2015) for wrong AI-mediated decisions and behaviours is not sufficient to answer the broader questions of who is responsible for them and how to prevent these outcomes in the first place. Liability regimes grounded in individual culpability or fault (such as criminal liability and criminal and civil liability by negligence) might be well suited to deal with clear and bold individual responsibilities. However, they might be less adequate for coping with substantial shared responsibilities deriving from manifold small individual faults. According to an example of van de Poel et al. (2012), it would probably not make sense to hold individual people liable for their share of pollution, but that does not mean that they cannot be blamed or shamed for it, or that other policy and legislative tools cannot be used to discourage individual and corporate behaviour that increases pollution. The same can be said about the effects of digital technologies.
Faultless liability regimes and legal personhood of artificial agents risk shifting attention away not only from culpability and accountability but also from active responsibility. In fact, these approaches might underestimate the importance of promoting proactive, preventive approaches to creating safe and societally beneficial technical systems. Liability regimes are managed by the State and require strict standards of causation, evidence, and seriousness. Effective as they may be in (dis)incentivising some behaviour, those regimes do not and cannot cover all undesirable behaviour. It is not possible or desirable, for example, to have the State check and judge all professional behaviour, but that does not mean that anything goes. Since risky behaviours without (provable) harm would fail to be sanctioned under a liability scheme, professionals’ good conduct can only be ensured by relying on their own awareness and knowledge of their moral and legal responsibility towards society, and on their individual capacity and motivation to comply with it. Moreover, corporate or civil liability may not be a strong incentive to behave well, for example, for agents or companies who can easily afford to pay any fine or compensation or who may rely on the difficulties of enforcing legal norms in this field Footnote 12 ; blaming and shaming and strong political initiatives may sometimes be a more effective tool.
In addition to adjusting and revising liability regimes, we also need to create better mechanisms to promote the moral accountability of all agents involved in the design and use of AI systems; better mechanisms of public accountability for those who design or regulate AI systems operating in the public space; and, possibly most importantly, mechanisms and policies to promote a better culture of active responsibility among all the designers, managers, controllers, and users of AI systems.
The Need for a Comprehensive Approach to Address the Responsibility Gaps and the Promises of “Meaningful Human Control”
Our analysis has shown that the problem of the responsibility gap is not one problem but a set of at least four interconnected problems – gaps in culpability, moral accountability, public accountability, and active responsibility – caused by different sources, some technical, others organisational, legal, ethical, and societal. It has also shown that partial approaches to the responsibility gap – i.e. those focusing only on one form of responsibility and/or only one source of gaps – bring the risk of offering not only an incomplete picture of the problem but also a distorted one, one that hinders the creation of an appropriate response (Fig. 1). In this last section, we will try to outline what a more comprehensive approach may look like (Table 2). We will do so by referring to what we consider to be a very promising approach in the literature on the ethics of AI: the recent interpretation of “meaningful human control” by Santoni de Sio and van den Hoven (2018). Future work will have to further develop and substantiate this proposal.
Table 2 Meaningful human control (MHC) promoting the four types of responsibility
