Artificial intelligence and liability law: the e-person from an economic point of view

One feels a little like being in “Back to the Future” these days when following the legal discussions about an “update of the BGB”1 for the digital age. How should a legal text from the analog age be applied in the digital era? Is the legal framework still up to date? The developments towards the so-called fourth industrial revolution and the data society in particular seem to reinforce these questions. Visions of the future and utopias of independent or autonomous machines that are equipped with artificial intelligence and make their own decisions seem to reduce the current analog legal system to absurdity. This applies in particular when such AI systems cause damage and questions of liability law come onto the agenda.

Classification of the liability discussion of AI systems

Imagine that a child steps onto the street immediately in front of a vehicle that is equipped with an autonomous steering and braking system and is travelling at full speed. Braking to avoid a collision is no longer possible for physical reasons. The artificial intelligence on which the vehicle is based and its algorithms recognize three possibilities.2 The vehicle can continue straight ahead, which immediately leads to the death of the child (solution 1). The vehicle can swerve left onto the sidewalk, where a young couple is walking, who die in the collision with the vehicle (solution 2). Or the vehicle can swerve to the right, hitting a tree, with the impact killing the “driver” of the autonomously steering and braking vehicle (solution 3). Apart from the fact that an AI system, unlike a human, is able to analyze the information and the consequences of each action at lightning speed and to decide on the “best” of the three deadly solutions, the question of liability for the resulting damage immediately arises.

Three possibilities are conceivable: First, one could think of liability on the part of the manufacturer (and its suppliers), in particular through product liability. In addition, liability of the user (operator) or of the owner (keeper) of the autonomous vehicle could be considered. Finally, an AI system could be liable itself. This last possibility, however, is not provided for in the conception of “analog” liability law: the current law of liability presupposes human behavior. The question is therefore whether an update towards a digital liability law should in future also attach to machine behavior. The current legal discussion is considering a new legal construct in the form of the so-called e-person,3 i.e. autonomous AI systems could be assigned their own legal personality. For our example, this would mean that the autonomous vehicle itself would be fully liable for the resulting damage in each of the three cases.

Utopia and technological reality of artificial intelligence

The plan to assign artificial intelligence its own legal personality and to let the AI itself be liable for any damage incurred is ultimately based on a flawed understanding of the technological interdependencies of such a possible e-person. From a technical point of view, damage can usually be traced back to three main problems, which by no means result from willful or negligent behavior of an absolutely autonomous AI system, but rather reveal human activity:

Damage from incorrect or bad data

Artificial intelligence makes data-based decisions, and these decisions can be responsible for damage caused by AI. In particular, the so-called development or training data of an artificial intelligence can lead to incorrect decisions. For an autonomous vehicle, for example, the context in which the artificial intelligence is deployed can be decisive: it is possible that an artificial intelligence recognizes sheep or other large and small livestock only in front of a green meadow, but not on the street. Such errors can be reduced or even avoided by using suitable training data. In addition to incorrect data, the quality of the data plays a decisive role in the functionality of an artificial intelligence. The amount of data is also elementary, which in some cases raises questions about access to data (especially in the case of the extensive data sets of an industry leader such as Google)4 as well as questions of standardization and interoperability. Furthermore, data quality can be impaired by missing values or incorrect processing. Besides the training data, the so-called running or production data can also be of poor quality, i.e. the data that is collected after the artificial intelligence has been put into operation and that brings about the corresponding updates (in the sense of learning) of the artificial intelligence. As a rule, the manufacturer of an artificial intelligence or a data supplier is responsible for data quality, so that corresponding claims against the manufacturer under product liability5 or tort law (§ 823 BGB) would come into consideration. Where the user can influence the data or the operation - e.g. the user of an autonomous vehicle can intervene in the driving process - liability claims against the user could also result. Finally, an external intervention in the data quality can take place, e.g. in the form of data poisoning, which can give rise to civil liability and criminal responsibility of an external third party.
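The sheep-in-front-of-a-meadow problem can be made concrete with a deliberately simplified sketch (the classifier, features and numbers below are invented for illustration and are not taken from any real driving system):

```python
# Toy illustration of the training-data problem described above:
# a classifier that has only ever seen sheep in front of green
# meadows misreads the same animal on a grey road.

import statistics

# Each training sample: (background_greenness 0..1, wooliness 0..1, label)
training = [
    (0.90, 0.8, "sheep"), (0.80, 0.9, "sheep"), (0.85, 0.7, "sheep"),
    (0.20, 0.1, "no_sheep"), (0.30, 0.2, "no_sheep"), (0.10, 0.15, "no_sheep"),
]

def centroid(label):
    # Mean feature vector of all training samples with this label.
    pts = [(g, w) for g, w, l in training if l == label]
    return (statistics.mean(p[0] for p in pts),
            statistics.mean(p[1] for p in pts))

def classify(sample):
    # Nearest-centroid decision in feature space.
    def dist(c):
        return (sample[0] - c[0]) ** 2 + (sample[1] - c[1]) ** 2
    return min(("sheep", "no_sheep"), key=lambda l: dist(centroid(l)))

# A woolly sheep on grey asphalt: greenness is low, so the learned
# correlation "sheep = green background" produces the wrong answer.
print(classify((0.05, 0.9)))   # -> "no_sheep" despite high wooliness
```

Because the training data only ever paired sheep with green backgrounds, the model has learned the background rather than the animal - exactly the kind of context dependence that suitable training data is meant to avoid.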

Damage caused by wrong decision-making processes

The objective function is the basis for the decision-making process of an artificial intelligence. In an autonomous vehicle, for example, one could think of accident and/or damage minimization as the objective function: if an accident can no longer be avoided - as in our introductory example - the damage should be kept as low as possible.6 If the objective function is set incorrectly, e.g. as mere economic efficiency instead of damage minimization, damage can result from the wrong motivation. In addition, the probability of an accident can be deliberately influenced by manipulating the objective function. While the first case gives rise to liability claims against the manufacturer of the artificial intelligence, the second case also gives rise to criminal responsibility of the manufacturer or of an external third party.
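How directly the objective function drives the outcome can be illustrated with a minimal sketch (all options, weights and payout figures below are hypothetical and serve only to make the mechanism visible):

```python
# Minimal illustration: the "decision" of an autonomous system is just
# the minimizer of whatever objective function its developers chose.
# All weights and figures are hypothetical.

options = {
    "straight":     {"fatalities": 1, "third_party": True,  "vehicle_loss": 0.0},  # solution 1
    "swerve_left":  {"fatalities": 2, "third_party": True,  "vehicle_loss": 0.0},  # solution 2
    "swerve_right": {"fatalities": 1, "third_party": False, "vehicle_loss": 1.0},  # solution 3
}

def total_harm(o):
    # Objective A: damage minimization, lives weighted far above property.
    return 1000 * o["fatalities"] + o["vehicle_loss"]

def expected_payout(o):
    # Objective B, the "wrong motivation": minimize the manufacturer's
    # hypothetical liability payout, assuming third-party victims cost
    # far more than the vehicle's own occupant.
    per_fatality = 100 if o["third_party"] else 10
    return per_fatality * o["fatalities"] + 5 * o["vehicle_loss"]

for objective in (total_harm, expected_payout):
    choice = min(options, key=lambda name: objective(options[name]))
    print(objective.__name__, "->", choice)
# total_harm      -> straight
# expected_payout -> swerve_right
```

The same situation yields different “best” decisions under the two objectives, which is why an incorrectly set objective function is a source of damage attributable to the manufacturer rather than to the machine.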

Damage from hardware failure

Damage can also occur, for example, because a sensor cannot record the environmental data required for the functioning of an AI system. In this case, a liability claim can arise from a warranty against the manufacturer or from a breach of the user's duty of care, for example in the case of neglected maintenance and inspection obligations.

All three technological error chains show that AI-induced damage can very well be traced back to human action, which establishes a corresponding claim for damages.

Externalities and other market failures

Not only can the justification for introducing an e-person be flawed; a corresponding reform of liability law would also have serious economic side effects. What incentive effects would result from a liability rule that transfers all responsibility for resulting damage to an e-person? Who would then pay for the damage caused? From an economic point of view, such a reform particularly favors market failure due to externalities, but also distortions of competition and undesirable distributional effects.

Externalities

A liability law that assigns artificial intelligence its own legal personality would have potentially serious incentive effects on the producers and developers of AI systems. From an economic point of view, a “moral hazard” problem arises: after all, damage costs resulting from a faulty artificial intelligence would no longer be borne by the manufacturer, but by the artificial intelligence itself. This would lead to a kind of risk pooling among the various participants in the production and operating process, in which the consequences of careless behavior are absorbed by the other, careful members of the pool. Against this background, manufacturers could externalize potential claims for damages, because the damage costs would be borne by the general public or by a corresponding insurer. Costs that should actually be borne by the polluter, and thus by the manufacturer of an artificial intelligence, are externalized. In this situation, the private and the social costs of AI production diverge, with immediate consequences for the level of care exercised in programming an artificial intelligence. If the manufacturer does not have to bear the consequences of a faultily produced artificial intelligence, or of one trained with poor data, it will inevitably supply poorer quality.7 Put differently, the cost of additional care is no longer offset by any private gain from reduced damage costs or a reduced damage probability, so the privately chosen level of damage will be significantly higher than the socially efficient level. From an economic point of view, an AI developer should exercise additional care as long as the additional social benefit (the reduction in damage costs) outweighs the additional social cost (the cost of care). Without responsibility for the damage costs, however, there is no incentive to invest in a higher level of care. Efficient control of the level of care is accordingly only possible if damage is settled with the party who causes it.
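The underlying care-level calculus is standard in the economic analysis of law and can be stated compactly (a textbook formalization with our own notation, not drawn from the article itself):

```latex
% Social optimum: choose the care level x to minimize total social
% cost, where c(x) is the increasing cost of care, p(x) the decreasing
% accident probability, and D the damage if an accident occurs.
\[
  \min_{x \ge 0}\; c(x) + p(x)\,D
  \quad\Longrightarrow\quad
  c'(x^{*}) = -\,p'(x^{*})\,D .
\]
% At x*, the marginal cost of care equals the marginal reduction in
% expected damage. If an e-person (or a pool) bears D, the manufacturer
% privately minimizes only c(x) and chooses x = 0, i.e. a level of
% care below the socially efficient x*.
```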

In this context, Schaub emphasizes that if the e-person is to be personally liable, its obligations must also be matched by corresponding liability assets in order to develop the desired legal and economic effects.8 The manufacturer could thus be held at least partially responsible, especially since it would have to provide these liability assets. This is the direction taken by the European Union, which calls for a liability fund with compulsory insurance and strict liability as the liability regime,9 i.e. liability without fault, on the assumption that the operation or placing on the market of an artificial intelligence in itself poses a risk. However, the compulsory insurance of an e-person against AI-induced damage costs would also entail serious distortions of competition and distributional effects.

Distortions of competition and distributional effects

In particular, the required liability assets could directly inhibit market entry for small companies engaged in AI development. Such companies make up a significant share of German and international AI start-ups and have few financial resources.10 The potentially enormous liability for damages could thus represent a major hurdle for founding new AI start-ups and for developing existing ones. The necessary insurance obligation of an e-person could moreover lead to a problem of adverse selection, especially since an insurer can hardly differentiate between careful and less careful developers when calculating the insurance premium, given the complexity of artificial intelligence. Such information asymmetry could lead all insurers to offer average premiums (corresponding to the average risk), which are too expensive for good risk types. One could also speak of a cross-subsidization of less careful insured parties by careful ones. The result is a crowding out of good risk types (companies with a high level of care) in favor of bad risk types (companies with a low level of care), so that at the end of this chain of effects only bad AI products remain - analogous to Akerlof's “market for lemons” argument.11
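A stylized numerical example (our own hypothetical figures, not from the article) shows how pooled premiums drive careful developers out of the market:

```latex
% Two developer types in equal shares, unobservable to the insurer:
% careful with expected loss L_c = 10, careless with L_n = 30.
% The insurer can only charge the pooled (average) premium
\[
  \pi \;=\; \tfrac{1}{2}\,L_c + \tfrac{1}{2}\,L_n
      \;=\; \tfrac{1}{2}\cdot 10 + \tfrac{1}{2}\cdot 30 \;=\; 20 .
\]
% The careful type pays 20 to cover an expected loss of 10 - a
% cross-subsidy to careless types - and may exit the market entirely;
% the careless type gains (30 > 20) and stays. Premiums then drift
% towards 30 and only "bad" risk types remain: the lemons logic.
```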

Conclusion on artificial intelligence and liability law

It becomes clear that the introduction of an e-person, and thus the assignment of a separate legal personality to artificial intelligence systems, not only ignores the technological reality but could also result in considerable market failure problems. Nonetheless, it should be emphasized that artificial intelligence systems do pose liability problems that go beyond the analog understanding of the current legal system and therefore indicate a need for action. Beyond the steps already taken at the European level, there is still a long way to go to keep pace with technological progress and the particular complexity of autonomous systems. Borges, for example, emphasizes that a combination of a reversal of the burden of proof (against the background of the complexity of algorithm-based decision-making processes) and strict liability (possibly with compulsory insurance) is necessary.12

Regardless of the liability dimension of artificial intelligence systems, classic dilemmas such as our introductory example raise a further question: is it socially desirable for an artificial intelligence to consciously decide, by weighing the options, in favor of one of the three fatal outcomes? This immediately raises the question of the criteria, or the optimization calculus, by which an artificial intelligence should weigh the options when the equally ranked legal good “life” is threatened several times over.13 Against this background, the ethical dimension of autonomous systems becomes clear. As long as the civil law responsibility of AI systems remains unclear, however, one should be guided by the principles that also apply to humans: in the case of unequal legal goods - such as the decision between a human life and an animal or other movable property - the artificial intelligence should recognize the higher-ranking good and be able to decide accordingly. In the case of equal legal goods, by contrast - if the autonomous vehicle is to decide whether the child (solution 1), the strolling couple (solution 2) or the driver of the autonomous vehicle (solution 3) should die - ethical and moral decency forbids making a “best” decision. Against the background of these numerous ethical questions, which may need to be accompanied by legal norms, the EU Commission has now formulated ethics guidelines14 that can at least provide orientation, even if many questions remain open.

  • 1 Cf. F. Faust: Expert Opinion Part A: Digital Economy - Analog Law - Does the BGB Need an Update?, Negotiations of the 71st German Lawyers' Conference, Essen 2016, Vol. I/A, Munich: C. H. Beck.
  • 2 See E. Awad et al.: The Moral Machine Experiment, in: Nature, Vol. 563 (2018), No. 7729, pp. 59-64. The authors analyze the results of an experiment in which they evaluate over 40 million decisions from several million participants in 233 countries. For the experiment and an impression of how ethics gets into the computer, see http://moralmachine.mit.edu/ (5.6.2019).
  • 3 Cf. G. Borges: Legal Framework for Autonomous Systems, in: Neue Juristische Wochenschrift, Vol. 71 (2018), No. 14, pp. 977-1048; cf. R. Schaub: Interaction Between Man and Machine. Liability and Intellectual Property Law Issues in the Independent Further Development of Autonomous Systems, in: JuristenZeitung, Vol. 72 (2017), No. 7, pp. 342-349; cf. R. Schaub: Responsibility for Algorithms on the Internet, in: InTeR, 2019, No. 1, pp. 2-7; cf. O. Keßler: Intelligent Robots - New Technologies in Use, in: MultiMedia und Recht, Vol. 18 (2017), No. 9, pp. 589-594.
  • 4 Here it becomes clear that, in addition to liability law issues in the context of AI, there are also central competition law issues, in particular concerning the regulation of access to data. Closely related is the question of to whom data belongs, especially since the law does not explicitly define ownership of data. Cf. C. Rusche, M. Scheufen: On (Intellectual) Property and Other Legal Frameworks in the Digital Economy: An Economic Analysis of the Law, IW-Report, No. 48, 2018.
  • 5 The Product Liability Act (§ 2 ProdHaftG) defines a product as a “movable object”. In this respect, it currently seems legally unclear whether software-based artificial intelligence is a product within the meaning of the act and whether product liability is applicable. The manufacturer's product liability could therefore be excluded in many cases. See R. Schaub: Responsibility for Algorithms ..., loc. cit.
  • 6 At this point, at the latest, it becomes clear that the decision-making process is also directly associated with basic ethical motives and value judgments. After all, the question arises as to which of the three solution options in the introductory example is associated with the least damage. If material damage is weighed against a human life, it is immediately clear that the AI should opt for the material damage. However, if we compare three deadly solution options with one another, we reach the limits of ethical guidance. Which option our AI decides on will depend directly on the objective function on which its decisions are based.
  • 7 In this context, it should be emphasized that consumers cannot correctly assess this quality, especially against the background of the complexity of AI as a product. There is an asymmetric distribution of information here.
  • 8 R. Schaub: Interaction Between Man and Machine, loc. cit.
  • 9 See in particular point 58 of the European Parliament resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)), in: Official Journal of the European Union, C 252, 18.7.2018, pp. 239-257.
  • 10 Cf. M. Schröder: This is how German AI start-ups perform in an international comparison, in: Handelsblatt, 23.4.2019, https://www.handelsblatt.com/technik/vernetzt/kuenstliche-intelligenz-so-schneiden-deutsche-ki-start-ups-im-internationalen-vergleich-ab/24245034.html (7.5.2019).
  • 11 Cf. G. A. Akerlof: The Market for “Lemons”: Quality Uncertainty and the Market Mechanism, in: Quarterly Journal of Economics, Vol. 84 (1970), No. 3, pp. 488-500.
  • 12 G. Borges, loc. cit.
  • 13 Cf. O. Keßler: Intelligent Robots - New Technologies in Use, in: MultiMedia und Recht, Vol. 18 (2017), No. 9, pp. 589-594.
  • 14 See European Commission: Ethics Guidelines for Trustworthy AI, 2019, https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai (5.6.2019).

Title: Artificial Intelligence and Liability Law: The E-person from an Economic Point of View

Abstract: Artificial intelligence (AI) systems seem to challenge the current legal framework, as rules from an analog world are applied to legal matters of the digital age. This becomes especially apparent when AI systems cause damage. In this context, the academic legal discussion is currently asking whether an AI system should itself be liable. This article challenges that discussion, arguing that the debate is primarily driven by an incorrect understanding of the technological potential of AI. Moreover, a reform that gives machines a legal personality would have serious economic effects such as externalities, distortions of competition and undesirable distributional effects.

JEL Classification: K13, D62, D82