
Intelligent no-fault insurance for robots

Abstract

Artificial intelligence and robotics have become familiar features of everyday life, and are fast becoming ubiquitous in the developed world. The number of different uses for robots continues to expand, creating dramatic increases in the sales of robots, and hence in the number of robots in everyday use.

The merger of robots with AI software will mean an increase in the dangers created by many types of robot, as robots become increasingly autonomous and therefore act in unexpected ways. AI will bring about new challenges, including dangers of physical injury and even death. These dangers compel us to ask the question – “When robots do wrong, who should pay?” In an earlier paper (Levy, 2012) I introduced the idea of a form of mandatory no-fault insurance, initially payable at the point of purchase of a robot by its owner, with technology to ensure that the owner renewed the insurance premiums when necessary, failing which the robot would cease to function.

The present paper draws extensively on extracts from the sizeable and fast-growing literature on the subject that has sprouted in the intervening eight years, examining the question “Who should pay?”. Within that literature traditional tort-based remedies have been discussed aplenty and expanded upon, including remedies based on the concept of a robot being regarded as some sort of quasi-person which could itself be held responsible for committing a wrong, and therefore be held legally liable to pay compensation. Another complication to add to the legal mix is the dramatic recent successes in the field of machine learning, which allow robots to learn and therefore to become unpredictable in their actions, as they modify their behaviour according to what they have learned. Diagnosing what was at fault in an accident-causing learning robot will be difficult at the very least, and in many cases impossible.

In addition to considering legal remedies for tort, the paper also examines forms of insurance. The paper recommends the no-fault insurance I proposed, extended by the use of machine learning techniques for the estimation of risks and the setting of insurance premiums.

1. Introduction

Artificial intelligence and robotics have become familiar features of everyday life, and are fast becoming ubiquitous in the developed world. The number of different uses for robots continues to expand, spawning an explosion in the number of robots in everyday use. There are robotic vehicles, including cars that drive themselves and unmanned aerial vehicles (drones). There are robotic consumer products such as the Roomba vacuum cleaner, lawnmower robots, Sony’s AIBO robotic dog, and other types of toy robots. There are robots designed to look like people, such as Honda’s ASIMO and the sex robots manufactured by Realbotix. And there are robots for use in education, entertainment, factory assembly lines and disaster site searches, as well as military and security robots, medical robots, and more.

Unsurprisingly the sales of robots are increasing steadily year on year. The International Federation of Robotics’ annual World Robotics Report for 2019 announced that worldwide sales of industrial robots alone reached 422,000 units in 2018, with a value of US $16.5 billion, an increase of 6% over 2017 shipments (IFR, 2019). And the International Data Corporation (IDC) forecast that global spending on all types of robotic systems would total $115.7 billion in 2019, an increase of 17.6% over the 2018 figure. IDC expects this spend to rise to $210.3 billion by 2022 – a compound annual growth rate of 20.2% (IDC, 2018).

The merger of robots with AI software will mean an increase in the dangers created by many types of robot, as robots become increasingly autonomous and therefore act in unexpected ways. As Jin Yoshikawa remarks:

“In the following decades, AI will continue to transform more fields and deliver astonishing advancements in convenience, comfort, safety and security. At the same time, however, AI will bring about new challenges. AI will offend, disrupt, crash, breach, incite, injure, and even kill in unexpected ways. Unlike traditional injuries, tort law[1] will have difficulty finding the injuries caused by highly sophisticated AI to be the fault of someone’s negligence or some product’s defect.” (Yoshikawa, 2019)

It is therefore necessary to ask the question – “When robots do wrong, who should pay?” In a previous paper (Levy, 2012) I introduced the idea of a form of mandatory no-fault insurance, initially payable at the point of purchase of a robot by its owner, with technology to ensure that the owner renewed the insurance premiums when necessary, failing which the robot would cease to function.

In the intervening eight years the question “Who should pay?” has created a sizeable and fast-growing literature. Traditional tort-based remedies have been discussed aplenty and expanded upon, including remedies based on the concept of a robot being regarded as some sort of quasi-person. By considering the concept of personhood for robots, scholars have introduced the bizarre possibility that a robot could itself be held responsible for committing a wrong, and therefore that the robot itself could be held legally liable to pay compensation.[2] Another complication to add to the legal mix is the dramatic recent successes in the field of machine learning. A robot that can learn will be able to modify its behaviour according to what it has learned, and therefore it can be argued that the robot’s software is not at fault if it causes an accident. But diagnosing what was at fault in a learning robot will be, at the very least, extremely difficult, and in many cases impossible. In their penetrating analysis of the legal liability for machine learning, Reed et al. (2016) summarize the diagnosis problem thus:

“A complicating factor, however, is that machine learning decisions are hard for humans to understand. This is because the way the technology makes its decisions is not solely designed by a human developer – rather the technology learns patterns and relationships from data, and thereby builds a model of the process involved. Machine learning encompasses a range of techniques, and the level of comprehensibility varies across that range. At one end lie technologies such as decision trees, which embody chains of logic which lead to each possible decision, and thus allow the reasons for any particular decision to be explained. At the other end, neural networks identify patterns in large data sets and then make decisions based on pattern matching. Here it may not be possible for even the technology producer to explain a decision in terms of the logic and causation which the law looks for.” (Reed et al., 2016)

And they tentatively conclude that:

“It is only possible to draw two firm conclusions from the discussion above. The first is that it is far too early to devise a liability regime for machine learning generally, because in the current state of development of the technology the law would rapidly fail to accord with technological change. The second is that, both in respect of liability and the fundamental rights which underpin individual autonomy, society will need to make some difficult choices, because if machine learning is to be adopted widely then the existing legal and regulatory settlement will not cope adequately.”
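To make concrete the explainability contrast that Reed et al. draw between decision trees and neural networks, the following minimal sketch (in Python, using scikit-learn, with invented toy data; it illustrates the general point only and is not drawn from any real liability data) prints the explicit chain of logic behind a decision tree’s decisions:

    # Contrast between an explainable and an opaque model.
    # The "accident" features and labels are invented for this sketch.
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Toy data: [robot_weight_kg, autonomy_level]; label 1 = accident occurred
    X = [[5, 1], [40, 3], [80, 4], [10, 2], [60, 4], [15, 1]]
    y = [0, 1, 1, 0, 1, 0]

    tree = DecisionTreeClassifier(max_depth=2).fit(X, y)

    # A decision tree embodies explicit chains of logic, so the reasons for
    # any particular decision can be printed and read by a lawyer or judge:
    print(export_text(tree, feature_names=["weight_kg", "autonomy"]))

    # A neural network trained on the same data would yield only numeric
    # weight matrices; no comparable chain of reasons could be printed.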

*****

The remainder of the present paper is structured in six further sections:

Section 2 discusses aspects of the legal liability for accidents – the tort-based remedies that often lead to litigation. In Sections 3 and 4 respectively, we consider robot cars and robot aircraft (drones), focussing on their liability regimes and gently introducing the advantages of no-fault insurance. In Section 5 we enter further into the complexities of robot insurance, covering no-fault insurance, usage-based insurance, and insurance funds. Section 6 presents a no-fault insurance scheme expanded from that proposed in Levy (2012). Section 7 consists of a summary, my conclusions, and some predictions.

[NOTE: Additional and very different levels of complexity are created for the legal liability and insurance of medical robots such as the da Vinci surgical robot, and for robotic prosthetic devices such as robotic exoskeletons. In order to avoid any discussion on those additional levels of complexity I do not consider medical products in this paper.]

2. Legal liability

An example of the complexity of the legal quagmire that can arise in the case of robot accidents is the (fictitious) dancing accident described in Levy (2012).

“One evening a young lady named Laura is at a dance, partnering a ballroom dancing robot. The band strikes up with the music for a cha cha cha but performs it so badly that the robot mistakes the music for a tango. So Laura and her robot partner are holding each other but dancing at cross purposes, and very soon they fall over. The robot lands on top of Laura and, being quite heavy, breaks both of her legs. Laura’s father is furious, and calls his lawyers, telling them to commence legal proceedings immediately and “throw the book at them”.

But at whom should his lawyers throw the book? Should it be the dance hall, or the online store that sold the robot to the dance hall, or the manufacturer of the robot, or the robot’s designers, or should it be the independent software house that programmed the robot’s tune recognition software, or the band leader, or even the whole band of musicians for playing the cha cha cha so badly?

Some months later the trial opens with all of these as defendants. Expert witnesses are called – experts on everything from robot software to motion sensor engineers to the principals of dance music academies. What do you think would be the result of the trial? I’ll tell you – the lawyers do very nicely thank you.”

One way to deal with such complexities is suggested by Caroline Cauffman (2018), who believes that “all persons responsible for contributing to the damage caused by [an] AI, either by negligence or intention or creating the risk of putting [the] AI on the market, should be liable in proportion to their contribution to the damage.” So in the case of Laura and her broken legs the band leader would indeed be held partly responsible, as would every member of the band. But how does one apportion blame amongst all the defendants? Isn’t a no-fault insurance policy so much simpler, less expensive to administer, and all round more convenient for the victim, than a court case with every player from the band as defendants?
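Under Cauffman’s principle the arithmetic of apportionment is trivial once contribution shares have somehow been agreed; the difficulty lies entirely in fixing those shares. A minimal sketch in Python (the shares and the damages figure are invented for illustration):

    # Proportional liability per Cauffman's principle; the contribution
    # shares are invented - establishing them is the hard (and costly) part.
    DAMAGES = 50_000.0  # assumed total compensation awarded to Laura

    contribution_shares = {  # must sum to 1.0
        "robot_manufacturer": 0.30,
        "tune_recognition_software_house": 0.30,
        "band_leader": 0.15,
        "band_members": 0.15,
        "dance_hall": 0.10,
    }
    assert abs(sum(contribution_shares.values()) - 1.0) < 1e-9

    for party, share in contribution_shares.items():
        print(f"{party} pays {DAMAGES * share:.2f}")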

The sizeable and fast-growing literature referred to above took its lead from a spurt of interest in, and designs for, autonomous cars. In 2013 Kevin Funkhouser proposed a no-fault scheme for autonomous cars, similar to that used in the USA for children’s vaccines (Funkhouser, 2013).[3] Initially at least, the emphasis in the law literature was on the question of legal liability. F. Patrick Hubbard, in the Florida Law Review in 2014, summarized the relevant liability law thus:

“Liability law is designed to achieve an efficient balance between the concern for physical safety and the desire for innovation. As a result, the basic tests for design defects and for instruction and warning defects have two distributional effects: (1) Sellers are liable for injuries caused by a failure to use a safer approach that costs less than the injuries it prevented; and (2) victims are not compensated for injuries where a safer approach costs more than the accidents that would have been prevented by the approach.” (Hubbard, 2014)

Hubbard considers two approaches that could be employed to compensate victims of injuries resulting from the distribution and use of sophisticated robotic cars. One is a no-fault first-party[4] automobile insurance scheme. The other is a no-fault insurance-type scheme that imposes, on the distributors of automobiles, the costs of establishing a fund to pay for accident injuries. In exchange for establishing this fund, distributors would be immune from legal actions for tort liability.

Hubbard identifies a significant problem in implementing the accident fund approach – it

“requires answers to a wide range of questions that have not been sufficiently addressed by supporters of no-fault schemes. Because the proposed no-fault schemes will be funded as part of the cost of the activity of manufacturing or distributing automobiles, the proposals must not only identify the activity but must also identify the costs associated with that activity. For example, workers’ compensation insurance covers the activity of employment and the injuries incurred while working. Other focused schemes operate in a similar fashion. Would all injuries caused by the activity of manufacturing or of distributing automobiles be covered by the scheme, including not only those involving some possible “defect” but also those involving such things as: (1) human error in driving or in maintenance; (2) bad weather; and (3) situations where the autonomous system was somehow involved but not defective? The scheme would also have to address issues like the following: (1) the nature and level of benefits; (2) the types of injuries covered (for example, would noneconomic damages like pain and suffering be included?); (3) the persons covered (would relationship interests like loss of consortium[5] be covered?); (4) coordination with other benefit schemes like workers’ compensation and social security; and (5) administration. Because the [accident fund] proposals fail to address these issues, they are so incomplete that they cannot be evaluated and thus should not be implemented.” (Hubbard, 2014)

Hubbard also considers the option of third-party liability schemes[6] to cover car manufacturers who are held “strictly liable”[7] for personal injuries caused by their driverless automobiles.

“This proposal is based on the concern that, where an automobile is fully autonomous (driverless), “assignment of liability is more complicated” and “current products liability law will not be able to adequately assess…fault…[because] current law is too cost prohibitive insofar as expert witnesses are likely to be required by the plaintiff.”

“In addition, a “strict liability” proposal will need to provide a new test for defective design, warnings, and instructions to replace the current cost-benefit approach.”[8]

Additional ammunition against the use of traditional approaches for determining liability in robot accident cases is provided by Peter Asaro (2016). He cites two factors to explain why traditional approaches are inadequate – the unpredictable behavior of autonomous robots, and the fact that robots are not accountable or liable in a legal sense. The unpredictability problem is exacerbated in the case of learning systems that can modify their behavior based on what they learn.

“To the extent that artificial agents become increasingly complex, those who build or deploy advanced AIs and robotics will not necessarily have intent or foresight of the actions those systems may take. This is especially true for systems that can substantially change their operations through advanced learning techniques, or those future systems that might even become genuinely autonomous in generating their own goals and purposes, or even intentions.” (Asaro, 2016)

The practical impossibility of robots being held legally accountable and liable is self-evident – robots cannot legally own any property for a court to award to an accident victim in compensation. However,

“At some point in the evolution of autonomous artificial agents, they might become legal and moral agents, and society will be faced with the question of whether to grant them some or all of the legal rights bestowed on persons or corporations. At that point, some or all of current liability law might apply to those AIs and robots that qualify as legal persons, though it might not be clear what the exact boundaries of a particular entity might be, or how to punish it.” (Asaro, 2011)

3. Robot cars (autonomous vehicles)[9]

Robot cars employ a variety of sensors, combined with sophisticated software and other technologies, to perceive and monitor their surroundings and to navigate while avoiding collisions. It has long been known that some 90% of road accidents are caused by human error, so one of the main perceived advantages of robot cars is that, by taking the human partly or entirely out of the driving loop, accident rates will fall dramatically. In a motoring world where errors caused by human drivers are reduced to zero, the vast majority of the accidents that do happen will be down to the technology.[10] Progress towards such a world is being made by the design and development of “advanced driver assistance systems” (ADAS) – electronic systems intended to increase automobile safety and road safety. But no matter how good that technology becomes, accidents will still happen.

The advent of robot cars has opened up a whole new area of motoring liability law, most notably in the USA and the European Union. Baumann et al. (2019) argue that

“Since vehicle functions are controlled by ADAS rather than by the driver, liability should be transferred increasingly to the technical system – and by extension to the manufacturer. Some experts in fact predict a general shift from driver liability to product liability in autonomous vehicles.”

And the traditional legal doctrines of product liability and negligence will be insufficient to handle the changing nature of liability caused by the introduction of autonomous vehicles. To rely on the traditional doctrines could result in an explosion in the costs of accident claims because of the complications surrounding the assessment of liability. However, and even though various driver liability models exist in the different EU member states, Baumann et al. (2019) note that there appears to be a degree of consensus within the EU that some form of no-fault insurance is required.

Alan Eastman (2016) concurs with the view that attributing liability is highly complex in the case of accidents involving cars operating autonomously:

“Given the complex nature of autonomous technologies, not only in their design but in how the technologies interact with each other and with humans, the task of assigning liability may be impossible to accomplish in a cost effective way with our current tort system. In addition, potential liability issues and uncertainty over just who may be liable could slow the development and adoption of autonomous driving technologies that offer so many benefits to society.”

“So, who is liable in the event of an automobile accident? Some legal scholars suggest assessing responsibility using products liability principles and focusing on the driver’s level of reliance on the autonomous vehicle. In purely autonomous mode, probably the manufacturer is liable based on manufacturing defect or design defect. If autonomous mode is disabled, probably the driver is liable due to negligence. When switching in and out of autonomous mode, probably the driver is liable, except manufacturer’s liability may be extended even in cases of driver error due to manufacturer’s failure to warn, or warning defect. Determining the driver’s level of reliance on autonomous features can be problematic, especially if one debates the appropriateness of driver decisions to engage and disengage autonomous features. Another complication stems from the failure to warn or warning defect, whereby manufacturers have a duty to provide instructions on the safe use of their product and to warn consumers of hidden dangers. Situations that seem to be driver negligence could be turned into manufacturer’s liability if it can be shown that manufacturer training programs were inadequate for the general driving population.” (Eastman, 2016)
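Eastman’s allocation heuristics can be summarised as a simple decision procedure. The sketch below is a schematic rendering of the quoted reasoning only; the mode names and the treatment of the failure-to-warn caveat are my own simplifications:

    # Schematic rendering of the liability heuristics quoted from Eastman.
    def probable_liable_party(mode: str, failure_to_warn: bool = False) -> str:
        """mode is one of 'autonomous', 'manual', or 'switching'."""
        if mode == "autonomous":
            return "manufacturer (manufacturing or design defect)"
        if mode == "manual":
            return "driver (negligence)"
        if mode == "switching":
            # Driver error can still become the manufacturer's liability if
            # warnings or instructions were inadequate (warning defect).
            return "manufacturer (warning defect)" if failure_to_warn else "driver"
        raise ValueError(f"unknown mode: {mode}")

    print(probable_liable_party("switching", failure_to_warn=True))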

Reasons cited by Eastman as to why no-fault auto insurance plans have failed to proliferate in the past include the failure of no-fault plans to lower insurance premiums, and opposition from trial lawyers, who would understandably be aghast at the thought of losing a highly lucrative segment of their revenue streams. But in favour of no-fault insurance Eastman points to research in the USA by O’Connell et al. (2011) which presents:

“an excellent argument that the shortcomings of no-fault insurance have more to do with how no-fault statutes are structured by state legislators and less to do with the concept itself. Their review of forty years of state experimentation with no-fault insurance shows that “data support the notion that no-fault is a far better compensation system than tort: it succeeds in paying more people faster and more in line with their economic needs.” (Eastman, 2016)

Eastman concludes that:

“Different approaches to the liability issue have been proposed, but revised no-fault automobile insurance maintained in the private sector offers the most benefits and the best chance of becoming a reality. America is a litigious society and one in which the party at fault is required to pay. As the determination of who is at fault becomes more difficult and costly, Americans need to accept a no-fault solution.”

4. Robot aircraft (drones)

“Drones are essentially robotic aircraft. They can be operated with a “pilot” sitting in a ground station, but many will have autonomous capabilities where the aircraft will operate on its own. These autonomous aircraft will operate through advanced software systems coupled with sensing hardware and GPS navigation packaged in a highly maneuverable airframe. A key feature will be an autonomous anti-collision system that must not only protect the drone from collisions with other drones but also protects it from collisions with birds, other aircraft, buildings and structures. The risks of crashes and incidents caused by drones in the national airspace are currently unknown. Risk profiles have yet to be determined due to the lack of available information. Insurance carriers may be able to extrapolate loss experience from the aviation industry but [this] will need to be adjusted for the issues of robotic autonomy in flight, autonomy in collision avoidance, and autonomy in critical issues such as lost links in which communications are cut off and the drone must make decisions on its own.” (Beyer et al., 2014)

The commercialization of these aerial robots is fast making them ubiquitous, bringing a whole new collection of risks to society. These risks include not only personal injuries and fatalities, but also damage to property and liability for the collection, transmission and use (or misuse) of data – both personal and commercially sensitive. The risks of injury and death presented by drones are very real, and drone accidents can be even more catastrophic than those caused by road vehicles, because a single drone accident could be directly responsible for bringing death and injury to hundreds or even thousands of people. As yet there is no recorded instance of a drone colliding with a passenger aircraft with disastrous consequences, but numerous near misses have been reported and it can only be a matter of time before the first such catastrophe occurs. When that happens, perhaps the inherent danger created by the mushrooming drone market will be better recognized by governments, and mandatory insurance will be applied to drones. In the USA, for example, the Federal Aviation Administration (FAA) has already made it mandatory to register recreational drones, with payment of a $5 registration fee.

Regulation can certainly be part of the solution. By late 2019 at least 18 countries had introduced regulations to cover the operation of drones (Wikipedia, 2019). In the USA, for example, the FAA places restrictions on where it is legal to fly a drone.

“Anyone flying a drone is responsible for flying within FAA guidelines and regulations. That means it is up to you as a drone pilot to know the Rules of the Sky, and where it is and is not safe to fly.” (FAA, 2019)

But keeping within regulatory guidelines will not be sufficient to prevent drone accidents, because the technical systems responsible for operating drones will be susceptible to various software-related risks.

“Software failure to perform as intended could cause a drone to crash resulting in personal injury or physical damage. Software can also fail by becoming corrupt, losing connectivity, or “crashing”, sometimes wiping or rendering itself unusable. The software could mistakenly send legally protected data to an unintended recipient resulting in breach notification liability. Software data in transit could be hacked by a criminal who steals the data, or the drone itself could be hacked and taken over remotely. Hackers could also gain access to video, camera, or other sensor feeds and use the information gained to commit other crimes. Owners and operators of drones will need coverage for these risks as well.” (Beyer et al., 2014)

Although regulation can be part of the solution for dealing with such risks, it cannot provide the whole answer – some form of insurance is also essential. In the early days of the commercial drone industry, the essential nature of insurance for commercial drone applications was summed up by Darryl Jenkins, an analyst for the Aviation Consulting Group, as follows:

“While FAA integration is a sufficient event, insurability is a necessary event before businesses can successfully use UAS [unmanned aerial systems] in the National Airspace System because no business is going to want to be on the line for the liability concerns…Insurability will determine which sectors of the [drone] market will grow and which will die”. (Jenkins, 2013)

5. Insurance

Insurance for robots is a complex matter. Andrea Bertolini (2016) characterizes the issue thus:

“Insurance companies face too complex a challenge in precisely assessing the risks associated with the production, use and diffusion of robots of various kinds. In particular, the complexity and novelty of robots causes the identification of the damages they may bring about in a real life environment to be extremely complex, diversified and hard to foretell, and hence manage. The same kind of technical malfunctionings may indeed determine very different outcomes once the device is used in different and ex ante[11] unrestrained environments. The complexity and to some extent opacity – if not inadequateness – of the legal framework further adds upon such considerations, causing the assessment of the risk pertaining to each party involved (be it the producer or user) to be even more complex. Indeed, in some cases, it is not even clear which party may be held liable, hence ultimately who should have an interest to acquire insurance coverage.

Overall, this may result either in the (i) refusal to insure some kinds of robotic devices, or (ii) the use of existing [insurance] contracts, which however may prove inadequate, or (iii) the charging of excessively high premiums, ultimately delaying the diffusion of robots as well as impairing the proliferation of a supply side of the economy (industry for the production of robots).” (Bertolini, 2016)

In considering the risks created by autonomous robots we are informed by a risk assessment of autonomous vehicles (Yeomans, 2014) carried out for Lloyd’s of London, the world’s largest insurance market. The purpose of that assessment was to understand the implications of new technologies for the insurance industry.

“The vast majority of accidents are caused by human error, and in theory by replacing human input with well programmed computers, the risks of driving could be substantially reduced. With less reliance on a human driver’s input, however, increased risk would be associated with the car technology itself. Computers can do many things that a human driver cannot: they can see in fog and the dark, and are not susceptible to fatigue or distraction. However, they can also fail, and systems are only as good as their designers and programmers. With an increased complexity of hardware and software used in cars, there will also be more that can go wrong.” (Yeomans, 2014)

One of the risks that the Lloyd’s assessment identifies for autonomous cars is “Cyber Risk” – the malicious interference by hackers with a car’s behavior.

“To address cyber risks, high standards of system resilience, such as robust data encryption, will need to be engineered. Consideration will probably also have to be given to how networking with other cars, infrastructure, and personal computers such as smartphones could impact the cyber security of a car.”

“Cars are likely to deal in large amounts of data, and may be increasingly connected with existing technology such as smartphones and tablets. Through connectivity with other personal digital technology, as well as other cars and infrastructure, there could be potential for unwanted third parties to access data. Although cars already have computerised units, at present they tend to be isolated, not networked, and therefore at less risk. As cars become more connected, it could be possible for hackers to access personal data, such as typical journeys, or where a person is at a particular time, which could for example allow a burglar to know when a householder is not at home. It is also feasible that driving could be maliciously interfered with, causing a physical danger to passengers. There is potential for cyber terrorism too – for example, a largescale immobilisation of cars on public roads could throw a country into chaos.” (Yeomans, 2014)

The Lloyd’s assessment notes a similar concern regarding the vulnerability of drones to malicious cyber interference, and similar concerns will also apply to any other robots that could be maliciously hacked to act against the wellbeing of their owner/operator and/or third parties.

A further risk for drones is the loss of a data link between a drone and its controller. Commercially available drones are usually operated remotely, from the ground. This requires a resilient two-way data communications link between the drone and the operator.

“There is a risk that if this link was lost or disrupted, the aircraft would be out of control. To mitigate this risk, attention needs to be paid to the security and strength of data connections, and backup options should be considered in case the connection fails. One precaution that exists is for the aircraft to return to its take-off point if it loses signal, using systems such as Global Positioning Systems (GPS). Adequate sense and avoid capabilities would also mitigate the risk of an aircraft losing contact with its operator, as it should behave autonomously in choosing where it travels in order to avoid a collision.” (Yeomans, 2014)
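The return-to-home failsafe that Yeomans describes can be made concrete with a schematic sketch; the timeout value, the method names and the stubbed autopilot below are all invented for illustration and do not correspond to any real autopilot software:

    # Schematic lost-link failsafe for a drone; the timeout is invented.
    import time

    LINK_TIMEOUT_S = 5.0  # assumed silence before the link is declared lost

    class AutopilotStub:
        """Hypothetical autopilot interface, stubbed for this sketch."""
        def navigate_to(self, position, avoid_obstacles):
            print(f"Returning to {position} (sense-and-avoid: {avoid_obstacles})")

    class LostLinkFailsafe:
        def __init__(self, home_position):
            self.home = home_position          # take-off point, recorded via GPS
            self.last_heartbeat = time.time()

        def on_heartbeat(self):
            """Called whenever a packet arrives from the ground operator."""
            self.last_heartbeat = time.time()

        def tick(self, autopilot):
            """Called periodically by the flight control loop."""
            if time.time() - self.last_heartbeat > LINK_TIMEOUT_S:
                # Link lost: autonomously return to the take-off point,
                # relying on sense-and-avoid to prevent collisions en route.
                autopilot.navigate_to(self.home, avoid_obstacles=True)

    failsafe = LostLinkFailsafe(home_position=(51.5074, -0.1278))
    failsafe.last_heartbeat -= 10  # simulate ten silent seconds
    failsafe.tick(AutopilotStub())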

In much of the extensive literature on the question of who is (or should be) held legally liable when a robot causes damage or injury, we can observe that, in many jurisdictions, product liability rules make the robot’s manufacturer liable for whatever damage is caused, even though the robot’s user or a third party might be at fault. The reasoning is that the manufacturer is perceived as being the party best suited to minimize the risks created by the robot, and it is usually the party best able to meet the financial consequences of a robot accident and to deal with its sub-contractors such as software developers whose code might have caused the accident. Incidentally, there is a school of thought that making the manufacturer liable will encourage safer robot designs and manufacturing processes (Thierer & Hagemann, 2015). Whatever the rationale, it is clear that holding the manufacturer liable might not provide sufficient protection for the user and third parties who come into contact with the robot, because a manufacturer might not have sufficient financial resources to properly compensate accident victims. As a result, and irrespective of who is liable, users and third parties need the protection of adequate insurance policies, backed by a compensation fund. A compensation fund alone is not likely to be sufficient because it would be responsible for settling the ultimate liabilities resulting from all robot accidents within its jurisdiction – a considerable financial strain on the fund.

A European Union parliament resolution of February 16th 2017 (EU, 2017) considers that a compensation fund would “ensure that reparation can be made for damage in cases where no insurance cover exists.”[12] But such a fund “should only be a means of last resort and should only apply in cases of problems with insurances” as well as users having no insurance policy (van Rossum, 2018).

That same EU resolution called on the European Commission to consider establishing a compulsory insurance scheme whereby manufacturers or owners of robots would be required to take out insurance cover for the damage potentially caused by their robots. Hitherto the EU liability system contained gaps in respect of: (a) the liability of autonomous systems (including driverless cars); (b) the lack of tortious liability of the user of an autonomous system; (c) the lack of a tortious liability and only a selective strict liability of the operator of an autonomous system; and (d) no liability at all for the manufacturer for the faulty behavior of an autonomous robot (Borges, 2018).

In its analysis of how liability rules and insurance should be made appropriate for driverless cars, the EU concluded that:

“The new social and technological reality of connected and autonomous vehicles calls for the development of an EU-wide insurance remedy. Given the rapid developments in the market for autonomous vehicles, a special no-fault insurance, complementary to the injured party’s entitlement to social security benefits, replacing civil liability claims for damages, may provide for a flexible and satisfactory solution. Both in terms of legal certainty, its potentially wide scope of protection and efficient claim handling, it appears to outweigh the adversarial options under the PLD[13] and the national traffic liability laws.” (Evas, 2018)

The proposal in the EU’s resolution is based primarily on four of its conclusions, all of which apply equally to robots in general:

  • 1. “No-fault insurance could be taken out by the owner or operator of the autonomous vehicle or by its producer, but it is characterised by the fact that it does not rely on any liability.

  • 2. Its value is not so much that the risk that persons suffer damage caused by the motor vehicle is directly insured, as it is that it does not present an adversarial compensation scheme (as third party liability insurance does).

  • 3. The statutory obligation to take out the no-fault insurance for autonomous vehicles would befall the owner/operator, the producer (and/or the software producer) or both. As far as the premium payments are concerned, a variable part of the premiums could be paid by individual owners/operators, inter alia based on the individual’s annual mileage, his type of car and age, and a fixed part (albeit with risk differentiation) to be paid by the industry.

  • 4. This could take the form of a compulsory private insurance, for the part of the damage that is not covered by social security, thus complementing the injured party’s entitlement to social security benefits.” (Evas, 2018)

No-fault insurance could, in principle, be taken out by the owner or operator of a robot, or by the robot’s manufacturer. A significant disadvantage of obliging the manufacturer to insure is that they will not know for how long the owner or operator will be using the robot, nor when it is abandoned completely or destroyed. An owner will only need to be insured during periods when their robot might be used, and they may therefore suspend the payment of their insurance premiums if they do not plan to operate their robot for an extended period.[14] It therefore seems appropriate for robot insurance to be the responsibility of the owner, as with motor insurance, and I advocate that the first premium should be included with the robot’s purchase price, in order to guarantee that the robot is insured when first used. The EU’s official publication on its approach to liability rules and insurance (Evas, 2018) suggests that a variable part of the premiums for automated vehicles could be paid by the individual owners/operators, and a fixed part “(albeit with risk differentiation) to be paid by the industry.”

5.1. No-fault insurance

No-fault insurance has its origins in 1960s America. By then the number of serious automobile accidents had become the source of a litigation explosion that was ‘straining (and in some cases overwhelming) the judicial machinery’ (Marryott, 1966). In 1964 an article in the Harvard Law Review proposed a plan mandating all automobile owners to purchase a new form of insurance which the authors called “basic protection coverage”, for which an accident victim did not need to prove fault (Keeton & O’Connell, 1964). Their initiative was first introduced in law, in a modified form, in Massachusetts in 1970, opening the way for the adoption of what is now known as no-fault insurance.

Engelhard & de Bruin (2018) are amongst those who believe that a no-fault insurance scheme will be the most effective way of addressing the risks of autonomous driving. They are attracted to no-fault insurance because it can be designed to cover all risks. Different flavours of no-fault insurance are possible, for example pay-as-you-drive schemes (see footnote 14). The main benefits of no-fault insurance are that it does not rely on being able to prove culpability for an accident, and that it prohibits subsequent tort litigation by the claimant, thereby reducing the costs of a claim (other than the amount of compensation itself) to the administrative costs of the insurer. And without litigation there will be no scope for juries or the courts to award gross amounts in punitive damages.

Thus far no-fault insurance has not been hugely popular with motorists, but I join Eastman in believing that this will change.

“Complexities and costs involved with determining who should be responsible for damages may cause regulators, insurance companies, and consumers to reconsider the benefits of no-fault automobile insurance. Revised no-fault automobile insurance can provide fair compensation while keeping uncertainty about liability from deterring the advancement and implementation of autonomous vehicle technologies.”

“The purpose of no-fault auto insurance is to reduce (or eliminate) the amount of money going to administrative and legal fees spent in the tort liability system to determine who is at fault in an accident. Instead, those dollars can be used to pay for actual damages incurred in automobile accidents, resulting in more equitable compensation for economic losses paid, and with compensation paid in a more timely manner. Two elements must be present in no-fault automobile regimes: (1) payment of no-fault first party benefits (called personal injury protection or PIP), and (2) restrictions on the right to sue, or limited tort options.” (Eastman, 2016)

5.2. Arguments against mandatory no-fault insurance

I believe that intelligent no-fault insurance should be mandatory for robot owners in order to ensure their ability to compensate victims of accidents involving their robots. But making no-fault insurance mandatory is not seen by all commentators as a universal panacea for compensating robot accidents. Omri Rachum-Twaig (2020) believes that mandatory no-fault insurance “does not adequately resolve the challenges of AI-based robots…and is not free of concerns, some of which undermine the entire concept.”

Rachum-Twaig’s first criticism of no-fault regimes at large is that “while no-fault regimes may be more efficient due to their savings in administrative costs and judicial errors, they may increase the number of accidents due to lack of deterrence.” He explains that:

“Several commentators have attempted to prove or disprove this theoretical assumption. In the context of AI-based robots, it is questionable whether this is even a factor. Given that robots are not given legal personhood as of now and that their behavior is, at least to some extent, unforeseeable to their designers, operators, and users, it is questionable whether the concept of deterrence is even relevant. But if we do assume, or at least aspire, that our liability concepts will have some ex-ante effects on behavior, whether with respect to human stakeholders or to the robots themselves, we must take into consideration the effect no-fault regimes may have on such deterrence.”

As Rachum-Twaig admits, the lack-of-deterrence assumption is just that, an assumption – there is no empirical evidence for it.

His second objection, which he regards as being more important than the first, is that:

“since a mandatory insurance scheme is intended to fully supplant the general tort system, it requires a hermetic and wide adoption by all stakeholders. This was politically difficult to achieve in a non-AI-robots world. It is no surprise that such models were adopted in the automotive context which is, by nature, relatively local and physically bound. But this becomes even more difficult in the context of robotics. The physical-digital nature of many robots makes it almost impossible to have a single rule that will apply to all stakeholders. The manufacturer of the robot could be American, the operator British, and the end user Japanese. For a perfect mandatory insurance to be feasible, all relevant jurisdictions must adopt it, a task that seems politically impossible. This is not to say that a mandatory insurance model could never work for risks associated with AI-based robots. We can definitely consider such a model in the context of autonomous vehicles, which are quite similar to regular cars in this context. It could also be feasible in the context of medical robots, at least in jurisdictions that have mandatory health insurance or collective health benefits. But my point here is that the lack of physical borders in many circumstances related to AI-based robots and the political impracticability of adopting global or cross-jurisdictional mechanisms make the no-fault model irrelevant as a general solution to the shortcomings of tort law doctrines in this context.”

I disagree with this point on the basis that the functioning of the EU has shown that cross-jurisdictional mechanisms do work.

A third criticism from Rachum-Twaig is that:

“even if we believe that a no-fault model could apply to all or some risks related to AI-based robots, the questions of cost allocation and premium estimation become much more difficult to address in the context of AI-based robots. As explained above, the multi-stakeholder feature of AI-based robots raises the question of who should pay for such insurance costs. In the current case of automotives, drivers (and passengers) are largely both the tortfeasors[15] and victims of car accidents. It is therefore relatively easy to determine that drivers should have a duty to purchase mandatory insurance policies covering such risks, assuming that there is a balanced cross-subsidy between drivers. But when AI-robots are governed by their designers, operators, and users, the question of who pays insurance premiums becomes more complex. In fact, it is expected that most accidents involving autonomous vehicles will result from human behavior (not necessarily of the driver, rather often of pedestrians)”.

Rachum-Twaig concludes that mandatory insurance for drivers or manufacturers of robot cars will impose liability upon and deter the wrong tortfeasors.

“If that is not enough, the augmented harms that characterize AI-based robots, as well as the un-foreseeability problem, are destined to make the task of determining insurance premiums almost impossible.”

But the scheme which I propose in Section 6 includes an artificial intelligence approach for calculating insurance premiums, which is one reason why I predict that mandatory no-fault insurance against robot accidents, employing AI learning techniques for estimating risks and setting premiums, will become the norm within a decade at most.
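As a sketch of how such learning techniques might work (the features, training data, model choice and loading factor below are illustrative assumptions on my part, not a specification), an insurer could fit a regression model to historical robot accident data and treat the predicted expected claim cost as the basis of a premium:

    # Illustrative premium estimation from historical accident data.
    # Features, data, model choice and loading are assumed for this sketch.
    from sklearn.ensemble import GradientBoostingRegressor

    # Each row: [risk_class, robot_age_years, annual_usage_hours];
    # target: total claim cost over the policy year.
    X_history = [[1, 0, 100], [3, 2, 900], [2, 1, 400], [3, 4, 1200], [1, 1, 150]]
    claim_costs = [0.0, 4200.0, 300.0, 9800.0, 0.0]

    model = GradientBoostingRegressor().fit(X_history, claim_costs)

    def quote_premium(risk_class, age_years, usage_hours, loading=1.25):
        """Premium = predicted expected claim cost plus an insurer's loading."""
        expected_cost = model.predict([[risk_class, age_years, usage_hours]])[0]
        return max(0.0, expected_cost) * loading

    print(quote_premium(risk_class=2, age_years=1, usage_hours=500))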

5.3. Usage-based insurance

There is a category of auto insurance schemes based on usage: both the quality of driving – how the vehicle is used – and the periods for which the vehicle is driven – the amount of usage (Baumann et al., 2019). Such systems depend on the existence of certain technology in the vehicle – technology that monitors the “how” and the “how long”, and can transmit this data to a central registry or an insurer. The data is useful for various purposes: it can contribute to the insurer’s calculations of the insurance premiums, and it can alert the insurer and accident authorities to the location and technical circumstances of an accident. The idea of employing technology to monitor driver behaviour is far from new – the first tachographs for cars were introduced in the 1920s, and in 1970 they became mandatory for buses and trucks in EEC member states.

The data created by technology within a robot car could also be employed to help detect cases of product liability, so that manufacturers can be held culpable and accountable for an accident, thereby giving insurance companies the right to seek compensation from manufacturers irrespective of the owner’s insurance policy. “Such systems are also attractive from a driver’s perspective, because it is likely to reduce accidents caused by poor driving behaviour (for example speeding) and offer financial incentives – reduced premiums – for good driving behaviour.” (Baumann et al., 2019).
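A minimal sketch of such a usage-based premium calculation follows; the base rate, the per-kilometre rate and the behavioural surcharges are invented for illustration:

    # Illustrative usage-based premium; all rates and weights are invented.
    from dataclasses import dataclass

    @dataclass
    class UsageReport:
        """Telemetry summary transmitted by the vehicle to the insurer."""
        km_driven: float            # the "how much" component
        speeding_events: int        # the "how" components
        harsh_braking_events: int

    BASE_MONTHLY_PREMIUM = 80.0
    PER_KM_RATE = 0.02
    SPEEDING_SURCHARGE = 1.50
    BRAKING_SURCHARGE = 0.75

    def monthly_premium(report: UsageReport) -> float:
        usage_part = report.km_driven * PER_KM_RATE
        behaviour_part = (report.speeding_events * SPEEDING_SURCHARGE
                          + report.harsh_braking_events * BRAKING_SURCHARGE)
        return BASE_MONTHLY_PREMIUM + usage_part + behaviour_part

    # A careful driver pays close to the base rate; poor driving behaviour
    # raises the premium, mirroring the financial incentive described above.
    print(monthly_premium(UsageReport(km_driven=600, speeding_events=0,
                                      harsh_braking_events=1)))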

5.4. Insurance funds

As a possible alternative to using an insurance fund as a backup for no-fault policies, Carrie Schroll (2015) has suggested using such a fund to cover the entire insurance payout, based on a no-fault system. She discusses the potential parties who could be held liable for the payouts for self-driving cars involved in auto accidents: the car drivers, car-sharing companies, and automobile manufacturers. She suggests the complete elimination of liability for any accidents involving self-driving cars, and recommends the creation of a national insurance fund to pay for all damages resulting from those accidents.

Schroll believes that a large-scale insurance fund paid for by all parties could be operated by a government-run agency.

“The Fund would work similarly to how Federal Insurance Contributions Act (FICA) contributions to Social Security and Medicare operate now. FICA contributions are removed from employee paychecks each month and placed into trust funds for both Medicare and Social Security. Those who qualify for Social Security or Medicare benefits can then apply to receive payments from the funds. Many employees will never use the money, but it is there for everyone who needs it. A governmental agency is in charge of both processing the claims for benefits and paying the appropriate amount to claimants.”

The insurance fund Schroll proposes would operate in the same way.

“Riders, car-sharing companies, and manufacturers would all contribute through taxes and in proportion to how much they benefit from the use of autonomous vehicles.”

The money would be stored in a trust fund and overseen by a government department. Anyone who suffers damages from an autonomous vehicle accident would file a claim with the relevant department, which would examine the claim and make appropriate payments to accident victims.

“Initially, all manufacturers and car-sharing companies would pay at the same rate per car. However, if a particular manufacturer’s or company’s cars are involved in accidents more frequently than the average rate, their tax rates would be increased. This would appropriately incentivize manufacturers to increase their products’ safety, and car-sharing companies to purchase the safest cars. It would also ensure that those companies whose vehicles had the most accidents, and therefore used the fund most often, paid more into the fund.”
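Schroll does not specify a formula for that adjustment; a minimal sketch of one possible proportional rule (the base rate and the scaling are my own assumptions) is:

    # Illustrative contribution-rate adjustment for Schroll's fund.
    # Schroll gives no formula; this proportional scaling is assumed.
    BASE_RATE_PER_CAR = 50.0  # assumed flat annual contribution per car

    def adjusted_rate(company_accidents, company_cars, avg_accident_rate):
        """Scale a company's per-car rate by its accident rate vs the average."""
        company_rate = company_accidents / company_cars
        if company_rate <= avg_accident_rate:
            return BASE_RATE_PER_CAR
        # Companies whose cars crash more often than average pay
        # proportionally more, incentivising safer products.
        return BASE_RATE_PER_CAR * (company_rate / avg_accident_rate)

    # A fleet with double the average accident rate pays double per car.
    print(adjusted_rate(company_accidents=20, company_cars=1000,
                        avg_accident_rate=0.01))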

The administering government department would not only examine the claims and pay damages based on the information provided by claimants. It would also regularly update the insurance adjustors’ data employed to determine how much certain injuries cost. That data could contribute to the learning process in a robot car.

“An appeals process would be in place, where claimants can send additional information or have an administrative hearing if they feel that the agency did not award the proper amount. Because no one is truly “at fault” for the accident, the agency would follow the lead of no-fault systems and only award compensatory damages rather than punitive damages.” (Schroll, 2015)

The principal advantage of such a fund would be that, by banning litigation and using the fund as the sole method of compensating accident victims, injured parties will avoid the exorbitant costs of litigation, and legal systems would no longer create the possibility of a judge or jury awarding a victim an excessive amount in punitive damages, which is not uncommon in the USA. One important disadvantage would be that all successful compensation claims would need to be satisfied by the fund, putting it under huge and possibly fatal financial strain.

An all-embracing variation on Schroll’s proposal is a universal social insurance scheme and fund, recently suggested by Jin Yoshikawa (2019), based on the accident compensation scheme that has been in operation in New Zealand since 1972.[16] His proposal is for…

“…a social insurance scheme that covers all personal injuries regardless of fault and whether AI was involved. The proposed solution properly balances the public’s interest in receiving AI’s benefits as soon as possible with victims’ interests in just compensation. Going forward, lawmakers will also need to consider appropriate responses to intangible injuries, such as economic injuries, emotional and psychological injuries, improper discriminations, and breaches of privacy. A complete and effective social welfare system will maintain public confidence in the development of AI and support the continuing growth of AI industries.” (Yoshikawa, 2019)

The New Zealand scheme on which Yoshikawa’s concept is based has three essential components. It covers all personal injuries arising from accidents, no matter who was at fault. It abolishes the accident victim’s right to bring tort claims for injuries.[17] And it is financed from general national tax revenues and from taxes on certain activities. “This scheme has now been in place for nearly five decades and continues to enjoy great public support.” (Yoshikawa, 2019). Interestingly, personal injuries caused as a result of the behaviour of an artificial intelligence are already covered in New Zealand.

In identifying the potential disadvantages of his proposed scheme, Yoshikawa points to the elimination of the tort regime for personal injuries as depriving accident victims of the right to retribution and social justice. My response to this argument is that pragmatic considerations should outweigh the desire of some accident victims to have their day in court. Given that, 50 or so years ago, there was concern in the USA that the number of automobile accidents was “the source of a litigation explosion that was ‘straining and in some cases overwhelming the judicial machinery’,”[18] such concerns must already exist as a result of the added technical complexities inherent in robot accidents. And we are only just beginning to see the tip of that particular legal iceberg.

Another potential threat to the widespread adoption of a New Zealand-like scheme for robot insurance, identified by Yoshikawa, arises from the possibility that the proliferation of AI could increase injury rates and make such an insurance scheme “unsustainable”. Against this fear I would argue that the inevitable increase in AI- and robot-related accidents demands insurance; the big question is deciding which type of insurance is best suited to such accidents, not whether insurance is the best solution.

6. An intelligent insurance system for robots

Bertolini does not believe that mandatory insurance is a sufficient solution. He suggests that “an effort is also required both with respect to technological development and legal assessment”, and that “all such efforts would help provide the necessary conditions for the development of specific insurance products for robotic devices, at once allowing the proliferation of a new market for risk management products in technologically advanced sectors, as well as the establishment of a sound and competitive robotic industry.” (Bertolini, 2016).

My contention is that, irrespective of any such efforts, mandatory no-fault insurance is an essential requirement if robot accident victims are to be adequately compensated for their misfortunes. Efforts to perfect technologies and to establish improved legal frameworks are all well and good, but what accident victims most need and deserve is compensation that can be banked.

The insurance scheme described below was inspired by a paper in the Nordic Journal of Commercial Law, by Anniina Huttunen and her colleagues in Finland and Germany (Huttunen et al., 2010). To the best of my knowledge Huttunen et al. were the first to propose and expound upon the desirability of insuring against robot accidents. They summarize this need/desirability thus:

“The development and use of intelligent machines faces tremendous challenges in current legal systems. Technological development is stifled by liability risks. Currently, the manufacturer or operator is held liable depending on the circumstances. Due to both the technological limitations for perfectly functioning machines and the unpredictable cognitive element, intelligent machines are not perfect and it is almost guaranteed that there will be failures causing harm. However, this is not an excuse not to aim for failure-free operation. Instead, the inevitable failures should be managed so that present economical or legal issues do not hinder the potential human development and prosperity enabled through the adoption of new technologies.”

“We propose a new kind of legal approach, i.e. the ultimate insurance framework, to solve the related legal and economical difficulties in order to support the technological pursuits. In the insurance framework, a machine can become an ultimate machine by emancipating itself from its manufacturer/owner/operator. This can be achieved by creating a legal framework around this ultimate machine that in itself has economical value. The first step in creation of a legal framework around the machine is to make applying for the insurance mandatory.” (Huttunen et al., 2010)

My 2012 proposal went even further than Huttunen et al., by advocating:

“…an electronically-driven monitoring process for law enforcement agencies, whereby they can check if a robot is adequately insured. If the detection method discovers that adequate insurance is not in place the robot will be temporarily disabled by an electronic message transmitted remotely and automatically, by the authorities, to its black box, so that the robot will not function, beyond announcing to its user that its insurance cover is inadequate or has expired. The monitoring system should not only disable uninsured robots, it should also report to the authorities any attempt to bypass the monitoring technology.

My proposal is that, when a robot is sold by its manufacturer, an appropriate insurance premium should be levied for its first year of operation. That premium is included in the robot’s selling price and until it is paid the robot will not function. One month or so before the paid insurance premium expires the robot’s owner would receive a warning message, from the insurer, that its insurance needed to be renewed. When the renewal payment is received by the insurer an electronic message is sent to the robot’s black box, instructing it to permit the continuing operation of the robot. But if the owner fails to renew the insurance by the due date the robot will cease to function, instead announcing “Please renew my insurance before you try to use me again.”
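The enable/disable logic in that proposal can be sketched as a small state machine in the robot’s black box. The thirty-day warning window, the message wording and the class interface below are illustrative assumptions, not part of the proposal’s specification:

    # Illustrative black-box insurance check; timings and wording assumed.
    import datetime as dt

    WARNING_WINDOW = dt.timedelta(days=30)  # "one month or so before" expiry

    class InsuranceBlackBox:
        def __init__(self, expiry_date: dt.date):
            self.expiry = expiry_date

        def on_renewal_confirmed(self, new_expiry: dt.date):
            """Insurer's electronic message after a renewal payment arrives."""
            self.expiry = new_expiry

        def status(self, today: dt.date) -> str:
            if today > self.expiry:
                # The robot ceases to function, beyond announcing the problem.
                return "DISABLED: Please renew my insurance before you try to use me again."
            if self.expiry - today <= WARNING_WINDOW:
                return "WARNING: insurance renewal is due soon."
            return "ACTIVE"

    box = InsuranceBlackBox(expiry_date=dt.date(2025, 6, 1))
    print(box.status(dt.date(2025, 5, 15)))   # -> WARNING
    box.on_renewal_confirmed(dt.date(2026, 6, 1))
    print(box.status(dt.date(2025, 5, 15)))   # -> ACTIVE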

Fundamental to my newly extended proposal of no-fault mandatory insurance is the need for a system of classification of robots according to their risk potential, as well as a system of registration (Borges, 2018). The classification of different types of robot will provide insurers with one of the factors to be taken into consideration when estimating risks and setting premiums. The registration of a robot would ensure the existence of a link between the robot’s unique registration number and its insurance policy and backup compensation fund.

Ryan Calo (2011) makes a complementary point, linking the type of robot to an appropriate level of insurance payout.

“The level of insurance should depend on the nature of the robot being insured. Many robots – for instance, small robots used primarily for entertainment – would only need to be insured minimally, if at all. Larger robots with more autonomous functioning – for instance, security robots that patrol a parking lot – would require greater coverage.”

*****

I submit that the rationale behind my 2012 proposal is still valid today, and can serve as the basis for my additional suggestions later in this section of the paper.

“One of the beneficial effects of employing a wealth of historical robot accident data as a guide to setting insurance premiums, is that robot developers and manufacturers will feel the effects on their corporate bottom lines if their products are more accident prone than the average. The robot insurance system I envisage will be self-regulating for the robot industry, as makes and models that are accident-prone will rapidly attract higher insurance premiums, thereby pushing up their retail costs and encouraging consumers to purchase products with better safety records. Similarly, robot owners will have a financial incentive to take care in how they use their robots, since they could suffer higher premiums through the loss of their no-claims bonuses.

In order to enforce mandatory robot insurance requirements on the owner of a robot, there will need to be heavy legal sanctions against owners and/or operators of robots who are not adequately insured or who are not insured at all, but who have somehow managed to bypass the disabling technology in their robot’s black box. If a clever computer expert manages to hack into the robot’s insurance renewal software and enable an uninsured robot to be used, they will be committing a criminal offence in the same way that car drivers in most jurisdictions are committing an offence when they drive without adequate insurance cover. The robot’s black box, while monitoring the status of the owner’s insurance policy, could transmit an alert message to the appropriate regulatory authorities, advising that an uninsured robot is in use, or an uninsured modification to an already insured robot, together with the owner’s name and contact details. The automatic generation of penalty notices for certain motoring offences is already commonplace, so a similar scheme for uninsured robots seems eminently reasonable, and in the most serious cases a message transmitted to the police to assist them in identifying and tracking down errant robot owners.

In the case of motor insurance, in many jurisdictions there are some types of vehicle that are deemed not to be motor vehicles for the purposes of insurance requirements. In the UK this includes lawnmowers, and some electrically assisted pedal bikes. These exclusions provide a model that can readily be extended to robot insurance. Some toy robots designed for children, for example, will not need to be insured because the risk of serious injury or death being caused by such a toy is very slight, and warning labels can easily be affixed to the toy to cover such possibilities as swallowing removable parts.

One challenge to a system such as I propose is presented by consumers who, after their initial purchase, subsequently buy hardware or software add-ons that affect the risks posed by their robot. In order to cater for these possible dangers a robot’s software could be programmed to detect any such add-ons and send an electronic message to the insurer’s “actuary” – a piece of software rather than a human actuary – which would then calculate the appropriate change in premium, and the owner would be informed accordingly. Even more dangerous could be robot enthusiasts who possess the technical knowhow to modify their robots in all sorts of ways, again creating the possibility of an increased risk, by turning an innocuous robot into a dangerous one. In such cases the robot’s owner would be compelled by its black box to self-authenticate by filling out an electronic form with details of the modification, in order that an appropriate increase in insurance premium could be calculated. An owner who is found to have given false information regarding their modification would be liable to prosecution in the same way as a motorist who gives false information on the application form for a motor insurance policy.” (Levy, 2012)
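The add-on reporting loop described in the passage above can likewise be sketched in a few lines. Everything in the fragment below – the add-on classes, the surcharge figures and the function name – is hypothetical, intended only to show the shape of such a mechanism: detection of an add-on triggers a message to the insurer’s pricing software, which returns an adjusted premium.

```python
# Hypothetical sketch of add-on detection and premium adjustment.
# The add-on classes and surcharge figures are invented for illustration.
ADD_ON_SURCHARGES = {
    "extended_gripper": 0.10,      # +10% premium loading
    "speed_upgrade": 0.25,
    "third_party_ai_module": 0.40,
}

def report_add_on(robot_id: str, add_on: str, base_premium: float) -> float:
    """Notify the insurer's pricing software of an add-on; return the new premium."""
    loading = ADD_ON_SURCHARGES.get(add_on, 0.50)  # unknown add-ons priced cautiously
    new_premium = base_premium * (1 + loading)
    print(f"Robot {robot_id}: add-on '{add_on}' reported; "
          f"premium adjusted from {base_premium:.2f} to {new_premium:.2f}")
    return new_premium
```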

6.1.Assessing insurance risks – The underwriters

In 2013 Carl Benedikt Frey and Michael Osborne, in an article entitled “The Future of Employment: How Susceptible Are Jobs to Computerization?”, predicted that insurance underwriters were at a 99% risk of being replaced by machines.19 Now, at the start of 2020, it cannot be very much longer before an artificial intelligence, an artificial actuary, is able to source and crunch the relevant data and provide an insurance company with the required risk data and premium calculations, tasks currently carried out by human actuaries. What has changed since 2012 to bring this closer?

In the past few years Artificial Intelligence has come of age, thanks largely to dramatic developments in the field of machine learning. As a result AI will have a huge impact on the financial sector in general, as well as on its workers, because of the enormous power created by these learning techniques. Today’s machine learning methods can process huge volumes of data, discover relationships and patterns, and then use those relationships and patterns to make accurate predictions. Dramatic falls in the cost of computer memory and massive increases in computer processing speeds have enabled learning techniques which a few years ago would have been impractical, if not impossible.

Machine learning methods such as Deep Learning are increasingly being used by insurance companies to analyse data and develop risk prediction models (Sennaar, 2017), and will lead to greater accuracy in calculating premiums as more types of data and more variables are taken into account. Such learning methods are employed not only for setting insurance premiums but also for stock market prediction, mortgage assessment, and in various other divisions of the financial sector.20
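As a concrete, if highly simplified, illustration of this kind of risk model, the sketch below fits a gradient-boosted classifier to synthetic robot accident records using the scikit-learn library. The four features and the data-generating rule are invented for the example; a real insurer would train on genuine policy and claims data.

```python
# Minimal sketch of a machine-learned accident-risk model (scikit-learn).
# Features and data are synthetic, invented purely for illustration.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.integers(0, 5, n),    # robot category (0 = toy ... 4 = industrial)
    rng.uniform(0, 20, n),    # maximum speed (km/h)
    rng.integers(0, 2, n),    # operates in public spaces? (0/1)
    rng.uniform(0, 10, n),    # operator's years of experience
])
# Synthetic ground truth: faster, public-facing robots have more accidents.
p = 1 / (1 + np.exp(-(0.15 * X[:, 1] + 1.2 * X[:, 2] - 0.2 * X[:, 3] - 2)))
y = rng.random(n) < p

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print("Held-out accuracy:", model.score(X_test, y_test))
print("Predicted accident probability for one robot:",
      model.predict_proba(X_test[:1])[0, 1])
```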

The value of machine learning to the actuarial profession is already widely accepted amongst its members; see, for example, Pietro Parodi (2016a) and Talitha Kirchner (2018). Parodi extols the virtues of machine learning for automating many financial processes, including the selection and rating of risk factors in the insurance industry. In order to price an insurance policy the actuarial process must first identify those risk factors which determine just how risky a policy is likely to be for the insurer. An insurer that employs a sub-optimal set of risk factors, for example by using too few of them, will soon find itself undercharging on premiums and therefore attracting the worst risks (Parodi, 2016b). So identifying and rating the various possible risk factors is a crucial step in developing a sound system for pricing insurance premiums.

Factor identification is a task at which machine learning now excels, as is the rating of each factor’s importance.21 The chess-playing program AlphaZero, developed by Demis Hassabis, David Silver and their team at Google DeepMind, was taught the rules of the game but given absolutely no knowledge of the principles involved in playing it well – the factors that could enable it to evaluate positions accurately and to select good moves (Silver et al., 2017). During its development AlphaZero was set to play against itself, starting as a chess ignoramus, and within 24 hours it had played millions of games and identified for itself, by means of reinforcement learning techniques, the factors that are important in playing good chess. And it had rated those factors so accurately that it had become, overnight, the strongest chess player the world had ever seen, human or computer. AlphaZero had created its own form of chess knowledge – it had worked out for itself which factors in chess are important and the relative importance of each. When such techniques and other methods of machine learning are applied in the actuarial world, they will enable insurance software to create its own knowledge, supplementing whatever factors have been contributed by its human masters. The software will be able to identify what factors are important in pricing premiums, including robot insurance premiums.
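Continuing the synthetic sketch above, the fitted model can itself rate the relative importance of the candidate risk factors, which is precisely the actuarial task Parodi describes; the factor names are, again, illustrative only.

```python
# Continuing the sketch above: rank the candidate risk factors by the
# importance the fitted model assigns to them.
factor_names = ["robot category", "maximum speed",
                "operates in public", "operator experience"]
ranked = sorted(zip(factor_names, model.feature_importances_),
                key=lambda pair: -pair[1])
for name, score in ranked:
    print(f"{name:20s} {score:.3f}")
```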

Given that reinforcement learning, deep learning, and other weapons in the machine learning arsenal are readily available to actuaries, and new or improved learning strategies are appearing in the literature on a weekly, almost daily basis, the actuarial profession already has at its disposal the methods needed to bring my proposed robot insurance scheme to fruition. But how can current machine learning methods be employed to enhance my proposal of 2012?

“Huttunen et al. point out that in the early days of a robot insurance system it will be necessary to provide information about the possible risks so that the insurance market will be able to determine the price for those risks, in other words the appropriate insurance premium. Since it will take some time to build usefully large databases of historical robot accident data, it is likely at first that robot insurance schemes will be expensive in order to protect insurers against errors in their risk profiles. Eventually though, sufficient historical accident data will be available to create a stable market.” (Levy, 2012)

Notwithstanding Huttunen et al.’s caveat, robot insurance, including the insurance of robot cars and drones, has the potential to become a significant portion of the insurance market. It will therefore be attractive for insurers with deep pockets to invest in the creation and early years of the robot insurance market, and to suffer whatever teething difficulties exist in that market, in order to stake their claim to becoming significant players. Every policy in that market will be a data point, and as sales of robots flourish, so the “big data” on which machine learning feeds will gradually become available to insurers. Some factors for inclusion in an insurance pricing model will be obvious: the type of robot; its capabilities (to what extent can it move?); whether it is for domestic use only (e.g. a vacuum cleaner) or likely to encounter the population at large (e.g. a driverless car or a drone); and data about the robot’s owner or operator. Insurers will need to keep track of factors such as the number and severity of accidents of certain types that can befall different categories, makes and models of robot, so that premiums for the more accident-prone products will be greater than those for robots with accident rates at or below the average. The use of such data in setting premiums will hopefully lead to manufacturers striving to their utmost to develop the least accident-prone products, so that lower insurance premiums can be added to the purchase prices paid by their customers. And prospective customers will be able to consult accident statistics for the various makes and models they are considering purchasing.
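How such factors might feed into a premium is sketched below. The base rate, the category loadings and the no-claims discount schedule are all invented for the purpose of the illustration; an actual pricing model would derive its loadings from the accident data discussed above.

```python
# Hypothetical premium calculation combining the factors listed above.
# All rates and loadings are invented for illustration.
BASE_RATE = 100.0  # assumed annual base premium, arbitrary currency units

CATEGORY_LOADING = {"domestic": 1.0, "public_space": 2.5, "aerial": 3.0}

def annual_premium(category: str,
                   model_accident_rate: float,  # accidents per 1000 units/year
                   fleet_average_rate: float,
                   claim_free_years: int) -> float:
    loading = CATEGORY_LOADING.get(category, 2.0)
    # Makes and models more accident-prone than average pay proportionally more.
    experience = model_accident_rate / max(fleet_average_rate, 1e-9)
    # A simple no-claims discount, capped at 30%.
    discount = min(0.05 * claim_free_years, 0.30)
    return BASE_RATE * loading * experience * (1 - discount)

# E.g. a drone model somewhat worse than the fleet average, owned by
# someone with four claim-free years:
print(annual_premium("aerial", 6.0, 5.0, 4))  # -> 288.0
```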

7.Summary, conclusions and predictions

In the coming years the rapid growth in the numbers of robots being manufactured and sold will bring with it commensurate increases in the number of accidents and new risks to society. In order to mitigate these risks and accidents there would seem to be only two distinct options: the legal system – specifically tort law – and some form of insurance.

The legal system presents accident victims with inconvenience, often with the problem of funding a legal action, and often with long delays in achieving compensation. Robot accident lawsuits and court cases will require expert witnesses who are able to explain to judges and juries technically complex issues related to computer software, electronic hardware and artificial intelligence, and it is highly questionable as to how much of the expert witness testimony will be understood by the average juror, and even by judges.

The courts will be utterly swamped, not to mention the enormous difficulty and cost involved in determining fault in every case, and especially in those cases where an autonomous robot can learn and make its own decisions. “Very nice work for the lawyers, but not much joy for the courts system.” (Levy, 2012).

Of the various forms of insurance that exist or have been proposed, the two which seem most likely to attract enthusiasm from robot owners/users are a compensation fund and a system of no-fault insurance. Reliance on a compensation fund alone would pose its own risk, since too many accidents might ultimately leave a fund unable to meet all its payout commitments.

My conclusion therefore is that the intelligent no-fault insurance model proposed here, backed by an insurance/compensation fund, would be best suited to society’s increasing adoption of robots. Such a system is now possible, thanks to major developments in the field of machine learning during the past few years. And such a system would be more convenient for accident victims, considerably less expensive for them when seeking compensation, and would deliver much faster payouts than waiting for a legal process to run its natural course.

I predict that:

  • (a) Within a very few years the world will witness the first catastrophic drone accident – a collision with a passenger aircraft – and such disasters will then occur with increasing frequency until governments introduce laws drastically limiting the use of drones.

  • (b) Within a few years driverless cars will become mainstream, and while motoring accident statistics might well reduce as a result, the courts will be unable to deal satisfactorily with driverless car cases because the technical complexities will be beyond the understanding of judges and juries.

  • (c) The same applies to accidents involving other types of robot. Judges and juries will not be equipped to dispense justice, and there will be insufficient suitably qualified expert witnesses to assist at the many court cases arising from tort litigation.

  • (d) No-fault insurance will rapidly become the preferred means of dealing with robot accidents – preferred by accident victims and by the robot supply chain. And having become the preferred means it will become mandatory within a few more years.

  • (e) Machine learning techniques will rapidly become an essential element of actuarial calculation and decision-making, to the benefit of the purchasers of robot no-fault insurance as well as to the victims of robot accidents.

Notes

1 A tort, in common law jurisdictions, is a civil wrong that causes a claimant to suffer loss or harm resulting in legal liability for the person who commits the tortious act.

2 In relation to this concept, see also Asaro’s comment in Section 2, p. 5.

3 “In 1986 Congress established the National Vaccine Injury Compensation Program (NVICP), essentially no-fault insurance against possible injuries from the seven pediatric vaccines children were mandated to receive in order to attend school in the United States. Beginning in the mid-1970s, a steady increase in numbers of vaccine-related lawsuits against manufacturers, as well as in the sizes of awards to plaintiffs, led to withdrawal of many companies from vaccine manufacturing and marketing. The NVICP was an attempt to compensate families of children adversely affected by government-mandated vaccines and to shore up the vaccine industry by eliminating liability risk through imposition of a vaccine excise tax.” (NCBI, 1996)

4 First-party insurance is purchased to cover the named policy holder against damages or losses suffered by the policy holder to his person or property.

5 “Loss of consortium” is a term used in the law of torts that refers to the deprivation of the benefits of a family relationship due to injuries caused by a person who commits a tort.

6 Third-party insurance is a form of liability insurance purchased by an insured (the first party) from an insurer (the second party) for protection against the claims of another (the third party). The first party is responsible for the damages or losses of the third party, regardless of the cause of those damages.

7 “Strict liability” in tort law, is the imposition of liability on a party without a finding of fault (such as negligence or tortious intent). The claimant need only prove that the tort occurred and that the defendant was responsible. The law imputes strict liability to situations it considers to be inherently dangerous. (Wikipedia)

8 A treatise issued by the American Law Institute, which was adopted by the American courts in 1964, summarizes the general principles of (common law) United States tort law. This treatise, known as the “Restatement (Second) of Torts”, discusses (in Section 402A) strict liability for defective products, and imposes strict liability for injuries caused by “any product in a defective condition unreasonably dangerous to the user or consumer.” Hubbard points out that “even though a corrective system like tort needs a definition of wrong, no clear definition or test of ‘defective condition unreasonably dangerous’ was provided. Moreover, the reasons given for adopting strict liability were questionable, …In order to avoid the problems resulting from Section 402A, any proposal to impose no-fault liability for accidents caused by fully autonomous cars needs to provide a test for determining which accident costs will be imposed on sellers. “All” driverless automobile accidents would impose such a high level of actual or potential liability that innovation is likely to be severely hindered, particularly in an environment where many cars are still driven by humans. Like any no-fault insurance scheme, a no-fault insurance-like liability scheme must address coverage issues, including, for example, which accidents are covered. Vague references to “comparative fault” (as a way to address, for example, the specific “circumstances of the driver [passenger in charge]”) do not address this problem. Simply referring to the manufacturer’s ability to spread the cost ignores these tasks as well as the reasons for abandoning cost-spreading as a basis for products liability. Concern for victims is important. However, if a no-fault spreading scheme is desired, it is much better to use a first-party scheme like no-fault automobile insurance, which does not require a test of wrongdoing and is cheaper to administer than a third party liability system.” (Hubbard, 2014) [My emphasis – DL]

9 Also known as self-driving cars and driverless cars, inter alia.

10 Accidents will still be possible when a human pedestrian or cyclist is at fault, though the incidence of such accidents will also fall as driving technology will become better equipped than human drivers to anticipate and react to dangerous pedestrian and cyclist behavior.

11 I.e. based on forecasts rather than historical data.

12 Although I propose that a robot should stop working if its insurance premium is not up to date, there will undoubtedly be hackers who can override such systems, and will do so despite the risk of criminal prosecution if discovered.

13 The Product Liability Directive – A directive adopted by the European Union in 1985, setting out the EU-wide no-fault liability regime for defective products. As a directive, it has been implemented by member states of the EU, and their national courts enforce the directive in line with the relevant domestic laws that implement it.

14 Suspending payments in this way creates a similar type of insurance to pay-as-you-drive (see p. 12).

15 A person who commits a tort.

16 The New Zealand scheme is administered by the Accident Compensation Corporation, in accordance with the Accident Compensation Act of 1972 (Act No. 43/1972). See Palmer (1979).

17 Although tort claims for personal injuries are barred by the scheme, claims for punitive damages are allowed, as are claims for damages to property, though the New Zealand courts normally award punitive damages only for reckless intentional wrongdoing or negligence.

18 See p. 11.

19 Along with data entry keyboarders, library technicians, new accounts clerks, photographic process workers and processing machine operators, tax preparers, cargo and freight agents, watch repairers, mathematical technicians, hand sewers, title examiners, abstractors and searchers, and telemarketers.

20 One problem with gaining public confidence in AI is that many people do not trust some of the results of AI calculations and decision making, with the result that “explainable AI” is currently a fast-developing field. The aim is to enable an AI to explain to its developers and users, and in the fullness of time to Joe Public as well, the logical basis for a particular decision or calculation. The better an observer’s perception of an AI’s “thinking”, understanding the reason behind the AI’s decision, the more likely the observer will be to accept the AI’s decision making.

21 Factor identification comes from the recognition of patterns referred to above.

References

1. Asaro, P. (2011). A body to kick, but still no soul to damn. In P. Lin, K. Abney and G.A. Bekey (Eds.), Robot Ethics: The Ethical and Social Implications of Robots (pp. 169–186). Cambridge, MA: MIT Press.

2. Asaro, P. (2016). The liability problem for autonomous artificial agents. In 2016 AAAI Spring Symposium Series. Accessed at: https://www.aaai.org/ocs/index.php/SSS/SSS16/paper/viewPDFInterstitial/12699/11949.

3. Baumann, M.F., Brändle, C., Coenen, C. & Zimmer-Merkle, S. (2019). Taking responsibility: A responsible research and innovation (RRI) perspective on insurance issues of semi-autonomous driving. Transportation Research Part A, 124, 557–572.

4. Bertolini, A. (2016). Insurance and risk management for robotic devices: Identifying the problems. Global Jurist, 16(3), 291–314.

5. Beyer, D.K., Dulo, D.A., Townsley, G.A. & Wu, S.S. (2014). Risk, product liability trends, triggers, and insurance in commercial aerial robots. In WE ROBOT Conference on Legal & Policy Issues Relating to Robotics, University of Miami School of Law.

6. Borges, G. (2018). New liability concepts: The potential of insurance and compensation funds. University of Saarland. Accessed at: https://www.rechtsinformatik.saarland/images/pdf/vortraege/2018-04-12-Borges_New-Liability-Concepts_online.pdf.

7. Calo, R. (2011). Open robotics. Maryland Law Review, 70, 571–613.

8. Cauffman, C. (2018). Robo-liability: The European Union in search of the best way to deal with liability for damage caused by artificial intelligence. Maastricht Journal of European and Comparative Law, 25(5), 527–532. doi:10.1177/1023263X18812333.

9. Eastman, A.D. (2016). Is no-fault auto insurance the answer to liability concerns of autonomous vehicles? American Journal of Business and Management, 5(3), 85–90.

10. Engelhard, E. & de Bruin, R. (2017). EU common approach on the liability rules and insurance relating to connected and autonomous vehicles. Annex 1 (final report). (Note: The Engelhard & de Bruin report is Annex 1 of an EU report with an almost identical title: “A common EU approach to liability rules and insurance for connected and autonomous vehicles.”)

11. EU (2017). European Parliament resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics, paragraph 57.

12. Evas, T. (2018). A common EU approach to liability rules and insurance for connected and autonomous vehicles. European Parliamentary Research Service.

13. FAA (2019). Where can I fly? Accessed at: https://www.faa.gov/uas/recreational_fliers/where_can_i_fly/.

14. Frey, C.B. & Osborne, M. (2013). The future of employment: How susceptible are jobs to computerization? Oxford Martin Programme on Technology and Employment, Oxford.

15. Funkhouser, K. (2013). Paving the road ahead: Autonomous vehicles, products liability, and the need for a new approach. Utah Law Review, 437, 459–462.

16. Hubbard, F.P. (2014). Sophisticated robots: Balancing liability, regulation and innovation. Florida Law Review, 66(5), 1803–1872.

17. Huttunen, A., Kulovesi, J., Brace, W., Lechner, L.G., Silvennoinen, K. & Kantola, V. (2010). Liberating intelligent machines with financial instruments. Nordic Journal of Commercial Law, 2.

18. IFR (2019). World robotics report 2019. International Federation of Robotics.

19. Jenkins, D. (2013). Insurability of UAVs: The “gorilla in the room”. Rotor News, Helicopter Association International.

20. Keeton, R. & O’Connell, J. (1964). Basic protection – A proposal for improved automobile claims systems. Harvard Law Journal, 78(2), 329–384. doi:10.2307/1339098.

21. Kirchner, T. (2018). Predicting your casualties – How machine learning is revolutionizing insurance pricing at AXA. Accessed at: https://digital.hbs.edu/platform-rctom/submission/predicting-your-casualties-how-machine-learning-is-revolutionizing-insurance-pricing-at-axa/.

22. Levy, D. (2012). When robots do wrong. In Conference on Advances in Computing and Entertainment (ACE), Kathmandu, 2012. Also in H. Samani (Ed.), Cognitive Robotics. Boca Raton, FL: CRC Press, 2015.

23. Marryott, F.J. (1966). The tort system and automobile claims: Evaluating the Keeton–O’Connell proposal. American Bar Association Journal, 52(7), 639–643.

24. NCBI (1996). P. Harrison & A. Rosenfield (Eds.), Contraceptive Research and Development: Looking to the Future. Washington, DC: National Academy Press.

25. O’Connell, J.R., Kinzler, P. & Miller, D. (2011). No-fault insurance at 40: Dusting off an old idea to help consumers save money in an age of austerity. National Association of Mutual Insurance Companies.

26. Palmer, G. (1979). Compensation for Incapacity: A Study of Law and Social Change in New Zealand and Australia. Wellington: Oxford University Press.

27. Parodi, P. (2016a). Towards machine pricing. The Actuary, Predictions Supplement, Technology, Autumn.

28. Parodi, P. (2016b). Basic concepts and techniques of the pricing process – Fact sheet. Chartered Insurance Institute. Accessed at: https://www.cii.co.uk/fact-files/actuarial/basic-concepts-and-techniques-of-the-pricing-process/.

29. Rachum-Twang, O. (2020). Whose robot is it anyway? Liability for artificial intelligence based products. University of Illinois Law Review (forthcoming).

30. Reed, C., Kennedy, E. & Nogueira Silva, S. (2016). Responsibility, autonomy and accountability: Legal liability for machine learning. Paper presented at the Microsoft Cloud Computing Research Centre 3rd Annual Symposium, September 2016, Queen Mary University, London.

31. Schroll, C. (2015). Splitting the bill: Creating a national car insurance fund to pay for accidents in autonomous vehicles. Northwestern University Law Review, 109(3), 803–833.

32. Sennaar, K. (2017). How America’s top 4 insurance companies are using machine learning. Accessed at: https://emerj.com/ai-sector-overviews/machine-learning-at-insurance-companies/.

33. Silver, D., Hubert, T., Schrittwieser, J., Antonoglou, I., Lai, M., Guez, A., Lanctot, M., Sifre, L., Kumaran, D., Graepel, T., Lillicrap, T., Simonyan, K. & Hassabis, D. (2017). Mastering chess and shogi by self-play with a general reinforcement learning algorithm. Accessed at: https://arxiv.org/abs/1712.01815.

34. Thierer, A. & Hagemann, R. (2015). Removing roadblocks to intelligent vehicles and driverless cars. Wake Forest Journal of Law & Policy, 5, 339–391.

35. van Rossum, C. (2018). Liability of robots: Legal responsibility in cases of errors or malfunctioning. Faculty of Law. Accessed at: https://lib.ugent.be/fulltxt/RUG01/002/479/449/RUG01-002479449_2018_0001_AC.pdf.

36. Wikipedia (2019). Regulation of unmanned aerial vehicles. Notes 5–46. https://en.wikipedia.org/wiki/Regulation_of_unmanned_aerial_vehicles.

37. IDC (2018). Worldwide semiannual robotics and drones spending guide. International Data Consortium, Framingham, MA.

38. Yeomans, G. (2014). Autonomous vehicles: Handing over control – Opportunities and risks for insurance. Lloyds Exposure Management, London. Accessed at: https://www.lloyds.com/news-and-risk-insight/news/market-news/industry-news-2014/autonomous-vehicles-handing-over-control.

39. Yoshikawa, J. (2019). Sharing the costs of artificial intelligence: Universal no-fault social insurance for personal injuries. Vanderbilt Journal of Entertainment and Technology Law, 21(4), 1155–1187.