In this Age of Ultron: AI, Civil Liability, and the Regulatory Framework in Nigeria

Mary Chantal Ifeatu Agbu


Based on the popular Marvel comic books, the movie Avengers: Age of Ultron has as its villain Ultron, an Artificial Intelligence (AI) peacekeeping program. The character undoubtedly plays on one of the greatest fears of mankind in the 21st century: technology. In the movie, Ultron, an intelligent sentient robot created to protect the world, becomes hostile and turns against the human race, believing that world peace can only be achieved by exterminating humanity. It forms this view after assessing the sum total of human history through the internet and through its connection to another AI system, Jarvis.

This theme is, without doubt, in keeping with the concerns surrounding artificial intelligence in today’s society. Now more than ever, we are surrounded by AI technology aimed at assisting with and improving day-to-day activities. From Apple’s Siri to Tesla’s self-driving cars, IBM’s Watson and United Bank for Africa (UBA)’s ‘Leo’, the age of AI as a commonplace feature of human life is already upon us. But is this a phenomenon to be feared? And do we have the capacity at law to regulate the activities and services provided by AI technology?

This article will examine the nature and uses of AI in today’s society as well as the legal issues posed by the rise of AI. It will focus particularly on the issues of civil liability surrounding AI while highlighting relevant case law. Lastly, it will look at the legal framework and efforts made by Nigeria and other countries towards legislating on AI and provide recommendations for same.

The European Parliament’s resolution of 16 February 2017, with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)), recognized that “the development of robotics and AI may have the potential to transform lives and work practices, raise efficiency, savings, and safety levels and provide enhanced level of services”. These benefits of AI have been, and continue to be, applied to a vast number of areas such as production, banking, commerce, transportation, healthcare, education and agriculture. From faster medical diagnosis to contract review, AI has proved to be a transformational tool, outperforming humans in a range of activities.

AI is currently characterized by three technological developments – learning ability, robotics and interconnectedness.

Learning ability in this context refers to the fact that digital systems no longer need to be completely programmed in advance: they can now continually learn and change their behaviour through data supplied by users and other inputs from the outside world. This machine learning ability allows AI systems to learn from experience and solve problems using algorithms and sophisticated analytical techniques, thereby reducing the programmer’s influence on the behaviour of the system and making such systems increasingly independent of human supervisory control. This attribute certainly plays a role in the areas of contract law, liability law and intellectual property law.
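To illustrate why this matters for foreseeability, the following minimal Python sketch (purely hypothetical data and labels, not drawn from any system discussed here) shows how two deployments of identical code can reach opposite decisions depending solely on the data supplied to them after the programmer’s work is done.

```python
# Purely illustrative sketch (hypothetical data and labels): the same
# program produces opposite decisions depending solely on the data it
# has learned from, not on any rule written by the programmer.

def nearest_neighbour_classifier(training_data):
    """Return a classifier whose decisions are determined entirely by the data."""
    def classify(x):
        # Adopt the label of the closest training example.
        closest = min(training_data, key=lambda pair: abs(pair[0] - x))
        return closest[1]
    return classify

# Two deployments of identical code, exposed to different data after sale:
model_a = nearest_neighbour_classifier([(1.0, "proceed"), (9.0, "stop")])
model_b = nearest_neighbour_classifier([(1.0, "stop"), (9.0, "proceed")])

print(model_a(2.0))  # "proceed"
print(model_b(2.0))  # "stop" -- same code, opposite behaviour
```

The point is not technical sophistication but the legal one: the programmer wrote no rule dictating either outcome, which is precisely why fault and foreseeability become difficult to locate.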

Robotics is “the coupling of digital systems with physical sensors and actuators”. In simple terms, this means that AI can be used to program robots, which are then able to perform physical activities. This, however, can create new exposures to legal liability, given that such systems can cause real damage to life and property without any human intervention.

The third aspect, interconnectedness, refers to the growing degree to which digital systems are linked to one another. AI systems with direct hardware control are now directly connected to each other, and this increasing level of connectivity brings with it new challenges concerning both safety and security.

Given the current nature of AI, one particular issue that arises is that of civil liability. No doubt, the characteristics outlined above, whether by themselves or in combination, give rise to novel risks to which liability may attach. Take, for example, the case of Umeda v. Tesla Inc. Umeda was killed in 2018 after being hit by a Tesla vehicle that “suddenly accelerated” while he was standing on the side of an expressway. Court documents allege that the Tesla, which had its Autopilot feature engaged, accelerated after the car in front of it switched lanes. The driver was said to have been sleeping shortly before the accident, yet the vehicle issued him no alerts because his hands remained on the wheel.

In such a case, serious issues of liability become apparent, thereby raising questions on the allocation of risks and liability. A logical starting point would then be to examine the nature of civil liability, more specifically, the nature of product liability.

Under the law of tort, product liability is used to determine liability for injuries or property damage resulting from products manufactured for sale. Traditionally, product liability claims are based on either negligence, strict liability, or breach of warranty of fitness.

Nonetheless, the determining factor for establishing product liability is generally that of ‘fault’, making negligence the default principle in product liability. Under the traditional negligence approach, an act is negligent if it creates an “unreasonable risk of harm to people and property”. Along with causation and damage, negligence requires a duty of care and a breach of that duty. However, with rapidly evolving AI technologies and the ever-increasing ability of such technology to act independently of its manufacturers and programmers, the question arises whether liability in negligence can attach to product liability claims involving AI. This is because the development potential, and the attendant risks, of AI are largely unknown and, as Zech puts it, ‘there is no duty to avoid the unknowable’. This is especially significant when one considers that foreseeability is a key ingredient in establishing liability in negligence.

In some cases, the concept of ‘fault’ has been replaced by that of strict liability, which applies even where fault cannot be positively established. This is done in order to incentivize companies to increase their investment in product safety. Regardless of whether existing duties have been satisfied, strict liability assigns the economic risk to the injurer, thus prompting risk controllers to consider, before embarking on a project, whether the benefit of a risky activity outweighs its risk.

Strict product liability is predicated on the existence of “an unreasonably dangerous product whose foreseeable use has caused injury”. To establish a case of strict product liability, the claimant must prove that “the product sold was defective and unreasonably dangerous at the time it left the defendant’s hands; the product reached the plaintiff without substantial change; and the defect was the proximate cause of plaintiff’s injuries”.

One of the most compelling arguments against applying strict liability to cases involving AI technologies is that it could deter and inhibit innovation. An even more significant argument, however, is that, owing to their learning ability, AI systems could stray so far from their original programming as to make their actions unforeseeable; this makes strict liability similarly ill-suited for handling cases involving AI.

Now that questions surrounding the nature of liability arising from the operation of AI systems have been discussed, the next question is: who should be held liable?

In answering this question, let us consider the case of Holbrook v. Prodomax Automation Ltd.

Wanda Holbrook was employed as a maintenance technician at Ventra Ionia LLC. In July 2015, Holbrook was killed after a company robot unexpectedly entered the area where she was working and crushed her head. Her husband subsequently filed a suit against several companies associated with the design, manufacture, installation and servicing of the robot. Though the case was ultimately resolved on summary judgment, one of the arguments raised by the defendants was that the negligence lay with Holbrook and not with their product. Although this may appear somewhat far-fetched, there might be some merit to the argument.

Notwithstanding that product liability assesses defects as at the time of sale, it is pertinent to ask whether liability should attach to the manufacturer when a product continues to learn from its user and to change long after sale. This question makes it clear that, in ascribing liability for AI systems, it is important first to identify all the parties involved in the operation of the AI system. For the purposes of this article, the parties identified will be the manufacturer, the programmer, the owner/user and the AI system itself.

For manufacturers, as previously discussed, liability is incurred “where a damage is caused wholly or partly by defective goods or the supply of a service”. The success of a product liability claim involving AI against a manufacturer would thus hinge on the product’s condition at the time of sale, and may preclude claims where the product was designed to specification but subsequently changed.

Similar to the manufacturer, a programmer may be liable for damage caused by AI technology “when the damage is related to software failures, faults or errors”. Where the programmer is employed by the manufacturer, the manufacturer vicariously assumes liability for the damage. Where the programmer is external, the manufacturer still assumes liability; however, the manufacturer is entitled to bring an action against the programmer for compensation.

The liability of the owner/user of AI technology arises from the learning ability of such systems. Machine learning creates a level of personalization that displaces the liability of the manufacturer and creates an assumption of risk by the user. In this regard, it has been proposed that, status-wise, AI technology can be viewed in relation to its owner as an agent (in the sense that the AI acts with the apparent authority of its owner); as an employee (thus attaching vicarious liability to its owner); or as a pet (i.e., just as the keeper of a wild animal is strictly liable for any damage the animal causes).

However, there is also the view that the lack of restraints on the learning ability of AI can in itself be considered a design deficiency; thus, reverting liability to the manufacturer or programmer.

The last party to consider is the AI system itself. Though at first glance, it may seem absurd to attach liability to a non-human (in essence, a machine), this suggestion cannot be outrightly discarded.

As stated above, the actions of AI systems may go beyond the control of their programmers, manufacturers, or owners. Some experts have thus suggested the creation of a limited form of personhood for such technology, to which liability can be attached. Taking a page from the principle of the reasonable man on the Clapham omnibus, such liability can be characterized by a recognition of the “reasonable robot or of the autonomous Clapham omnibus”. Critics, however, argue that an unreasonable/negligent robot is simply a defective one, in which case liability comes full circle, back to the manufacturer.

In terms of legislation in other jurisdictions, the European Union has taken laudable steps towards creating a civil liability regime for artificial intelligence, particularly through the “European Parliament Resolution of 20 October 2020 with recommendations to the Commission on a civil liability regime for artificial intelligence (2020/2014(INL))”. The Resolution was adopted, inter alia, in recognition of the significant legal challenges posed by AI systems, which can make it almost impossible to identify who was in control of the risk and may, in turn, result in inadequate compensation for victims.

The Resolution makes several recommendations, some of which include:

  • Paragraph 6, which suggests that, rather than completely revising existing liability regimes, same can be modified through updates;
  • Paragraph 8, which suggests that the current EU framework for product liability should continue to apply to civil liability claims against the producer of a defective AI-system;
  • Paragraph 14, which suggests that a common strict liability regime be established for high-risk autonomous AI-systems;
  • Paragraph 20, which suggests that “all activities, devices or processes driven by AI-systems that cause harm or damage but are not listed in the Annex to the proposed Regulation should remain subject to fault-based liability” (i.e., negligence); and
  • Paragraph 23, which suggests that proper liability coverage is essential for assuring public trust in new technology despite potential risks.

On the domestic front, though not as specific or extensive as that of the European Union, some applicable legislation exists or has been otherwise proposed towards creating a framework for regulation of AI systems. For instance, the Federal Competition and Consumer Protection Act (“FCCPA”), 2018 establishes a framework for product liability and gives persons affected by defective goods or services the right to sue.

Also, the Nigeria Data Protection Regulation (“NDPR”) 2019 gives data subjects certain rights in relation to the processing of their information by controllers. This includes provisions to the effect that the personal data of a data subject shall be collected and processed with the freely given, specific, informed and unambiguous consent of the data subject. However, notwithstanding that AI systems are involved in processing data and transferring it to other systems without human intervention (by nature of their interconnectedness), it is doubtful whether they fall within the scope of the NDPR, as “Data Controller” is specifically defined to mean “a person”.

Furthermore, the Evidence Act 2011 specifically recognizes that documents can be produced by a computer with or without human intervention, and such evidence is admissible as computer-generated evidence (provided certain conditions are satisfied).

With respect to investments, the Securities and Exchange Commission of Nigeria (“SEC”) recently released an exposure draft of its Proposed Rules on Robo Advisory Services. Under the proposed Rules, Robo Advisers would be required to register with the SEC and comply with all the applicable business conduct requirements set out in the Investments and Securities Act (“ISA”) 2007. Robo Advisers would also be mandated to comply with the Anti-Money Laundering and Combatting the Financing of Terrorism Act of 2013. This is no doubt a step in the right direction and, if approved, the Rules may constitute one of the first pieces of AI-specific regulation in Nigeria. Rules such as these will go a long way towards ensuring investor protection in the face of the rapidly evolving ‘tech age’.

There is still, however, a need for a broader regulatory framework for AI, particularly in the area of civil liability. It is recommended that this be achieved by leveraging existing systems of product liability and adapting them to fit current needs. Strict liability can, for example, be combined with compulsory third-party insurance to distribute risk evenly among liable parties. It is, however, apparent that a broad strict liability approach might not always be suitable, given the legal challenges of AI. Thus, a sector-by-sector approach can be explored, one which considers all the attendant risks associated with the use of AI in each sector.

Most importantly, it is recommended that any policy decisions taken in respect of civil liability arising from the operation of AI should be predicated on in-depth research by experts in the field, who would have a better understanding of the risks and consequences of AI systems.

In conclusion, AI has proven to be a transformational tool with the potential to improve our daily lives, the conduct of human affairs and, indeed, society at large. The technological developments of machine learning, robotics and interconnectedness, which characterize the current nature of AI, however, pose some pertinent legal challenges, particularly in the field of civil liability. Not only is it difficult to place AI within the ambit of the current product liability framework, but it is also difficult to ascertain who bears liability in AI-related civil liability claims. Nonetheless, efforts have been made by other jurisdictions to provide some level of civil liability regime for AI.

Domestically, some existing Nigerian legislation may be interpreted to cover AI systems, and the SEC has made a valiant effort at regulating the field in the area of capital market operations through its Proposed Rules on Robo Advisory Services. However, there remains a need, and ample room, for legislation on AI as a whole in Nigeria and, more specifically, on the issue of civil liability.

Mary Agbu is an associate at Lexavier Partners.
