
BusinessDay: When robots cause harm, the case law is lacking

Business leaders should be aware of the potential legal risks when considering AI technology.

By Johan Steyn, 30 November 2022


A person is injured or killed by a self-driving vehicle. A building is damaged when an autonomous drone crashes into it. A software platform wrongly diagnoses and treats medical conditions.

A computer powered by artificial intelligence (AI) that reviews mortgage applications may produce biased decisions if it weighs factors such as an applicant's demographic profile. A robotic surgery system augmented with AI could make a decision during an operation that endangers the patient.


The expanding use of AI platforms across all industries, from production, manufacturing, transport and agriculture to modelling and forecasting, education and cybersecurity, raises an essential question: who will be held accountable for any harm that results from an AI platform's actions? AI is not risk-free; there will be instances in which these systems make errors.


This is a crucial conversation to have and it raises many intriguing questions. Why does an AI system sometimes behave erratically? Are the system's creators or administrators responsible for its mistakes? In the case of the drone, is it the manufacturer of the aircraft, the operators, or those who created the underlying algorithms? Do intelligent machines require legal representation?


I was recently training members of the legal team at a large local bank. They expressed concern: the bank is increasingly implementing AI systems, and they need to understand the reach of the law should things go wrong. What happens when a chatbot provides inaccurate financial advice? Will biases in the data sets lead to discrimination against some credit applicants? Who is to be held liable: the bank, its employees or third-party vendors?


I think legal teams in all industries are beginning to grapple with these issues. Autonomous systems are bound to make errors, and in some cases the damaging effects can be far-reaching. The sad truth is that there is little to go on, as the case law is sparse. In SA's case it seems the case law does not exist at all, a view with which the bank's team concurred, because regulation of this technology is lacking.


It is possible that no-one will be held liable for damage produced by an AI system operating in a manner that was wholly unexpected. In the absence of legislation dealing specifically with AI, people whose lives have been adversely affected by its errors may have to rely on a negligence suit.


Under new international standards, the user of an AI system is less likely than the system's developer to be blamed. There may be further disputes over the source of the AI system's knowledge (the programmer, the designer or the subject-matter expert) as well as over the degree of damage caused.


To insulate themselves from possible legal action, organisations that sell AI software and implementation services are likely to include contract clauses that exclude liability for malfunction. Since the legality of these clauses has not been tested, the courts will have to determine what constitutes a reasonable exclusion clause. With no precedent to draw on, it is difficult to predict how a court would strike this balance, which poses a significant risk for suppliers seeking to rely on such clauses.


Business leaders should be aware of the potential legal risks when considering AI technology. Our government should move swiftly to establish regulatory frameworks that both encourage innovation and limit harm to citizens.


• Steyn is on the faculty at Woxsen University, a research fellow at Stellenbosch University and founder of AIforBusiness.net

