
BusinessDay: Vital now for firms to practise responsible AI

Employee wellbeing and the treatment of customers can enter a destructive downward spiral if AI is not controlled

By Johan Steyn, 8 March 2022


In 1942 Isaac Asimov, considered one of the best science fiction writers of his time and later a professor at Boston University, introduced the famed Three Laws of Robotics in his short story, “Runaround”.


The laws were: “1) A robot may not injure a human being or, through inaction, allow a human being to come to harm. 2) A robot must obey the orders given by human beings except where such orders would conflict with the First Law. 3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.”


Asimov could not have imagined the time we live in, an age in which algorithms control the world. He was writing about androids, the physical robots we see in production plants or in movies.


The family of smart technologies that form part of the artificial intelligence (AI) era, incorporating computers that can see, sense, think, learn and predict, is already a substantial part of our everyday lives.


It is also an increasingly important part of business strategy. Businesses of all types, sizes and industries are exploring how to benefit from these new technologies. Many firms across the world are already very far down the path of successful implementation.


AI technologies are incredibly powerful, and much of the attention these days is on how commercial organisations intend to use them. The responsible use of AI has become a major topic in business strategy and literature. Concerns around unequal treatment, labour replacement and a lack of privacy and security, issues specific to AI, are legitimate. Current policies and legislation are insufficient to address many of these challenges.


Responsible AI is the practice of designing, developing and deploying modern technology to empower employees and organisations and to have a positive influence on customers and society.


Concerns about bias, discrimination, justice and explainability are relevant. And, while these problem areas have some formal definitions, putting them into practice requires difficult choices under application-specific constraints. As AI judgments influence people’s lives on an ever larger scale, so grows the enterprise’s responsibility to manage the potential ethical and socio-technical consequences of AI adoption.


The more decisions a firm delegates to AI, the more severe the dangers, including reputational, data privacy, and health and safety concerns. Employee wellbeing and the treatment of customers can enter a destructive downward spiral if AI is not controlled.


The pillars of a strong responsible AI strategy include addressing issues of bias and fairness. Organisations can design AI systems in such a way that undesirable bias is mitigated and judgments are made fairly.


It is critical to develop a methodology that makes AI-driven judgments interpretable and clearly explainable, both to those who operate the systems and to those affected by them. The primary objective is to assist organisations in developing AI that complies with applicable regulations and, beyond that, to ensure the ethical use of the technology.


To some extent, Isaac Asimov wrote in an age of technological innocence. Modern algorithms are often delinquent felons that obey no laws, cause harm and disobey human orders, with the potential to choose their own survival at the cost of human wellbeing. The stakes could not be higher.

• Steyn is chair of the special interest group on artificial intelligence and robotics with the Institute of Information Technology Professionals SA.

