By Johan Steyn, 1 June 2021
Published by Business Day
Modern humans first walked the earth about 300,000 years ago. If we compress Earth’s 4.5-billion-year history into a single 24-hour day, humankind appears only in the final six seconds. We are certainly late to the party, but we have managed to make a huge impact on our planet. Unfortunately, our influence is overwhelmingly negative.
Since the dawn of the first industrial revolution in the late 18th century, humans have been extracting and burning fossil fuels on a massive scale. We now stand at the edge of a climate-change cliff: extreme weather events, rising sea levels, retreating glaciers, ocean acidification, disrupted ecosystems and extinctions.
The future of our planet is not the only existential predicament facing our species. The human race is also facing the largest technological change in its history. In the middle of the previous century, researchers began working on technology that could be smarter than humans. The idea itself was not new: for centuries, myths have told of human-made beings that could surpass us in intelligence and consciousness.
At a computer science conference in 1956, a researcher named John McCarthy coined the term “artificial intelligence”. A few years earlier, in 1950, the famous codebreaker Alan Turing had written a paper on the notion that computers could imitate humans in doing intelligent things. The Turing Test, named after him, determines whether a computer is capable of behaving like a thinking human being. Simply put, the test asks whether a human interacting with a computer can be convinced that they are dealing with another human.
Today those efforts look like the elementary first steps in the development of artificial intelligence (AI). With the rise of computing power, digital storage, cloud computing and other advances, AI has quickly infiltrated our everyday lives. When I speak on the topic, I often ask my audience: “How many of you have used AI today?” Usually only a few hands go up. I think the reason is that many still see AI as “something strange, out of reach”; I then explain that almost every application on our smartphones is infused with AI technology.
Think of an application such as Apple’s Siri or Amazon’s Alexa, which use natural language processing to understand what we say. Or think of Google Maps and Waze, which can calculate the route to your destination in seconds by drawing on an astonishing amount of geospatial data, including input from thousands of other users.
Our lives these days are filled with “smart things”. This is commonly referred to as the Internet of Things: devices such as your smart television, fridge, lights and security system can talk to one another and make decisions autonomously.
AI is everywhere we look. The technology promises great advances in health care, education and service delivery; we could realistically create a better life for our children. However, any technological advance can also be put to sinister use. AI has already been weaponised (think of smart drones); it is used for industrial espionage and hacking; and facial recognition can watch us everywhere, like George Orwell’s Big Brother. Perhaps the most common and immediate fear is that smart technology will steal our jobs. Smart automation is already causing large-scale job displacement.
We are standing at the edge of another cliff, a technological one. This technology should be taken seriously, and we should debate and agree on how to use it for good. We are all fools if we think that time is on our side.
• Johan Steyn is chairperson of the Special Interest Group on Artificial Intelligence and Robotics with the Institute of Information Technology Professionals of SA. He writes in his personal capacity.