Shadow AI poses several risks to firms beyond data leakage and governance violations.
By Johan Steyn, 26 July 2023
Published by BusinessDay: https://www.businesslive.co.za/bd/opinion/columnists/2023-07-26-johan-steyn-beware-of-free-ai-tools-lurking-in-the-shadows/
“Should we allow our staff to use ChatGPT?” This question has lately been the most prominent theme in discussions with my clients. A new wave of freely available generative artificial intelligence (AI) tools is flooding the market, including chat and search programs, artwork generators, writing tools, visual editors, audio tools and speech-to-text platforms.
Unlike traditional AI models, which focus on prediction and classification, generative AI is designed to produce entirely new and original content. These models employ probabilistic approaches that enable them to generate fresh instances mirroring the characteristics of their training data, often exhibiting apparently creative and inventive behaviour beyond their explicit design.
Organisations face new challenges and opportunities in adopting AI solutions. One such challenge is the emergence of “Shadow AI”: the unauthorised use of AI services within a company without proper governance and oversight. While Shadow IT has been a long-standing concern for organisations, Shadow AI is spreading far faster because of how rapidly these tools are being adopted.
As AI becomes more accessible and democratised, departments such as marketing, supply chain and HR may independently deploy AI solutions without proper central control. This decentralisation can lead to the proliferation of Shadow AI models within an organisation.
Shadow AI poses several risks to businesses beyond data leakage and violations of data governance. By circumventing established governance mechanisms, Shadow AI reduces transparency and accountability, making it difficult for organisations to maintain control and supervision over AI systems.
Apple and Samsung have taken steps to prohibit their employees from using AI-powered services such as ChatGPT and GitHub’s Copilot. Amazon and JPMorgan Chase have also imposed restrictions on the use of ChatGPT due to concerns about potential regulatory issues and the sharing of confidential data.
Other prominent banks such as Bank of America, Citigroup, Deutsche Bank, Wells Fargo and Goldman Sachs have joined in, implementing bans on the use of AI chatbots by their staff. These actions demonstrate a growing trend among companies and financial institutions to exercise caution in handling sensitive information through AI-powered tools and platforms.
While outright banning unauthorised AI services might be tempting, it’s essential to address the root problems that lead to the adoption of Shadow AI. An organisation’s approach should focus on education, transparency, providing authorised alternatives, and bolstering governance and compliance measures. Blanket bans can be counterproductive; what is needed is a well-considered, strategic response.
Business leaders should regularly review the AI systems their staff are accessing and consider the reasons behind this. People may simply be playing around with these tools, but they may also have found tools that enable them to perform their tasks better. The real question is why the current technology platforms are inadequate, and leaders should meet new ideas with interest and care.
Effective management of Shadow AI requires a holistic approach encompassing a comprehensive governance strategy and well-defined policies. Many organisations are shifting towards a centralised platform for AI deployment, rather than relying on siloed solutions within departments.
A centralised approach allows for better monitoring, control, scalability and deployment of AI solutions, reducing the prevalence of Shadow AI. Access control, monitoring, deployment management and security measures are essential components to mitigate the associated risks.