How to Make 'Shadow AI' a Good Thing

As if "shadow IT" wasn't bad enough, we now have "shadow AI" to contend with -- employees adopting advanced AI for their own use, outside of any corporate oversight or guard rails or formalized usage plans.

Organizations around the world are now struggling with how to leverage the astounding recent advances in AI, often at the direction of CEOs who equate the new age of AI with milestone advances like PCs, mobile phones and the internet (personally, I'm beginning to think it might more equate with the wheel, or fire). Quick test: how many readers' orgs suddenly have new "VP of AI" positions? Yeah.

However, setting up the infrastructure to create and run corporate AI initiatives can be time-consuming and, like other corporate initiatives, can be hobbled by bureaucratic machinations.

In the meantime, you know employees are out there doing their own AI things to help with their jobs. I certainly am, and you probably are, too. The thing is, that could lead to security, legal, compliance and other problems. So companies are trying to organize and rein in AI usage so a rogue employee like me doesn't somehow burn everything down. Some concrete steps to take in that direction are listed below.

Surprisingly, shadow AI isn't some brand-new term coined after last year's debut of ChatGPT and other advanced generative AI systems based on large language models (LLMs).

Indeed, way back in June 2020, BMC Software published a blog post titled "What Is Shadow AI?" noting that research firm Gartner was discussing the term as early as 2019: "In its report, Gartner predicts that by 2022, 30 percent of organizations that use AI for decision-making will have to address shadow AI as the biggest risk to effective decisions."

So add "risk to effective decisions" to the list of bad things shadow AI could cause.

BMC Software said effective strategies and policies can help organizations address shadow AI, listing four areas:

  • Access Control. This should include organizational management over who has access to AI solutions, including models and data. An important aspect of access control is ensuring that access can be revoked whenever necessary, for example, when an employee leaves the organization.
  • Monitoring. Your organization should be able to track the usage of all AI solutions. This helps to determine their effectiveness and to ensure they're being used appropriately (a bare-bones sketch of this kind of access check and usage logging follows this list).
  • Deployment. It's necessary to be able to manage the distribution of AI solutions as well as any new versions. A good deployment strategy also allows for tracing AI solutions back to the development process.
  • Security. Security is obviously a central issue when dealing with AI management. It's important that your organization is able to enforce security controls in order to avoid attacks or data breaches.
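
To make the first two items a bit more concrete, here is a minimal Python sketch -- mine, not BMC Software's -- of an access check plus a usage log for calls to an AI tool. The policy table, tool names and log file are all hypothetical and purely illustrative; a real gateway would front whatever AI services your organization actually sanctions.

    import json
    import logging
    from datetime import datetime, timezone

    # Usage records go to a local file here; a real deployment would ship them
    # to whatever central logging the organization already uses.
    logging.basicConfig(filename="ai_usage.log", level=logging.INFO)

    # Hypothetical access policy: which employees may use which AI tools.
    POLICY = {
        "alice@example.com": {"chatgpt", "internal-summarizer"},
        "bob@example.com": {"internal-summarizer"},
    }

    def authorize_and_log(user: str, tool: str, prompt_chars: int) -> bool:
        """Return True if the user may use the tool; record the attempt either way."""
        allowed = tool in POLICY.get(user, set())
        logging.info(json.dumps({
            "ts": datetime.now(timezone.utc).isoformat(),
            "user": user,
            "tool": tool,
            "prompt_chars": prompt_chars,  # log the size only, never the prompt itself
            "allowed": allowed,
        }))
        return allowed

    if __name__ == "__main__":
        print(authorize_and_log("alice@example.com", "chatgpt", 512))   # True
        print(authorize_and_log("bob@example.com", "chatgpt", 128))     # False

Revoking access when someone leaves is then just a matter of removing their entry from the policy, and the log gives managers the usage visibility BMC describes.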

While prescient, BMC Software isn't alone: Many other people, companies and sites have published guidance to help enterprises get a handle on shadow AI. Drawing on various such online sources, here is a list of steps organizations can take to ensure that shadow AI benefits an organization rather than causes harm:

  • Educate and Communicate: Organizations should educate employees about the total cost of AI technology deployments, including the technical debt they cause. They should focus on educating users and managers about opportunities with AI and the challenges that AI technologies pose. Various sources (like this one from The Sondergaard Group) indicate that simply banning "shadow" initiatives of any kind never works, especially for technologies where the price of implementation and use is low, and the value for the individual is high. In those cases, education and communication are more likely to result in simultaneous benefits to individuals and organizations.
  • Develop an Organization-Wide Approach: Given that AI is increasingly permeating all aspects of an organization, it is essential to cultivate a comprehensive governance strategy coupled with well-defined policies. Addressing shadow AI requires a concerted effort from all departments, from marketing and research to legal and HR.
  • Update Corporate Competency and Skills Development: As AI becomes a more integral tool within an organization, it's important to ensure that managers and employees understand how to use it effectively and ethically. HR departments should lead the way in updating corporate competency frameworks to include AI understanding and application.
  • Expand Risk Management to Include AI: Corporate risk departments should assess the risk associated with AI models and algorithms that are deployed without the knowledge of the central IT organization. This can help manage the potential of bias in data, inappropriate use of models and increased risk across the organization.
  • Develop Leadership Skills in AI Management: Managers need to understand AI in greater detail as individual employees use AI to support or replace tasks. They should be able to determine which AI tools and results are legitimate, understand who uses them, and evaluate whether the output or outcome is valid.
  • Manage Algorithms: Organizations should be prepared to manage a multitude of algorithms being used across different environments. Given that many AI environments are "black boxes," understanding both the input and output of these solutions is crucial. In addition, with increasing legislation around AI, a central repository or model ops environment becomes essential (a toy sketch of such an inventory follows this list).
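
For that last item, the "central repository" can start out very simply. The toy Python sketch below -- my illustration, not any specific model ops product -- keeps a flat inventory of the AI tools in use: who owns each one, what model or vendor sits behind it, what data it touches and whether it has been reviewed. All field names and the CSV path are hypothetical.

    import csv
    from dataclasses import dataclass, asdict, fields

    @dataclass
    class AIToolRecord:
        name: str             # e.g. "contract-summarizer"
        owner: str            # the accountable team or person
        vendor_or_model: str  # a commercial LLM or an in-house model version
        data_touched: str     # what data the tool sees, for risk review
        approved: bool        # whether central IT/risk has signed off

    def save_inventory(records: list[AIToolRecord], path: str = "ai_inventory.csv") -> None:
        """Write the inventory to a CSV file that any department can review."""
        with open(path, "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=[fld.name for fld in fields(AIToolRecord)])
            writer.writeheader()
            writer.writerows(asdict(r) for r in records)

    if __name__ == "__main__":
        save_inventory([
            AIToolRecord("contract-summarizer", "legal-ops", "hosted LLM (vendor X)",
                         "customer contracts", approved=False),
        ])

Even a spreadsheet-level inventory like this gives risk, legal and IT teams one place to look when new AI legislation or an audit comes along.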

Doing all that should result in more of those positive aspects of shadow AI, such as:

  • Increased productivity and efficiency: Allowing employees and teams to come up with AI solutions tailored to their tasks can lead to more efficient and productive outcomes.
  • Boosted innovation: With the freedom to explore and implement AI solutions, workers and teams can generate unique and effective solutions.
  • Enhanced employee morale and engagement: Being able to actively contribute to the solutions being used can lead to a more motivated and engaged workforce.
  • Technology democratization: Making technology more accessible without the need for extensive training, expertise or expense can result in more people having access to AI tools to help create their own solutions, fostering inclusivity and diversity in problem-solving.

So happy shadow AIing -- just another example of how advanced AI systems can be used for good or bad.

About the Author

David Ramel is an editor and writer for Converge360.
