Uncontrolled AI: Navigating Ethical Dilemmas and Shadow AI Risks

Generative AI has captured the world’s imagination, evident in its remarkable adoption rate and popularity worldwide. 

According to a Deloitte survey, one in four UK citizens have dabbled in Generative AI. The research also found that nearly a third of these adopters did so for work purposes.

But here’s the statistic that should fuel discussion among policymakers and risk managers: Despite the widespread usage and adoption rate across all respondents, only 23% believe their employer would approve of them using Generative AI for work purposes.

A Gartner report predicts that by 2027, 75% of employees will turn to AI to augment efficiency without IT oversight. 

I hope their algorithm is off. 

Uncontrolled AI: Navigating Ethical Dilemmas and Shadow AI Risks

Just Ban It?

In a classic example of risk avoidance, many companies have restricted ChatGPT and its fellow AI-Gen apps in their work environment. But is anybody listening to the rules?

Recent Dell research found even higher and more worrying statistics of AI usage than the Deloitte report. Dell’s research indicates that 91% of survey respondents used generative AI in some form in the last year, and 71% have used the technology in their workplace. 

What is Shadow Artificial Intelligence?

Shadow AI denotes the unauthorized or ad hoc use of generative AI within an organization outside official IT governance protocols. 

The term “shadow AI” captures the clandestine nature of these AI operations, conducted discreetly and under the radar.

Which Shadow Does This Remind You Of?

Shadow artificial intelligence (AI)  shares similarities with shadow IT, which is the unauthorized use of technology and software within organizations. While shadow IT typically involves employees bypassing official IT rules to access tools and services, shadow AI involves the clandestine adoption of AI tools without proper oversight.

In both cases, employees seek out unsanctioned AI solutions to address specific needs or enhance productivity. This can lead to various risks, including data security breaches, compliance issues, and overall challenges in digital risk management.

Familiarity with the subject is one area where shadow AI is riskier than shadow IT. Most IT professionals and developers understand the risks of shadowing IT risk assessments very well, but the same cannot be said for AI. 

Another disturbing aspect of shadow AI is its tremendous ease of adoption. The user must find a vendor and set up a contract or license with most digital IT services. But AI is so accessible and so hard to track that to an unknowing employee, it’s as harmless as checking the news on your favorite news outlet. 

Start Getting Value With
Centraleyes for Free

See for yourself how the Centraleyes platform exceeds anything an old GRC
system does and eliminates the need for manual processes and spreadsheets
to give you immediate value and run a full risk assessment in less than 30 days

Learn more about Shadow AI Risks

The Difficulty of Governing Hidden Entities

Nick Clegg, a former British deputy prime minister and president of global affairs at Meta, made an interesting analogy regarding AI governance. He compared the difficulty of developing AI governance frameworks to reign in on “uncontrolled AI”  to building a plane already in flight. 

But at least you see the plane. 

Shadow AI, or unsanctioned AI, is inherently more difficult to govern than “open” AI. While most employees mean well when they use these unsanctioned tools to pursue efficiency, it exposes companies to a new breed of cybersecurity and data privacy risks. The risks of shadow box AI may be similar to those we’ve been hearing about in AI and generative AI. 

Still, the risk parameters and ethical dilemmas around it are circumstantially different.

Experts predict that 2024 will be a big year for AI policies and standards. But what that spells out for the murky world of shadow AI is still uncertain.

Risks of AI Shadow IT 

While most AI tools seem harmless to your unassuming employee, they pose significant risks to organizations, ranging from data security breaches to compliance headaches. Let’s delve into the shadows and explore the risks associated with shadow AI.

1. Unintended Exposure of Sensitive Information

Employees may unknowingly feed confidential data into AI tools like ChatGPT or Google Bard, assuming these tools are harmless productivity aids. However, without proper vetting and approval by the organization, there’s no guarantee of where this data will eventually land. 

Unsafe AI tools may use company information to train models or could even fall victim to cyberattacks, leading to data leaks. For example, OpenAI’s chatbot data leak highlights the potential dangers lurking in the shadows of AI usage.

2. Lack of Awareness and Mitigation

Another critical risk stems from the undercover nature of shadow AI usage. Because organizations are typically unaware of these tools being employed, they cannot assess the associated risks or take effective steps to mitigate risks

According to Gartner’s research quoted above, many employees engage in shadow IT practices, and the trend is expected to escalate in the coming years. This lack of visibility into AI tool usage leaves businesses vulnerable to unforeseen consequences.

3. Privacy Policy Discrepancies

Each AI tool has its own privacy and data retention policies. How many employees read these before using the AI tools? To make matters worse, these policies may evolve. Entrusting employees to navigate these complexities independently can lead to compliance challenges down the line. 

Robust third-party due diligence must assess AI applications to mitigate potential privacy policy discrepancies and safeguard organizational data.

4. Vulnerabilities to Prompt Injection Attacks

AI tools based on large language models (LLMs) are susceptible to prompt injection attacks, where malicious inputs cause them to behave unexpectedly. This vulnerability poses a significant threat as AI systems gain more autonomy and agency in organizational environments. For instance, an AI email application could inadvertently disclose sensitive information or facilitate account takeovers, potentially compromising critical assets and operations.

5. Implications for Consumer Privacy

Lastly, the risks associated with shadow AI extend beyond internal operations to encompass consumer privacy concerns. Organizations must consider the ramifications of exposing customer data or intellectual property to unauthorized AI tools. 

How To Manage the Risks of Shadow AI

The focus has to be on visibility, risk management, and strategic decision-making.

To mitigate the risks and ethical dilemmas of shadow IT, particularly in the context of AI, here are some steps you can take:

  1. Establish Policies and Governance Strategies

Create a centralized AI implementation and governance strategy. Executive leadership should be actively involved in defining use cases, working with IT to ensure secure access, and establishing data protection protocols. This helps enforce accountability and ensures consistency across the organization.

  1. Data and Use Case Classification

Classify data and use cases based on their sensitivity and importance. Identify data that must be protected, such as trade secrets, sensitive processes, personally identifiable information, etc. Avoid exposing such data to public or hosted private cloud AI offerings. Reserve AI usage for solutions where you retain complete control or opt for enterprise-ready AI solutions with stringent data protection measures

  1. Keep it Local

Consider adding AI capabilities to your data rather than sending it to external AI platforms. This approach offers advantages regarding data control, security, and compliance. Keeping data on-premises or within controlled environments ensures secure access and minimizes the risk of unauthorized access or breaches.

  1. Educate Employees

Provide employees with training and education about the risks associated with shadow IT and the importance of following organizational protocols for AI usage. Encourage employees to report any instances of unauthorized AI usage or data exposure. Emphasize ethical AI considerations and the potential consequences of uncontrolled AI usage.

  1. Implement Access Controls

Implement access controls and monitoring mechanisms to track AI usage within the organization. Use identity and access management solutions to restrict access to AI tools and platforms based on user roles and responsibilities.

  1. Regular Audits and Assessments

Regular audits and assessments should be conducted to identify any instances of shadow AI usage or potential security vulnerabilities. Use penetration testing and vulnerability scanning tools to identify and address security risks proactively.

  1. Collaborate with IT and Security Teams

Foster collaboration between IT, security, and business teams to ensure alignment on AI usage policies, security protocols, and risk management strategies. This interdisciplinary approach helps address potential security gaps and ensure compliance with regulatory requirements.

  1. Stay Updated on Emerging Threats

Stay informed about emerging threats and vulnerabilities associated with AI technologies. To stay ahead of potential risks, monitor industry developments, participate in relevant forums, and collaborate with security experts.

We’ll Get There

As large language models (LLMs) expand, advances are being made daily. Like the evolution of cloud computing and DevOps, these LLM applications are expected to improve over time and become more secure as innovations drive enterprise-level progress.

2023 marked a surge in generative AI exploration as the world tried to understand this “new thing.” 

The focus for 2024 is expected to shift towards proactive AI governance. We’ve had a year of exploration, and the emphasis now lies on organizing ourselves to govern AI effectively in the new “alternate reality.”

As Max Tegmark, president of Future of Life Institute, which advocates for AI governance, remarked, “This business about AI’s loss of control is suddenly being taken seriously. People are realizing this isn’t a long-term thing anymore. It’s happening.”

Start Getting Value With
Centraleyes for Free

See for yourself how the Centraleyes platform exceeds anything an old GRC
system does and eliminates the need for manual processes and spreadsheets
to give you immediate value and run a full risk assessment in less than 30 days

Looking to learn more about Shadow AI Risks?
Skip to content