Complete Guide to Shadow AI: Risks & How to Stay in Control
What Is Shadow AI? Definition and Core Characteristics
Shadow AI is the use of AI tools by employees without the knowledge or approval of their organisation’s IT, security, or data governance teams. It is defined by a lack of formal oversight and invisible data transfers: tools are adopted without going through traditional procurement channels.
The knock-on effect is that these undocumented tools undergo less rigorous security reviews and vendor assessments. Shadow AI tools operate outside the usual security protocols and typically lack controls such as encryption or multi-factor authentication, leaving organisations exposed to the risks that come with that lack of protection.
Free-tier, public AI models often retain user inputs to train future models. With shadow AI, that can mean models being trained on confidential business data they should never have had access to.
How Shadow AI Emerges in Modern Organizations
Shadow AI emerges through bottom-up, employee-driven adoption. As AI features are increasingly added to new software applications, some employees may be using AI-enabled applications without even realising it.
Simultaneously, tech-savvy employees may be actively seeking out the latest AI models and using them in many aspects of their personal lives. If these employees want to increase their productivity at work, it is reasonable to assume that some will incorporate AI into their workflows with or without their employer’s permission, especially if their organisation is slow to adopt the latest tech.
Coupled with the lack of formal oversight, this means enthusiasts may be processing business data with the newest untested, insecure tools to paper over the cracks of slow procurement.
Where Shadow AI Spreads and Which Teams Are Most Affected
Shadow AI is particularly prevalent in the healthcare, financial services, and software development industries. A survey sponsored by Wolters Kluwer Health found that among 518 full-time healthcare professionals, almost 50% used unapproved AI tools for a faster workflow. 1 in 3 blamed this on a “lack of approved tools” or approved tools “lacking the desired functionality.”
The problem is even more acute in the financial services sector, where Zendesk reports that shadow AI use grew 250% year over year from 2024 to 2025. Software development also falls victim to shadow AI, with many developers admitting to using unauthorised IT tools to support their projects.
Within any given organisation, the teams most affected by shadow AI are typically sales and marketing, HR, and product and engineering. Sales teams often leverage unapproved prospecting platforms, while marketing teams are known to use AI to automate content and draft email responses.
Cledara describes sales and marketing teams as the biggest offenders, citing them as responsible for “65% of total unauthorised SaaS usage.” Human resources teams are also heavy users, sharing sensitive information with unauthorised AI for tasks such as resume analysis, predictive turnover analytics, and payroll automation. Engineering and product teams regularly rely on AI tools for assistance, sometimes pasting snippets of sensitive, proprietary data into public chatbots, as famously happened at Samsung.
Security, Privacy, and Compliance Risks of Shadow AI
The security, privacy, and compliance risks of shadow AI are wide-ranging and compounding. Because these tools evade formal procurement protocols, they create operational blind spots: the “unknown unknowns.”
Starting with security, shadow AI risks data leakage by creating an irreversible transfer of data that public models may retain and use to train future models. The lack of basic enterprise security measures expands the attack surface, giving malicious actors more opportunities to access proprietary data. There is also the threat of credentials and secrets, such as passwords and API keys, being pasted into conversations, saved in provider logs, and exploited or sold.
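To make the secrets risk concrete, here is a minimal sketch of the kind of pre-submission check a data loss prevention tool might run before a prompt leaves the network. The patterns and the scrub_prompt helper are illustrative assumptions, not any specific vendor’s implementation.

```python
import re

# Illustrative patterns only; real DLP tooling uses far broader rule sets.
SECRET_PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "Generic API key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S{16,}"),
}

def scrub_prompt(text: str) -> tuple[str, list[str]]:
    """Redact likely secrets from a prompt and report what was found."""
    findings = []
    for name, pattern in SECRET_PATTERNS.items():
        if pattern.search(text):
            findings.append(name)
            text = pattern.sub("[REDACTED]", text)
    return text, findings

if __name__ == "__main__":
    prompt = "Debug this: api_key = sk_live_abcdefgh12345678 fails on login"
    clean, hits = scrub_prompt(prompt)
    print(clean)  # the key and its assignment are replaced with [REDACTED]
    print(hits)   # ['Generic API key']
```

Even a simple redaction pass like this shows how secrets can be caught before they ever reach a provider’s logs.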
The privacy risks of shadow AI are severe: what can feel like a discreet conversation between an employee and an AI assistant can easily become a source of permanent data exposure. In the healthcare, HR, or financial industries, this can mean personally identifiable information (PII) or protected health information (PHI) being exposed without an employee’s knowledge or a patient’s or client’s consent.
As for compliance and legal risks, organisations found in breach of regulations such as GDPR or HIPAA can face substantial fines. Engineers who use shadow AI with public chatbots risk leaking trade secrets, while people who use image or text models to generate outputs may find themselves unable to copyright AI-created work and powerless to stop competitors from replicating it.
Why Shadow AI Is Hard to Detect and Control
Detecting shadow AI usage through unauthorised accounts is difficult for organisations for many reasons. Unlike shadow IT, which primarily involves the installation of unauthorised software, shadow AI is harder to detect and control because it centres on sensitive, often high-stakes, data transactions. Existing security tools, such as firewalls, data loss prevention (DLP), and security information and event management (SIEM) solutions, are not well equipped to answer the question of how, when, and where AI tools are being used with company data.
With so much enterprise AI usage happening through personal accounts, users often leave no trace in enterprise software registers. Additionally, when operating as browser plugins, AI tools can overlay existing, approved applications, allowing them to scrape data without installing anything new and helping them evade standard endpoint monitoring. AI models can also run locally or agentically, generating little or no external network traffic.
Even in cases where shadow AI can be identified, controlling it is difficult. According to a ManageEngine survey, 85% of IT decision-makers say employees are adopting AI tools before they can be properly assessed by their IT teams.
Once AI is integrated into an employee’s workflow, it may be very difficult to convince them to stop. Some employees may be so reliant on AI tools that they cannot meet the standard of productivity or quality expected of them without them. A 2025 BCG survey found that 54% of respondents would use AI tools even if not authorised by their company, demonstrating how difficult it is to break this behaviour once shadow AI has taken hold.
Strategies to Monitor and Reduce Shadow AI in the Workplace
To effectively manage shadow AI, organizations should transition from a reactive approach, where they merely try to block instances of shadow AI as they occur, to a proactive strategy that balances visibility, governance, and employee empowerment. Reduction strategies should aim to give employees viable alternatives, so they still have access to state-of-the-art AI tools but use them as safely and securely as possible.
IT teams should monitor outbound network traffic for spikes to AI service domains, and audit SSO/identity logs to flag unsanctioned account sign-ups. Expense reports can be a useful source for uncovering hidden subscriptions, while browser extension inventories can shine a light on subtle AI-enabled plugins that may be harvesting data.
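As a rough sketch of what that traffic monitoring could look like, the script below counts requests to known AI service domains in a simple proxy log. The log format, file name, and domain watchlist are all assumptions for illustration; a real deployment would draw on SIEM or secure web gateway data and maintain a much larger, regularly updated domain list.

```python
from collections import Counter

# Assumed watchlist; extend with the AI services relevant to your environment.
AI_DOMAINS = {"chat.openai.com", "claude.ai", "gemini.google.com"}

def flag_ai_traffic(log_path: str) -> Counter:
    """Count requests per (user, domain) in a space-delimited proxy log
    with lines of the form: <timestamp> <user> <domain> <bytes_out>."""
    hits = Counter()
    with open(log_path) as log:
        for line in log:
            parts = line.split()
            if len(parts) < 4:
                continue  # skip malformed lines
            _, user, domain, _ = parts[:4]
            if domain in AI_DOMAINS:
                hits[(user, domain)] += 1
    return hits

if __name__ == "__main__":
    for (user, domain), count in flag_ai_traffic("proxy.log").most_common():
        print(f"{user} -> {domain}: {count} requests")
```

A report like this is a conversation starter, not a verdict: the goal is to route heavy users towards sanctioned alternatives rather than to punish them.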
Good governance gives employees all the information they need to identify the risks of careless AI usage, plus a clear definition of acceptable and unacceptable uses of AI in their job function. Initiatives such as highlighting dubious edge cases, where it might not be immediately obvious whether using AI would be an issue, can be valuable conversation starters that close knowledge gaps at the source before they evolve into serious problems.
Providing sanctioned toolkits of enterprise-grade alternatives, especially if they are faster, smarter, more secure, and have higher limits, can be a great incentive for users to leave their personal plans behind and work within the approved boundary. It is important to understand how tech-savvy employees are using AI in their day-to-day work and enable that usage rather than fight it outright.
How 1Password Helps Organizations Gain Visibility and Control Over Shadow AI
1Password helps organisations reclaim control of their data and gain visibility into employee tool use through its Extended Access Management platform. Instead of attempting to prevent employees from using their preferred tools, which encourages them to seek workarounds, 1Password’s Extended Access Management framework gives IT teams the ability to monitor which AI tools are in use and direct employees towards authorized, secure alternatives.
Features like Device Trust, which can detect personal and corporate AI usage at the browser level, and Secure authentication for AI Agents mean that users never have to grant persistent access to proprietary data. 1Password can also help educate users who may not be aware of the severity of their actions, or of the approved AI alternatives already available to them, so they can keep the productivity gains they have come to love.
FAQ
What is an example of invisible AI?
Invisible AI refers to machine learning features embedded within standard business suites that operate without the user’s direct interaction, such as the automated junk mail sorting in Microsoft 365. It also appears as background utilities in approved enterprise tools that surface suggestions or insights without being the software’s main focus.