A Chat With Michael J Bannach, Founder & President of Stealth Technology Group, On How Employees Leak Company Secrets Into Chatbots – And What Safe, Approved AI Should Look Like

What are the most common ways employees accidentally leak sensitive company data into AI tools?


It’s very rarely down to malicious behaviour; it’s most often just convenience. Employees copy and paste content into public AI tools to save time, often without realising how much sensitive context is included, whether it’s client emails, contracts, or meeting notes. Another risk is browser-based AI extensions that silently interact with whatever is on screen or being typed. In both cases, data is shared outside company-controlled systems without any meaningful oversight or audit trail.


Why do so many employees assume it’s safe to paste work information into chatbots?


There’s a widespread misunderstanding of how generative AI tools actually work. Many people still see them as neutral productivity tools, similar to search engines or grammar checkers, rather than as external systems that process, and may store or learn from, whatever is entered, depending on configuration and policy. There is also an assumption that there’s no risk because no human sees it, which isn’t accurate. Even when data isn’t used to train the model, it can still be stored in system logs or account histories, and it may be exposed if tools are misconfigured, permissions are too open, or users accidentally share or reuse that information in the wrong place.


What types of company information are most at risk of being exposed through generative AI?


Any highly structured or copy-paste-friendly information is most at risk: contracts, pricing tables, customer records, financial data, internal strategy documents, HR records, and unreleased product information. Code repositories are also a major exposure point, because developers often use AI to debug or refactor code, which can inadvertently include proprietary logic, API keys, or system architecture details. Anything that combines identifiers, numbers, and business context tends to be particularly sensitive.
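
On the code point specifically, one practical mitigation is a lightweight scan for obvious secrets before a snippet leaves a developer’s machine. The sketch below is a minimal illustration in Python, not any particular product: the patterns and the scan_for_secrets helper are assumptions, and dedicated secret scanners use far larger rule sets.

```python
import re

# A few common key formats; real scanners cover many more.
SECRET_PATTERNS = {
    "AWS access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "Generic API key": re.compile(r"(?i)\b(api[_-]?key|secret)\s*[:=]\s*['\"][^'\"]{16,}['\"]"),
    "Private key block": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
}

def scan_for_secrets(snippet: str) -> list[str]:
    """Return the names of any secret patterns found in the snippet."""
    return [name for name, pattern in SECRET_PATTERNS.items()
            if pattern.search(snippet)]

code = 'aws_key = "AKIAABCDEFGHIJKLMNOP"  # pasted straight from a config file'
findings = scan_for_secrets(code)
if findings:
    print(f"Blocked: snippet contains {', '.join(findings)}")
```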


How big of a problem is the use of personal AI accounts or browser extensions for work tasks?


It is a significant and often underestimated governance gap. When employees use personal AI accounts, companies lose visibility and control over how data is stored and shared. Browser extensions can compound this risk by capturing page content, keystrokes, or prompts in real time. In regulated or high-risk industries, this creates compliance challenges because sensitive data may be processed outside approved environments.


What does “safe, approved AI” actually look like in practice for a company?


AI tools should only be deployed within a controlled enterprise environment with single sign-on, role-based access controls, data encryption, audit logs, and clear retention policies. It also means restricting public AI tools on managed devices or ensuring that any usage is sandboxed and monitored. Importantly, safe AI is not just a technical deployment but a policy framework that defines what data can and cannot be entered, along with ongoing employee training and enforcement.
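
To make that concrete at the plumbing level, here is a minimal sketch of the gateway pattern in Python. It is illustrative only: the role names, the ROLE_POLICY mapping, and the forward_to_approved_model placeholder are assumptions, standing in for whatever enterprise endpoint and identity system a company actually uses.

```python
import json
import logging
from datetime import datetime, timezone

# Hypothetical policy: which data classifications each role may submit.
ROLE_POLICY = {
    "engineer": {"public", "internal"},
    "analyst": {"public", "internal", "confidential"},
}

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_gateway.audit")

def submit_prompt(user: str, role: str, classification: str, prompt: str) -> str:
    """Gate a prompt behind a role check and write an audit record."""
    allowed = ROLE_POLICY.get(role, set())
    audit_log.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "role": role,
        "classification": classification,
        "allowed": classification in allowed,
    }))  # retained according to the company's retention policy
    if classification not in allowed:
        raise PermissionError(f"{role} may not submit {classification} data")
    return forward_to_approved_model(prompt)

def forward_to_approved_model(prompt: str) -> str:
    # Placeholder for the sanctioned, contract-backed enterprise AI endpoint.
    return "(model response)"
```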


Where are businesses getting it wrong when trying to set AI policies for staff?


Many organisations focus too heavily on restriction without offering practical alternatives. If employees are blocked from using AI tools but are not given a secure, approved option, they’re far more likely to bypass those controls for productivity reasons. Businesses also tend to write their policies in vague legal language that employees don’t fully understand in practice. An effective policy is specific, scenario-based, and integrated into daily workflows.


What simple steps can employees take today to reduce the risk of exposing sensitive data?


Always avoid pasting raw confidential information into public AI tools and instead use anonymised or summarised versions of content. Separate personal and work accounts, avoid installing unapproved AI extensions on work browsers, and check whether your organisation provides an approved AI platform before using any external services. Any AI-generated output should be reviewed carefully before being shared externally, particularly for unintended inclusion of sensitive context or inaccuracies.
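
For the anonymisation step in particular, even a simple redaction pass helps before anything is pasted into an external tool. The sketch below is a minimal Python illustration of that idea; the three patterns are deliberately crude assumptions, and production systems typically use dedicated PII-detection tooling.

```python
import re

# Minimal, illustrative rules; real PII detection is far broader than this.
REDACTIONS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\b\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}\b"), "[CARD]"),
    (re.compile(r"\+?\d[\d\s-]{8,}\d"), "[PHONE]"),  # card rule runs first
]

def anonymise(text: str) -> str:
    """Replace obvious identifiers so the gist survives but the detail does not."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

note = "Call Jane on +44 7700 900123 or email jane.doe@example.com re the invoice."
print(anonymise(note))
# -> Call Jane on [PHONE] or email [EMAIL] re the invoice.
```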