
84% Of AI Web Tools Show Breaches And Encryption Gaps, Says BDI

Two weeks ago, thousands of ChatGPT conversations appeared in Google search results. These were real chats, with prompts such as “how to manage depression without medication” or “how to draft a resignation letter”. Perplexity.ai reported that Google had indexed these shared ChatGPT links. The pages came from OpenAI’s own sharing feature.

Scalevise explained how it happened. A user clicked Share in ChatGPT. The link was public, carried no access controls, and was not blocked from search crawlers. Google indexed it like any other page. There were no authentication walls or expiry dates.
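
Scalevise’s description points to an absent safeguard rather than a break-in: nothing on the shared pages told crawlers to stay away. As a rough illustration (not anything OpenAI published), the Python sketch below checks whether a public URL carries a “noindex” signal in either an X-Robots-Tag response header or a robots meta tag; it assumes the requests library is available, and the URL is a placeholder.

    # Sketch: does a public page ask search engines not to index it?
    # The shared chat pages reportedly carried no such signal, so crawlers
    # treated them like any other public URL. The URL below is a placeholder.
    import requests

    def has_noindex(url: str) -> bool:
        response = requests.get(url, timeout=10)

        # Signal 1: an X-Robots-Tag response header containing "noindex".
        if "noindex" in response.headers.get("X-Robots-Tag", "").lower():
            return True

        # Signal 2: a robots meta tag with "noindex" (a crude string check,
        # good enough for a sketch).
        html = response.text.lower()
        return 'name="robots"' in html and "noindex" in html

    if __name__ == "__main__":
        print(has_noindex("https://example.com/shared-conversation"))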

OpenAI retired the sharing feature and worked with search engines to remove the indexed content, which closed the immediate hole. The privacy lesson, however, runs deeper than one feature.


What Do Security Audits Say About AI Providers?


The Business Digital Index (BDI) team examined 10 leading large language model providers. Half of the group received an A grade, while the rest scored lower: OpenAI received a D and Inflection AI an F. BDI also found that half of these providers had documented breaches.

Every provider in the BDI sample had SSL or TLS weaknesses. Most had hosting infrastructure issues. Only AI21 Labs and Anthropic avoided major problems in that area, according to BDI.
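
To give a sense of what that kind of encryption grading looks at, here is a brief Python sketch (not BDI’s tooling) that connects to a host and reports the negotiated TLS version and certificate expiry; the host name is a placeholder.

    # Sketch: snapshot the TLS posture of a public endpoint, the sort of
    # surface a security rating examines. "example.com" is a placeholder.
    import socket
    import ssl

    def tls_snapshot(host: str, port: int = 443) -> None:
        context = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=10) as raw_sock:
            with context.wrap_socket(raw_sock, server_hostname=host) as tls_sock:
                print("Negotiated protocol:", tls_sock.version())   # e.g. "TLSv1.3"
                cert = tls_sock.getpeercert()
                print("Certificate expires:", cert.get("notAfter"))

    if __name__ == "__main__":
        tls_snapshot("example.com")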

Password safety was weak in places. BDI found credential reuse at Perplexity AI and EleutherAI, measuring 35% of Perplexity AI staff and 33% of EleutherAI staff using passwords that had appeared in past breaches.
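
Screening for exactly this kind of reuse is straightforward. As an illustration (not BDI’s methodology), the Python sketch below checks a password against the public Have I Been Pwned range API using k-anonymity, so only the first five characters of the password’s SHA-1 hash ever leave the machine.

    # Sketch: count how often a password appears in known breach corpora,
    # via the Have I Been Pwned "Pwned Passwords" range API (k-anonymity:
    # only a five-character hash prefix is sent over the network).
    import hashlib
    import requests

    def breach_count(password: str) -> int:
        sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
        prefix, suffix = sha1[:5], sha1[5:]

        resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
        resp.raise_for_status()

        # Each response line is "HASH_SUFFIX:COUNT"; match our suffix locally.
        for line in resp.text.splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
        return 0

    if __name__ == "__main__":
        print(breach_count("password123"))  # expect a non-zero count for a common password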

BDI then looked at 52 popular AI web tools. It found that 84% had experienced at least one data breach. It recorded 51% with stolen corporate credentials, 93% with SSL or TLS misconfigurations, and 91% with hosting weaknesses tied to weak cloud security or outdated servers.


Why Are Workplace Tools At Higher Risk?


Within that sample, BDI said productivity platforms were the least secure. These are the note-taking apps, schedulers and content tools used every day. BDI reported that every productivity tool it assessed had hosting and encryption flaws.

Adoption in offices is high: BDI found that around 75% of employees use AI for work tasks, while only 14% of organisations have formal AI policies. Nearly half of sensitive prompts are entered through personal accounts, which bypasses company oversight.

Secrecy makes matters worse, with BDI reporting that a large share of users hide their AI use from managers. That creates blind spots where risky prompts can pass outside normal checks.

Žilvinas Girėnas, head of product at nexos.ai, says, “This isn’t just about one tool slipping through. Adoption is outpacing governance, and that’s creating a freeway for breaches to escalate. Without enterprise-wide visibility, your security team can’t lock down access, trace prompt histories, or enforce guardrails.

“It’s like handing the keys to the kingdom to every team, freelancer, and experiment. A tool might seem harmless until you discover it’s leaking customer PII or confidential strategy flows. We’re not just talking theory — studies show 96% of organisations see AI agents as security threats, while barely half can say they have full visibility into agent behaviours.”

Many breaches go unreported or unnoticed even when they cause more harm than a visible index of chats. Without firm action, the next leak could be broader, deeper, and harder to contain.
