-Content by CyberNewswire-
The 2025 State of AI Data Security Report reveals a widening contradiction in enterprise security: AI adoption is nearly universal, yet oversight remains limited. Eighty-three percent of organisations already use AI in daily operations, but only 13 percent say they have strong visibility into how these systems handle sensitive data.
Produced by Cybersecurity Insiders with research support from Cyera Research Labs, the study reflects responses from 921 cybersecurity and IT professionals across industries and organisation sizes.
The data shows AI increasingly behaving as an ungoverned identity: a non-human user that reads faster, accesses more, and operates continuously. Yet most organisations still apply human-centric identity models that break down at machine speed. As a result, two-thirds have caught AI tools over-accessing sensitive information, and 23 percent admit they have no controls over prompts or outputs.
Autonomous AI agents stand out as the most exposed frontier. Seventy-six percent of respondents say these agents are the hardest systems to secure, while 57 percent lack the ability to block risky AI actions in real time. Visibility remains thin: nearly half report no visibility into AI usage, and another third say they have only minimal insight, leaving most enterprises unsure where AI is operating or what data it touches.
Governance structures lag behind adoption as well. Only 7 percent of organisations have a dedicated AI governance team, and just 11 percent feel prepared to meet emerging regulatory requirements, underscoring how quickly readiness gaps are widening.
The report calls for a shift toward data-centric AI oversight with continuous discovery of AI use, real-time monitoring of prompts and outputs, and identity policies that treat AI as a distinct actor with narrowly scoped access driven by data sensitivity.
“AI is no longer just another tool, it’s acting as a new identity inside the enterprise, one that never sleeps and often ignores boundaries,” said Holger Schulze of Cybersecurity Insiders. “Without visibility and robust governance, enterprises will keep finding their data in places it was never meant to be.”
As the report cautions: “You cannot secure an AI agent you do not identify, and you cannot govern what you cannot see.”
-This is a paid press release published via CyberNewswire-