By Remi Ramcharan, Vice President, Senkron Digital
“Should we use AI for this?”, “AI can fix that, right?”, “We’ll get AI to figure this out, won’t we?” These are questions I hear almost every week in relation to all manner of things in the energy and critical industry sectors.
These questions are usually asked with genuine enthusiasm for AI, but they reveal that the use of this technology is often misunderstood. They show that it is still perceived as a silver bullet, or perhaps more problematically, a ‘fit-for-every-purpose’ bolt-on that can be plugged in, switched on and left to its own devices.
I don’t say this to criticise, as it’s understandable given the rate at which AI has developed, but it’s at odds with the reality: AI is only effective when it is integrated into the core of the relevant system (or systems) and managed deliberately.
Looking towards 2026, this distinction is more important than ever. And while relevant globally, I see it really coming to the forefront in the Middle East. AI and emerging technology will be fundamental to how critical industries that are vital to economies across the Gulf meet rising global demand, from smarter grids and predictive maintenance to optimised production in oil and gas.
While the AI debate is largely settled (it’s here to stay), the challenge is to shift mindsets from applying AI as a quick fix for a single set of processes to something that is truly embedded into operations, with trust and readiness among users and, perhaps most importantly, effective governance.
This trust, readiness and governance matter so much because while AI and emerging tech represent extraordinary opportunity for energy and industrial operators, they also dramatically expand the attack surface. Every new connection, data pathway or automated decision creates additional entry points for malicious actors, and when you consider that the UAE alone faces 200,000 cyberattacks daily, the scale of the risk comes into perspective.
The irony isn’t lost on me that the very technologies offering the greatest potential for improvement also introduce the greatest exposure. As a result, innovation and cyber resilience in the Middle East have to grow together. Innovation without resilience is fragile, while resilience without innovation leaves firms struggling to keep pace with competitors.
And this is why, from my perspective, 2026 is the year that cyber resilience becomes the new leadership imperative. For decades, leadership excellence in energy and critical industries has been measured by uptime, productivity, margins and safety. All of these measures still matter, of course, but cyber resilience is now the single capability that underpins the success of them all.
In these sectors, particularly given the Middle East’s role in global energy and materials supply, disruption goes beyond data breaches; it can halt production, destabilise entire value chains, and impact the essential resources societies rely on every day. As a result, how AI is being governed, integrated and trusted inside operational environments is crucially important. Without this, its speed and intelligence can just as easily amplify risk.
Let’s start with integrating AI effectively. A useful way to think about this is to picture modern energy and industrial operations as a human body: hugely complex, ever-changing and finely tuned. IT and OT systems now function a bit like interconnected organs, each with distinct roles, but dependent on, and connected to, one another.
AI, in the context of this analogy, shouldn’t be treated like an X-ray or an ECG, something you switch on when there’s a problem to be investigated. It should function more like the nervous system, continuously sensing, learning and responding, guided by human judgement and context.
If this is how AI is most effective, cybersecurity must operate the same way. Rather than working in silos, it should provide integrated protection, using the intelligence AI delivers, across an expanded attack surface that traditional security models aren’t equipped for.
For example, treating OT as a simple extension of IT is one of the fastest ways to undermine resilience. IT systems prioritise confidentiality and data integrity, while OT systems prioritise availability and safety.
AI models trained without respect for that distinction can misinterpret risk, generate unnecessary noise, or worse, trigger responses that operators cannot safely execute. This is why frameworks such as IEC 62443, the NIST Cybersecurity Framework and emerging regional regulations matter: they give AI the operational language it needs to be effective.
Trust then becomes the deciding factor. Operators don’t tend to place total trust in black boxes, especially when decisions carry such significant consequences.
AI systems that can’t explain why a risk was flagged or how a recommendation was created run the risk of being sidelined in the moments that matter most. Explainability is a key requirement for operational use, because in my experience, people use what they trust, and trust what they understand.
Governance and readiness are the final element, and something I often see underestimated. If an organisation’s first real cyber resilience test is a live attack, it can be too late to make important optimisations.
Proper resilience is built through repeated operational cyber drills that simulate real-world disruption. These exercises expose everything from integration gaps and unclear decision pathways to delays between digital insights and human decision-making. While AI can create new points of vulnerability, when used effectively it can also strengthen defences, but only when it’s treated as a living, tested component of the operation, not a bolt-on layer.
Taking all of this into account, it’s clear to me that the most successful energy and industrial leaders will be those that embed AI and cybersecurity as an operational discipline that safeguards productivity, margins and safety: governed, trusted and tested under pressure.
Encouragingly, nearly half of business leaders in the Middle East identified improving their organisation’s risk posture and following a cyber roadmap as a top investment priority this year. Those that get a grip on the task ahead will turn worry into resilience. Without doing this, intelligence doesn’t reduce risk; it accelerates it.