With AI increasingly used for grid optimisation, predictive maintenance, and anomaly detection, Dario Perfettibile, General Manager, EMEA GTM & Customer Operations at Kiteworks, examines why the energy sector needs to prepare for the operational consequences now.
The energy industry faces a threat landscape unlike that of most other sectors. Nation-state actors have already demonstrated both the intent and the capability to target critical energy infrastructure. When attacks succeed, the consequences can extend beyond data compromise to physical disruption, including power outages, pipeline interruptions, and grid instability.
AI systems are increasingly being embedded in these environments for predictive maintenance, load balancing, grid optimisation, and anomaly detection. As a result, they are becoming high-value targets with potentially high-consequence operational implications. Against that backdrop, the Data Security and Compliance Risk: 2026 Forecast Report reveals that just 9% of energy organisations conduct AI red-teaming exercises. In a sector that is already a known target for sophisticated adversaries, that suggests most organisations have yet to test their AI systems against realistic attack scenarios before those systems affect live operations.
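To make that concrete: a red-teaming exercise does not have to start with elaborate tooling. The following sketch, a hypothetical black-box probe in Python, checks whether small, individually plausible perturbations to sensor inputs can shift a forecasting model's output by an operationally meaningful amount. The model, data, and threshold are illustrative assumptions, not a description of any particular deployment.

    # Hypothetical black-box robustness probe of the kind an AI red-team
    # exercise might run against a grid load-forecasting model. The model
    # below is a stand-in; a real exercise would target the production
    # model through its serving interface.
    import numpy as np

    rng = np.random.default_rng(seed=42)

    def forecast_model(sensor_readings: np.ndarray) -> float:
        """Stand-in for a deployed load-forecasting model (assumption)."""
        weights = np.linspace(0.5, 1.5, sensor_readings.size)
        return float(sensor_readings @ weights)

    baseline = rng.normal(loc=100.0, scale=5.0, size=24)  # 24 hourly readings
    baseline_output = forecast_model(baseline)

    worst_shift = 0.0
    for _ in range(1000):
        # Perturbations kept small enough to pass naive per-sensor range checks.
        perturbation = rng.uniform(-1.0, 1.0, size=24)
        shift = abs(forecast_model(baseline + perturbation) - baseline_output)
        worst_shift = max(worst_shift, shift)

    # Escalate if crafted-but-plausible inputs can move the forecast by more
    # than an operationally meaningful margin (here, 2% of baseline output).
    threshold = 0.02 * abs(baseline_output)
    print(f"worst-case shift: {worst_shift:.2f} (threshold {threshold:.2f})")
    if worst_shift > threshold:
        print("Forecast is sensitive to small perturbations; escalate for review.")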
The report indicates that energy's AI security gaps are significant and interconnected. The sector has invested heavily in compliance-oriented point controls, leading globally in dataset access controls (50%, against a 35% global average), privacy impact assessments (41% versus 25%), and isolated training environments (36% versus 26%). However, it lags in the more centralised capabilities needed to detect and respond to sophisticated threats. Adoption of AI data gateway technology stands at 18%, 17 points below the global average. AI-specific incident playbooks exist in only 14% of energy organisations, 13 points below the norm. Encryption of AI training data sits at 27%, a 12-point deficit.
In practice, this matters because energy organisations have historically built strong controls around individual assets and systems, reflecting decades of operational experience across generation, transmission, and distribution. But AI-related threats do not necessarily conform to those boundaries. Adversaries targeting critical infrastructure are likely to look for opportunities across systems and workflows, exploiting the spaces between point controls. Without more centralised visibility across grid management, pipeline operations, and maintenance AI, organisations may struggle to identify coordinated activity before it has operational consequences.
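A deliberately simple sketch illustrates the idea: alerts from separate AI systems, each minor in isolation, are grouped in time, and any window that spans more than one system is escalated. The system names, events, and 30-minute window below are illustrative assumptions.

    # Minimal cross-system correlation sketch: escalate when alerts from
    # different AI systems land within a shared time window. Systems and
    # events are hypothetical.
    from datetime import datetime, timedelta

    alerts = [
        ("grid_management_ai", datetime(2026, 3, 1, 2, 14), "forecast deviation"),
        ("pipeline_ops_ai",    datetime(2026, 3, 1, 2, 21), "sensor drift"),
        ("maintenance_ai",     datetime(2026, 3, 1, 2, 29), "unexpected retrain request"),
        ("grid_management_ai", datetime(2026, 3, 2, 9, 5),  "forecast deviation"),
    ]

    WINDOW = timedelta(minutes=30)

    alerts.sort(key=lambda a: a[1])
    for i, (system, ts, _event) in enumerate(alerts):
        # Gather later alerts from *other* systems inside the window.
        related = [a for a in alerts[i + 1:] if a[1] - ts <= WINDOW and a[0] != system]
        if related:
            involved = sorted({system} | {a[0] for a in related})
            print(f"{ts}: correlated activity across {involved} - escalate")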
The governance gap compounds the detection gap. Fewer than a third of energy boards give dedicated attention to AI governance. Because board attention shapes resource allocation, the investment needed to address red-teaming, monitoring, and incident response may struggle to compete with more established priorities. The result is that AI in critical infrastructure remains comparatively under-governed.
The incident response shortfall is also notable. With only 14% of energy organisations maintaining AI-specific incident playbooks, many would be developing their response in real time if an AI compromise occurred. In operational environments, that can increase dwell time and give adversaries more opportunity to study processes, identify weak points, and establish persistence. Without documented playbooks and practised response procedures, even a containable incident risks becoming a more disruptive operational event.
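One way to close that gap is to treat playbooks as concrete, maintained artefacts rather than documents written during a crisis. The sketch below shows, in illustrative Python, the kind of structure an AI-specific playbook might capture for a single scenario; the scenario, phases, and owners are assumptions chosen for illustration, not a prescribed standard.

    # Illustrative structure for an AI-specific incident playbook, so that
    # response steps exist before an incident rather than being improvised
    # during one. Scenario, phases, and owners are hypothetical.
    from dataclasses import dataclass, field

    @dataclass
    class PlaybookStep:
        phase: str   # e.g. contain, investigate, recover
        action: str
        owner: str

    @dataclass
    class AIIncidentPlaybook:
        scenario: str
        steps: list[PlaybookStep] = field(default_factory=list)

    poisoned_model = AIIncidentPlaybook(
        scenario="Suspected poisoning of the predictive-maintenance model",
        steps=[
            PlaybookStep("contain", "Switch affected workflows to manual review", "operations lead"),
            PlaybookStep("contain", "Freeze the model version and its training pipeline", "ML platform team"),
            PlaybookStep("investigate", "Diff training data against the last known-good snapshot", "security team"),
            PlaybookStep("recover", "Retrain from verified data and validate before redeploy", "ML platform team"),
        ],
    )

    for step in poisoned_model.steps:
        print(f"[{step.phase}] {step.owner}: {step.action}")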
Training data security is another important part of the picture. Energy continues to lag on encryption of training data, which matters given the nature of the datasets involved: historical grid load patterns, demand forecasting models, equipment failure signatures, and predictive maintenance records. For a sophisticated adversary, that information could provide insight into how infrastructure operates, where weaknesses may lie, and how operators typically respond to anomalies. The sector performs reasonably well on data minimisation (45%), but if training data is left unencrypted it may still be exposed once perimeter defences are breached. And only 18% of energy organisations can trace training data provenance, raising further questions about the security classification and sensitivity of the data used to train models.
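Closing the encryption gap does not require exotic technology. The sketch below applies baseline at-rest encryption to a training dataset using the Fernet construction from the open-source cryptography package; the dataset contents and output path are placeholders, and in practice the key would come from a managed key store or HSM rather than being generated alongside the data.

    # Baseline at-rest encryption for a sensitive training dataset, using
    # the `cryptography` package (pip install cryptography). Dataset and
    # path are placeholders.
    from cryptography.fernet import Fernet

    # In production, fetch the key from a key-management service or HSM;
    # never store it alongside the encrypted data.
    key = Fernet.generate_key()
    fernet = Fernet(key)

    # Stand-in for sensitive training records, e.g. historical grid load data.
    training_rows = b"timestamp,feeder_id,load_mw\n2026-03-01T02:00,F117,482.6\n"

    ciphertext = fernet.encrypt(training_rows)  # encrypt before it touches shared storage
    with open("grid_load_history.enc", "wb") as f:  # hypothetical output path
        f.write(ciphertext)

    # Training jobs decrypt in memory, inside the isolated training environment.
    assert fernet.decrypt(ciphertext) == training_rows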
For a sector facing capable and patient adversaries, limited visibility into unexpected AI behaviour can create long exposure windows. Nation-state actors may compromise systems and remain dormant for extended periods before acting. Where centralised monitoring of AI activity is limited, that compromise may not become visible until the effects move beyond the digital environment and into operations.
The immediate priority is to close these architectural blind spots. AI red-teaming needs to become a more established part of security practice in the sector. More centralised monitoring and oversight should help organisations correlate signals across distributed systems in ways that reflect the interconnected nature of critical infrastructure. AI-specific incident response playbooks should be developed around credible energy-sector threat scenarios, with tabletop exercises used to test them. At board level, AI governance needs to move closer to the same tier of attention already given to security and regulatory compliance. And where AI training data contains sensitive operational intelligence, encryption should be treated as a baseline control.
Energy organisations cannot assume that broader industry norms will mature quickly enough to close the gap for them. The sector has largely approached AI governance in the same way it has approached traditional infrastructure: through controls applied at the level of individual assets and systems. But adversaries targeting AI in critical infrastructure are likely to exploit the gaps between those controls. In that context, centralised visibility, adversarial testing, and rehearsed incident response are not simply matters of good practice; they are increasingly relevant to operational resilience.