By Earl Shockley, President and CEO, INPOWERD LLC
Trust • Accountability • Service
The electric industry has never had a data problem. What we have struggled with, at times, is integration, disciplined execution, and clarity of judgment across that data. Artificial Intelligence (AI) is now being positioned as the solution to these problems.
That claim deserves a careful, experience-based look.
During my 40-plus years in the industry, I can recall many shifts and struggles to incorporate technology into what was considered, at the time, a comfortable and proper way of running the power grid. I clearly remember the business disruption when the internet, email, personal computers, and mobile devices were introduced into everyday operations. Those who were slow to embrace these technologies often found themselves behind the eight ball.
AI is now at the forefront of the technology discussion. I recently watched a podcast where a well-positioned “regulatory expert” described AI as a “big computer” and stated that “Artificial Intelligence will be an important part of the system in the future.” He opined, “With rising complexity and data volumes, the challenge is boiling it down to what truly matters.” For me, this was vague and not on point with the questions and concerns in our industry. I was expecting a deeper conversation about how AI can deliver real value to the energy industry and the risks we should be aware of.
In this blog, I do not plan to dive into a technical rabbit hole, but I will discuss how AI, as one layer of a broader transformation, can both benefit and challenge the energy industry.
First and foremost, I understand it to be a force multiplier for engineering judgment, operational discipline, risk management, and sustainable regulatory compliance, not a replacement for them.
AI can deliver real value to the energy industry, specifically in reliability, risk management, and operational awareness. On the other side of the coin, it can also introduce new forms of systemic risk that are poorly understood, weakly governed, and culturally misaligned with how electric reliability is achieved. Like many powerful tools introduced into high-consequence environments, AI is neither inherently good nor inherently bad; it is an amplifier. What it amplifies depends entirely on leadership, culture, and governance.
The Obvious Promise: Predictive Analytics and Asset Risk
For me, predictive analytics is the most obvious and mature AI use case, and it is a legitimate one. I was on an advisory board with a start-up company focused on this technology, and what I learned reinforced both its value and its limitations.
AI can identify patterns across condition monitoring data, operating history, maintenance records, and environmental inputs that humans simply cannot process at scale. Used properly, this capability can help organizations identify early signs of asset degradation, prioritize maintenance based on actual risk rather than fixed intervals, inform capital investment decisions, and reduce the likelihood that latent equipment issues escalate into forced outages. Areas of real value include asset health indicators, failure precursors in transformers, breakers, turbines, and protection systems, abnormal operating trends that precede outages, and risk trajectories that support more disciplined maintenance and capital prioritization.
But here is the critical distinction that often gets lost. AI does not predict failures. It identifies probabilities, trends, and changing risk profiles. That difference matters.
Reliability is not achieved by prediction. It is achieved by judgment, prioritization, and timely decisions made by accountable professionals. Predictive analytics can highlight where risk is increasing and where attention should be focused, but it cannot determine acceptable risk, select tradeoffs, or decide when intervention is warranted. Those decisions remain human responsibilities. When organizations treat predictive outputs as answers rather than inputs, they risk replacing disciplined asset management with false confidence.
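To make that distinction concrete, here is a deliberately simplified sketch, in Python, of what a trend-based risk indicator might look like. The readings, the window, and the threshold are all invented for illustration; real asset analytics draw on far richer condition-monitoring, maintenance, and operating-history data.

```python
from statistics import mean, stdev

# Hypothetical monthly dissolved-gas readings (ppm) for one transformer.
# Real condition-monitoring data would come from the fleet historian.
readings = [310, 305, 322, 318, 340, 355, 362, 390, 410, 455]

def risk_trend(values, window=4):
    """Return a simple rising-risk indicator: how far the recent window
    sits above the longer-term baseline, in baseline standard deviations."""
    baseline = values[:-window]
    recent = values[-window:]
    spread = stdev(baseline) or 1.0
    return (mean(recent) - mean(baseline)) / spread

score = risk_trend(readings)
# The output is a trend indicator, not a failure prediction; an engineer
# still decides whether and when to intervene.
if score > 2.0:
    print(f"Rising risk profile (score {score:.1f}); flag for engineering review")
else:
    print(f"No significant trend (score {score:.1f})")
```

The point of the sketch is the last step: the output is a risk signal that informs an engineering review, not a prediction that replaces it.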
Used correctly, predictive analytics strengthens asset risk awareness and supports better decisions. Used incorrectly, it erodes accountability and obscures the judgment that reliability ultimately depends on.
Operational Support - Not Operational Control
In grid operations, AI’s most appropriate role is decision support, not decision ownership. Reliability in the electric industry has always rested on clearly defined accountability, even under conditions of uncertainty, stress, and incomplete information. Operators, engineers, cybersecurity professionals, compliance leaders, and executives remain responsible for the decisions they make and the outcomes that follow.
Under the NERC Reliability Standards, qualified System Operators remain accountable for real-time decisions, regardless of what any tool suggests. AI may inform, but it cannot own, authorize, or excuse operational decisions. Accountability does not transfer to algorithms.
Used correctly, AI can materially improve operational awareness by reducing cognitive overload, accelerating contingency analysis, highlighting anomalies, and helping operators and engineers evaluate “what if” scenarios more efficiently.
In real-time operations, cognitive overload occurs when the volume, velocity, and complexity of information exceed an operator’s ability to process it effectively under time pressure. This is not a training issue or an intelligence issue. It is a human limitation that exists in every high-consequence control room. System Operators are required to simultaneously monitor frequency, voltage, line loadings, contingency limits, alarms, switching activities, weather impacts, generation behavior, load variability, and communications across multiple organizations. During abnormal or emergency conditions, that information increases rapidly while decision time shrinks. The risk is not that operators lack data. The risk is that critical signals become buried in noise, forcing operators to triage information rather than evaluate the system holistically.
AI can improve operational awareness by helping filter, prioritize, and contextualize information before it reaches the operator. Instead of presenting raw alarms, telemetry, and model outputs as independent data points, AI can highlight what has changed, why it matters, and how it relates to known risk conditions. For example, it can identify combinations of conditions that historically precede instability, suppress nuisance alarms, surface abnormal trends that are not yet violations of limits, and group related system behaviors into coherent patterns rather than isolated alerts that continuously roll across the alarm page.
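As a simple illustration of that filtering and grouping idea, here is a minimal sketch in Python. The alarm records, the “chattering” flag, and the risk pattern are all hypothetical; a real control-room tool would work from EMS/SCADA telemetry and validated operating history.

```python
from collections import defaultdict

# Hypothetical raw alarm stream for illustration only.
alarms = [
    {"station": "STN_A", "signal": "line_loading_high", "chattering": False},
    {"station": "STN_A", "signal": "voltage_low", "chattering": False},
    {"station": "STN_B", "signal": "comm_failure", "chattering": True},
    {"station": "STN_B", "signal": "comm_failure", "chattering": True},
    {"station": "STN_A", "signal": "breaker_trip", "chattering": False},
]

# Combinations that, in this illustration, have historically preceded trouble.
RISK_PATTERNS = [{"line_loading_high", "voltage_low", "breaker_trip"}]

def triage(alarm_stream):
    """Suppress chattering alarms, group the rest by station, and flag
    stations whose combined signals match a known risk pattern."""
    grouped = defaultdict(set)
    for a in alarm_stream:
        if not a["chattering"]:          # suppress nuisance/chattering alarms
            grouped[a["station"]].add(a["signal"])
    flagged = [s for s, signals in grouped.items()
               if any(pattern <= signals for pattern in RISK_PATTERNS)]
    return grouped, flagged

grouped, flagged = triage(alarms)
print("Grouped signals:", dict(grouped))
print("Stations matching a known risk pattern:", flagged)
# The operator, not the tool, decides what action (if any) to take.
```

Even in this toy form, the design choice matters: the tool groups and flags, and the operator decides.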
The value is not speed for its own sake. The value is clarity. By reducing the mental effort required to interpret fragmented data, AI can help operators and engineers preserve cognitive capacity for what humans do best: judgment, prioritization, and decision making under uncertainty. It does not replace situational awareness. It helps protect it.
Planning in an Uncertain Environment
Planning used to be hard because the system was complex. Planning today is hard because the future is unstable. For most of my career, long-term planning relied on assumptions that moved slowly and predictably. Load growth was incremental. Generation portfolios evolved over decades. Customer behavior was well understood. Regulatory expectations were relatively stable. That planning environment no longer exists.
Today’s planning landscape is being reshaped by forces that are evolving faster than traditional planning cycles can absorb. Electrification, the shift of end uses historically served by fossil fuels over to electricity as the primary energy source, is altering the shape, timing, and geographic concentration of load. Large data centers are creating step changes in demand that appear quickly, cluster in limited areas, and require extreme reliability. Electric vehicle adoption introduces a new category of demand that is highly location-specific and behavior-driven. Extreme weather is no longer just an operating condition. It is now a central planning assumption. Policy and regulatory shifts introduce discontinuity in resource mixes, development timelines, and cost recovery expectations. Distributed energy resources and customer-side behavior add uncertainty to net load, voltage control, and protection coordination.
AI can be particularly effective when addressing planning problems that involve large volumes of data, weak signals, and many interacting variables. It can integrate data sources that have traditionally been analyzed separately, including weather patterns, local economic activity, EV adoption trends, building permits, Distributed Energy Resources (DER) deployment, and customer behavior. This allows planners to develop forecasts with greater context rather than relying on narrow historical extrapolation.
This is where AI can add value when used with discipline.
· Build better forecasts with more context - AI can integrate weather patterns, local economic activity, EV registration trends, building permits, DER adoption, and customer behavior in ways traditional forecasting models struggle to do at scale.
· Evaluate many scenarios faster than human teams can - AI can run thousands of combinations of assumptions, sensitivities, and constraints quickly, allowing planners to explore extremes, not just “the base case.”
· Stress-test plans against emerging risks - AI can help identify brittle planning assumptions by testing what happens if multiple conditions occur at once, for example, extreme heat, delayed transmission, and rapid load growth in the same region.
· Identify hidden correlations - Some risks are not obvious until you analyze datasets that have never been connected before. AI can reveal leading indicators and correlations that a human team would not naturally seek out.
· Support probabilistic planning - Planning is shifting from deterministic forecasts to probability distributions and risk bands. AI can help quantify uncertainty and support risk-informed decision-making.
Taken together, these capabilities help planners understand not just what might happen, but how likely different outcomes are and where plans become fragile.
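For readers who want to see what a risk band looks like in practice, here is a deliberately toy Monte Carlo sketch in Python. Every number in it is invented for illustration; a real study would draw its distributions from load forecasts, interconnection queues, weather records, and economic data.

```python
import random

random.seed(42)

BASE_PEAK_MW = 10_000  # hypothetical starting system peak

def one_scenario():
    """Sample one combination of uncertain drivers and return a peak load (MW)."""
    organic_growth = random.gauss(0.02, 0.01)           # annual organic load growth
    data_center_mw = random.choice([0, 300, 800])        # lumpy step-change demand
    ev_mw = random.uniform(50, 400)                       # behavior-driven EV demand
    heat_wave_mult = random.choice([1.00, 1.05, 1.12])    # extreme-weather adder
    return (BASE_PEAK_MW * (1 + organic_growth) + data_center_mw + ev_mw) * heat_wave_mult

samples = sorted(one_scenario() for _ in range(10_000))
p50 = samples[len(samples) // 2]
p95 = samples[int(len(samples) * 0.95)]
print(f"Median projected peak: {p50:,.0f} MW")
print(f"95th percentile peak:  {p95:,.0f} MW (risk band, not a prediction)")
```

The output is not a forecast of the peak; it is a distribution that shows where a plan built around the median alone would be fragile.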
What AI cannot do is replace engineering judgment, ethical decision making, or leadership accountability. Scenario evaluation is not strategy. Optimization without context is not planning. AI can illuminate options and expose risk, but people remain responsible for deciding which risks to accept, which to mitigate, and which paths align with reliability obligations and public trust.
The Underestimated Opportunity: Risk and Compliance Visibility
One of the most underappreciated opportunities for AI is its ability to improve visibility into risk and compliance in ways that have historically been difficult, resource-intensive, or reactive. For decades, most organizations have managed compliance and operational risk through periodic reviews, self-certifications, internal audits, and event-driven assessments. Those tools are necessary, but they are inherently episodic. They provide snapshots in time, often after risk has already materialized or controls have already degraded.
AI, when governed correctly, offers the potential to shift organizations from episodic visibility to continuous situational awareness. From a former regulator’s perspective, this is where AI can deliver real value. When applied strategically, AI can analyze large volumes of operational, compliance, and organizational data simultaneously and identify patterns that are difficult for humans to detect at scale. Weak signals often exist long before a violation is cited or an event occurs. They are simply buried in disconnected data sets that are rarely examined holistically.
These signals may include recurring procedural deviations across regulatory shifts, gradual erosion of internal controls, increasing reliance on workarounds due to a lack of resources, delayed corrective actions or lesson application, or growing gaps between documented processes and actual execution. Individually, these conditions may appear minor. Collectively, they are risk clusters and often precursors to enforcement exposure or high-severity events. These signals point to a convergence of human drift and latent organizational risks that considerably reduce our margin for error.
AI can help organizations identify patterns of human and procedural drift, detect degrading internal controls before they fail, flag areas where compliance narratives no longer align with actual execution, and prioritize audit readiness based on real exposure rather than calendar-driven preparation. This matters because regulators rarely focus on isolated failures. They focus on patterns. During audits, investigations, and enforcement reviews, the central question is whether management understood the risk, maintained control, and acted in a timely manner. AI can help organizations see internally what regulators are trained to identify externally.
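As a thought experiment, here is a minimal sketch in Python of what watching for converging drift could look like. The monthly counts, the aging figures, and the crude drift test are all assumptions for illustration; a real program would feed this from corrective-action and compliance systems and use far better statistics.

```python
# Hypothetical monthly counts of procedural deviations and average days-open
# for corrective actions; real inputs would come from CAP/compliance systems.
deviations_per_month = [2, 1, 3, 2, 4, 5, 4, 7, 8, 9]
corrective_action_age_days = [12, 15, 14, 20, 28, 35, 41, 55, 60, 72]

def is_drifting(series, window=3):
    """Flag sustained upward drift: each successive window average exceeds the
    prior one. A crude stand-in for more sophisticated drift detection."""
    windows = [series[i:i + window] for i in range(0, len(series) - window + 1, window)]
    means = [sum(w) / len(w) for w in windows]
    return all(later > earlier for earlier, later in zip(means, means[1:]))

signals = {
    "procedural deviations": is_drifting(deviations_per_month),
    "corrective-action aging": is_drifting(corrective_action_age_days),
}
drifting = [name for name, flag in signals.items() if flag]
if len(drifting) >= 2:
    print("Risk cluster forming:", ", ".join(drifting), "- escalate to leadership review")
else:
    print("No converging drift detected this period")
```

The value is not the arithmetic. It is that leadership sees the cluster forming before an auditor or an event points it out.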
When governed well, AI strengthens management-in-control by providing leadership with earlier, more accurate insight into where drift and risk are accumulating. It can support risk-informed decisions about where to invest attention, training, and corrective action.
AI does not change who owns compliance and operational risk. Leadership remains responsible for interpretation, judgment, and action. Regulators will still ask the same questions. Did management know? Did they act? Was risk assessed? Were controls effective?
AI can help organizations answer those questions positively, but only if it is treated as part of the internal control environment, not a reporting convenience.
In my opinion, the most significant risks associated with AI are not technical. They are cultural and governance-related.
The Risk of False Precision and Deference
One of the most insidious cultural risks AI introduces is the facade of false precision and a natural deference to the technology. AI outputs often appear objective, precise, and authoritative even when they are built on incomplete, biased, or poorly understood data.
In high-reliability industries, deference is dangerous. Decisions that cannot be explained, reconstructed, and defended are liabilities. Black-box models create exposure, not protection. When leaders stop asking questions because “the model said so,” risk begins to accumulate quietly. Over time, people stop challenging the output. The organization becomes complacent, less curious, less innovative, and less resilient.
Leaders must actively reinforce that AI outputs are hypotheses, not answers. The obligation to think, question, and decide does not go away because the math looks convincing. Executives should ask a simple question about any AI tool influencing decisions: Can we explain this decision clearly, in plain language, after something goes wrong? If the answer is no, the organization is accepting hidden risk.
Explainability is not a technical preference. It is a governance requirement. When responsibility shifts from people to tools, ownership and accountability erode. After a disturbance, an outage, or a compliance failure, “the algorithm decided” is not an explanation that regulators, boards, or customers will accept. Nor should they.
Leadership owns that risk, whether they realize it or not.
Cyber and Supply Chain Risk
AI expands the attack surface in ways the energy industry cannot afford to underestimate. These systems depend on data integrity, model behavior, continuous updates, and often third-party tools that are not fully transparent. That creates new failure modes beyond traditional IT and OT cyber risks because an attacker does not always need to “take control” of a system to cause harm. They can poison the data, manipulate inputs, or subtly degrade model performance over time, leading the organization toward bad decisions while the tool still appears to be functioning normally.
The risk becomes more serious when AI outputs are trusted to drive operational priorities, anomaly detection, forecasting, or automated responses. A compromised AI system can influence decisions invisibly and at machine speed, creating false confidence, masking real issues, or accelerating harmful actions faster than humans can interpret what is happening. In reliability operations, speed is not always the goal. Correctness is. An incorrect action taken quickly can be far more damaging than a correct action taken slowly, especially when errors propagate consistently across fleets, regions, or interconnected systems.
The irony is that tools introduced to improve resilience can accelerate failure if they are compromised. That is why leaders must treat AI as part of the reliability control environment, not just a software feature. Before trusting AI in any reliability-relevant function, organizations need strong data integrity controls, model governance and change control, explainability and auditability, disciplined vendor accountability, and clear human-in-the-loop guardrails. AI can be a force multiplier, but only if leadership insists on governance and accountability that keep people in control.
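What might a human-in-the-loop guardrail actually look like in software? Here is one hedged sketch in Python. The checks, field names, and thresholds are hypothetical; the point is the pattern: verify inputs, keep a reconstructable record, and execute nothing without a named, accountable human.

```python
import hashlib
import json

def input_checksum(payload: dict) -> str:
    """Checksum of the input snapshot so the decision can be reconstructed later."""
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def within_expected_range(payload: dict) -> bool:
    """Crude data-integrity check: reject inputs outside physically plausible bounds."""
    return 59.0 <= payload["frequency_hz"] <= 61.0 and 0 <= payload["line_loading_pct"] <= 150

def apply_recommendation(payload: dict, recommendation: str, operator: str, approved: bool) -> dict:
    """Return an auditable record; nothing executes without operator approval."""
    record = {
        "input_checksum": input_checksum(payload),
        "recommendation": recommendation,
        "operator": operator,
    }
    if not within_expected_range(payload):
        record["outcome"] = "rejected: input failed integrity checks"
    elif not approved:
        record["outcome"] = "held: awaiting operator approval"
    else:
        record["outcome"] = "executed under operator authority"
    return record

# Hypothetical system snapshot and advisory recommendation, for illustration only.
snapshot = {"frequency_hz": 59.97, "line_loading_pct": 96}
print(apply_recommendation(snapshot, "reduce flow on Line 12 by 50 MW",
                           operator="operator_on_shift", approved=True))
```

None of this is sophisticated, and that is the point: the guardrails that matter most are governance decisions expressed in plain controls, not advanced algorithms.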
Culture Is the Control That Matters Most
The most important internal control in any reliability organization is culture. A strong reliability culture is characterized by a questioning attitude, a willingness to challenge outputs and assumptions, respect for uncertainty and low-probability, high-impact risk, and clear ownership when decisions are made. AI can either reinforce these behaviors or quietly undermine them. When employees are rewarded for speed over discipline, AI becomes a shortcut. When leaders value dashboards over dialogue, AI becomes a shield. When questioning AI outputs is discouraged, culture degrades. Technology does not create culture. Leadership behavior does, and AI will expose the culture an organization already has.
AI does not fix weak culture, poor discipline, bad processes, or unclear accountability. It exposes and amplifies them. Without alignment between culture and the way AI outputs are interpreted and acted upon, conflict is inevitable. In organizations with strong leadership and trust-based cultures, AI can improve insight, consistency, and decision quality. In organizations with weak controls or avoidance of hard conversations, AI accelerates human and procedural drift and reinforces denial. In that sense, AI is not just a tool. It is a mirror that reflects the organization’s true operating culture under pressure.
I have written before that culture is the hardest mountain to move in any organization. AI does not move that mountain. It climbs it faster, for better or worse. Where culture is strong, AI reinforces discipline. Where culture is weak, it accelerates drift.
This is why AI adoption is a leadership test, not an IT project.
AI will shape the future of the energy industry. How it shapes accountability and culture is a leadership choice. Reliability has always been a leadership responsibility. That has not changed.
Leaders should govern AI as a high-risk reliability capability by assigning clear ownership, validating models and data, and requiring explainable, traceable decisions with disciplined change control. AI should inform decisions, not own them, with defined human review and override guardrails.
Leaders must also protect reliability culture by reinforcing a questioning attitude and disciplined judgment. Accountability cannot be delegated to algorithms, and teams must be expected and empowered to challenge AI outputs rather than defer to them.
Used well, AI can strengthen resilience, improve risk visibility, and support better decisions.
Used poorly, it introduces a fast-moving, systemic risk that will not be forgiven by regulators, customers, or the grid. The industry does not need smarter tools nearly as much as it needs disciplined leaders who understand how and when to use them.
The leaders who understand this will use AI as a force multiplier.
Do you have questions regarding your organization, compliance, risk, strategy or operations? Get your questions answered.
Schedule a call