Scaling Edge AI in Energy: From First Deployment to Production

During the “Edge AI: Driving Smarter Machine Health Monitoring for Energy Infrastructure” panel at IIoT World Energy Day 2026, poll results showed that most energy operators are just beginning to explore AI for asset monitoring. Some have completed a first deployment, but only a small percentage have scaled these solutions across multiple assets or sites. Three panelists from HiveMQ, TDK SensEI, and PrivacyChain examined the data foundations, scaling barriers, and ROI frameworks that determine whether edge AI stays a single use case or becomes an enterprise capability.

Why Is Scaling Edge AI Across Energy Sites So Difficult?

Site heterogeneity is the primary barrier to scaling edge AI in energy, because substations, wind farms, and refineries run different equipment, protocols, and data formats that prevent one AI model from working across an entire organization without a consistent data structure.

Magnus McCune, CTO at HiveMQ, pointed out that a substation in Texas and a wind farm in the North Sea use entirely different equipment, protocols, and data shapes. This makes it difficult to apply a single AI model across an entire organization without a consistent data structure. To move beyond the first deployment, companies must transition from solving isolated problems to building a foundation that accounts for these regional and technical differences.

What Are the Four Stages of AI Autonomy in Energy Operations?

Edge AI autonomy in energy operations moves through four stages, from description (reporting what is happening) through diagnostic and prescription to full autonomous action, and understanding where an organization sits on this spectrum determines how fast it can scale.

Magnus McCune described this spectrum as a framework operators must navigate to build trust:

  • Description: Reporting what is currently happening.
  • Diagnostic: Identifying why an event is occurring.
  • Prescription: Recommending a specific course of action.
  • Action: Allowing the system to execute the decision autonomously.

Many operators are comfortable with AI providing diagnostics or prescriptions but remain hesitant to allow full autonomous action on critical infrastructure. Identifying use cases where human judgment is currently used for repetitive tasks can help organizations find the right starting point for automation.
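The four stages above can be modeled as an ordered scale with a human-approval gate. This is an illustrative sketch, not anything the panelists described; the stage names come from the panel, but the `requires_human_approval` policy function and the notion of an "approved ceiling" are hypothetical.

```python
from enum import IntEnum

class AutonomyStage(IntEnum):
    """The four stages of edge AI autonomy discussed by the panel."""
    DESCRIPTION = 1   # report what is currently happening
    DIAGNOSTIC = 2    # identify why an event is occurring
    PRESCRIPTION = 3  # recommend a specific course of action
    ACTION = 4        # execute the decision autonomously

def requires_human_approval(stage: AutonomyStage,
                            approved_ceiling: AutonomyStage) -> bool:
    """Route any output above the operator's approved autonomy
    level to a human instead of executing it."""
    return stage > approved_ceiling

# An operator comfortable with prescriptions but not autonomous action:
ceiling = AutonomyStage.PRESCRIPTION
print(requires_human_approval(AutonomyStage.DIAGNOSTIC, ceiling))  # False
print(requires_human_approval(AutonomyStage.ACTION, ceiling))      # True
```

Raising the ceiling one stage at a time, as trust accumulates, mirrors the progression the panel described.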

How Does Distributed Data Management Support Edge AI Scaling?

Distributed data management supports edge AI scaling by capturing data at the point of generation and routing it to the right system, whether cloud or edge, based on rules tied to each use case.

Andrew Hopkins, President at PrivacyChain, described PrivacyChain as a distributed data management system designed to function behind the sensor. By capturing data at the point of generation, the system applies use-case-specific rules to determine whether information goes to the cloud or stays at the edge.

This approach addresses the problem of interoperability. Because different systems often use proprietary formats, a distributed layer can store data in a completely interoperable state and make it available to multiple backend applications. This eliminates the need for one-to-one connectors between every piece of equipment and every piece of software.
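The rule-based routing described above can be sketched as a first-match policy over incoming data points. Everything here is hypothetical, including the `DataPoint` shape, the metric names, and the edge/cloud destinations; PrivacyChain's actual rule engine was not detailed in the panel.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class DataPoint:
    asset: str
    metric: str
    value: float

# Each rule pairs a predicate with a destination ("edge" or "cloud").
Rule = tuple[Callable[[DataPoint], bool], str]

def route(point: DataPoint, rules: list[Rule], default: str = "edge") -> str:
    """Apply use-case rules in order; the first match decides where
    the data point goes. Unmatched data stays at the edge by default."""
    for predicate, destination in rules:
        if predicate(point):
            return destination
    return default

# Hypothetical rules: raw vibration stays local, health summaries go up.
rules: list[Rule] = [
    (lambda p: p.metric == "vibration_raw", "edge"),
    (lambda p: p.metric == "health_summary", "cloud"),
]

print(route(DataPoint("pump-07", "vibration_raw", 4.2), rules))    # edge
print(route(DataPoint("pump-07", "health_summary", 0.93), rules))  # cloud
```

Because routing decisions are data-driven rather than hard-wired into connectors, the same layer can serve multiple backend applications without one-to-one integrations.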

Why Does Asset Health Determine Edge AI ROI?

Asset health determines edge AI ROI because unplanned downtime on critical infrastructure stops production and reduces revenue, making uptime a metric that is visible at the CEO level.

Sundeep Ahluwalia, Chief Product Officer at TDK SensEI, emphasized that asset health correlates directly with company revenue. When critical infrastructure fails, production stops, making uptime a metric that is visible at the CEO level. By framing edge AI as a tool for revenue protection, technical teams can align their projects with corporate goals. The panel agreed that companies should use the return on investment from a single, successful use case to fund the broader data foundations required for long-term scaling.

Key Factors for Scaling Edge AI in Energy

| Factor | Panelist | Key Insight |
| --- | --- | --- |
| Site Heterogeneity | Magnus McCune, HiveMQ | Different equipment, protocols, and data shapes across sites require a consistent data structure before any model can scale |
| Data Interoperability | Andrew Hopkins, PrivacyChain | A distributed data layer stores data in an interoperable state, eliminating one-to-one connectors |
| Operational Trust | Magnus McCune, HiveMQ | Operators will not act on AI recommendations they cannot trace back to the underlying data |
| Revenue-Linked ROI | Sundeep Ahluwalia, TDK SensEI | Linking asset health to revenue makes the business case visible at the executive level |
| Change Management | Panel consensus | Involve impacted operators in the process, find internal champions, and secure executive buy-in |
| Security and Data Integrity | Andrew Hopkins, PrivacyChain | Data poisoning is an emerging risk; security must include validating the integrity of incoming data |

Sponsored by TDK SensEI

This article is based on the “Edge AI: Driving Smarter Machine Health Monitoring for Energy Infrastructure” panel discussion at IIoT World Energy Day 2026. Sponsored by TDK SensEI. AI tools were used to help summarize and organize the content. Reviewed and edited by the IIoT World editorial team.

Panelists:

  • Sundeep Ahluwalia, Chief Product Officer, TDK SensEI
  • Andrew Hopkins, President, PrivacyChain
  • Magnus McCune, Chief Technology Officer, HiveMQ

Moderated by Luciano Narcisi, Director of Research, ARC Advisory Group.



Frequently Asked Questions

What is edge AI for predictive maintenance in energy infrastructure?

Edge AI for predictive maintenance runs machine learning models on sensors or edge devices at the point of data generation, analyzing vibration, temperature, and pressure readings to detect equipment failures before they occur. TDK SensEI runs its AI model on battery-powered sensors at the asset, transmitting only the inference result to reduce cloud dependency and energy consumption. The goal is to give maintenance managers notice ranging from days to a month before a failure occurs.
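The "transmit only the inference result" pattern can be illustrated with a minimal on-device check. This is a simplified sketch, not TDK SensEI's actual model: the RMS threshold test, the `baseline_rms` parameter, and the payload fields are all assumptions standing in for a real learned model.

```python
import json
import statistics

def edge_inference(vibration_window: list[float], baseline_rms: float,
                   alert_ratio: float = 1.5) -> dict:
    """Run a simple anomaly check on-device and return only a
    compact inference result, never the raw waveform."""
    rms = statistics.fmean(x * x for x in vibration_window) ** 0.5
    return {
        "rms": round(rms, 3),
        "anomaly": rms > alert_ratio * baseline_rms,
    }

# A multi-hundred-sample window stays on the sensor; only a few
# dozen bytes of JSON leave it, saving bandwidth and battery.
window = [0.8, -0.9, 1.1, -1.0, 0.7, -0.8] * 100
result = edge_inference(window, baseline_rms=0.5)
payload = json.dumps(result)
print(payload)
```

The savings compound at fleet scale: transmitting a verdict instead of a waveform is what makes battery-powered, low-bandwidth deployments viable.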

How do energy companies build the data foundation for edge AI?

Energy companies need three data foundations before deploying edge AI: connectivity to get data out of legacy control systems and onto the network, contextualization so every data point carries metadata about the asset, site, and normal operating range, and reliability to guarantee data delivery whether the network is online or offline. Magnus McCune of HiveMQ noted that only 30 to 50 percent of data on legacy energy sites is accessible beyond local control systems.
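The contextualization foundation can be sketched as a wrapper that attaches metadata to every raw reading before it leaves the site. The field names (`site`, `asset`, `normal_range`, and so on) are illustrative assumptions, not a schema any panelist specified.

```python
import json
import datetime

def contextualize(raw_value: float, *, asset: str, site: str,
                  metric: str, unit: str,
                  normal_range: tuple[float, float]) -> dict:
    """Wrap a raw reading with the metadata a downstream model needs:
    which asset and site it came from, and its normal operating range."""
    low, high = normal_range
    return {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "site": site,
        "asset": asset,
        "metric": metric,
        "unit": unit,
        "value": raw_value,
        "in_normal_range": low <= raw_value <= high,
    }

# A bearing temperature above its hypothetical normal range:
point = contextualize(78.4, asset="turbine-12", site="north-sea-a",
                      metric="bearing_temp", unit="degC",
                      normal_range=(20.0, 75.0))
print(json.dumps(point, indent=2))
```

With context attached at the source, a model trained at one site can at least interpret data from another, which is the precondition for scaling past the first deployment.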

What are the main challenges when scaling industrial AI across multiple sites?

The main challenges are site heterogeneity (different equipment, protocols, and data shapes across locations), lack of operational trust (operators will not act on AI recommendations they cannot trace back to underlying data), and the ROI problem (companies that attempt to address every problem at once lose focus). The panel recommended using the return on investment from a single successful use case to fund broader data foundations for long-term scaling.

How does MQTT support edge AI deployments in energy?

MQTT supports edge AI deployments by providing a lightweight streaming protocol that reduces the security surface to a single firewall port with end-to-end encryption and certificate-based authentication. HiveMQ uses MQTT to create a real-time data layer that sits across existing historians, SCADA systems, and maintenance systems without requiring replacements, making operational data available the moment it is generated with context attached.
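A hierarchical topic plus a JSON payload is the typical shape of such a real-time data layer. This is a minimal sketch: the `energy/{site}/{asset}/{metric}` topic convention and the payload fields are assumptions, and an actual deployment would hand the result to an MQTT client such as paho-mqtt over a single TLS port (conventionally 8883).

```python
import json

def build_publish(site: str, asset: str, metric: str,
                  value: float) -> tuple[str, bytes]:
    """Build a hierarchical MQTT topic and a JSON payload carrying
    the reading with its context attached."""
    topic = f"energy/{site}/{asset}/{metric}"
    payload = json.dumps(
        {"value": value, "site": site, "asset": asset}
    ).encode()
    return topic, payload

topic, payload = build_publish("texas-sub-04", "transformer-2",
                               "oil_temp", 61.5)
print(topic)  # energy/texas-sub-04/transformer-2/oil_temp
```

Because subscribers filter by topic pattern, a historian, a SCADA gateway, and a maintenance system can each consume the same published data without point-to-point integrations.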