At an IIoT World Energy Day 2026 panel on scalable edge solutions for modern grids, Andrew Foster of IOTech Systems, Cody Falcon of ABB, Brenna Wood of EDF Power Solutions North America, and Janko Isidorovic of Fluence Energy discussed what distributed energy resource fleets require before edge AI can produce useful results. The panel, moderated by Hamish Mackenzie and sponsored by IOTech Systems, identified three foundational data layers that must be in place, a fourth orchestration layer that determines whether any of it scales, and a cautionary example of what happens when organizations spend years collecting data before defining the problem they need to solve.
1. What Three Data Foundations Does a DER Fleet Need for Edge AI?
A DER fleet needs three foundational layers before edge AI produces useful results: normalized time series data with accurate timestamps, consistent semantic tagging across equipment manufacturers, and compute at the edge where latency-sensitive decisions are made.
The first requirement is high-quality, normalized time series data with accurate timestamps. Andrew Foster described real deployments where inaccurate timestamps and poorly synchronized clocks across distributed edge nodes disrupted control systems. Keeping clocks synchronized so that every data point carries accurate timing is a practical problem teams discover in the field, sometimes the hard way.
The second is consistent semantic tagging, meaning the same measurement reads the same way regardless of which manufacturer’s equipment produced it. Without this layer, data from different vendors arrives in different formats, and AI models cannot interpret it consistently.
The third is compute at the edge to run the models. Training can happen in the cloud, but inference needs to execute locally, where the latency-sensitive decisions are made. An AI-ready DER infrastructure requires all three layers working together before algorithms can deliver reliable output.
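The second layer, consistent semantic tagging, can be sketched as a mapping from vendor-specific tag names to one canonical semantic model. The vendor tag conventions and canonical names below are illustrative assumptions, not names from the panel:

```python
# Minimal sketch: normalizing vendor-specific tags into one semantic model
# so the same measurement reads the same way regardless of manufacturer.
# Tag names here are hypothetical examples.
from dataclasses import dataclass

TAG_MAP = {
    # vendor A convention              -> canonical name
    "BAT01.SOC_PCT": "battery.state_of_charge",
    "BAT01.TEMP_C": "battery.cell_temperature",
    # vendor B uses a different convention for the same measurements
    "bess/pack1/soc": "battery.state_of_charge",
    "bess/pack1/temp": "battery.cell_temperature",
}

@dataclass
class Reading:
    tag: str           # canonical semantic tag name
    value: float
    timestamp_ns: int  # from a synchronized clock (e.g. NTP/PTP)

def normalize(vendor_tag: str, value: float, timestamp_ns: int) -> Reading:
    """Map a vendor-specific tag to its canonical semantic name."""
    canonical = TAG_MAP.get(vendor_tag)
    if canonical is None:
        raise KeyError(f"unmapped vendor tag: {vendor_tag}")
    return Reading(canonical, value, timestamp_ns)

# Two different vendors' tags resolve to the same semantic measurement:
a = normalize("BAT01.SOC_PCT", 87.5, 1_700_000_000_000_000_000)
b = normalize("bess/pack1/soc", 87.5, 1_700_000_000_000_000_000)
assert a.tag == b.tag == "battery.state_of_charge"
```

Once every reading arrives under a canonical name with a trusted timestamp, downstream models can consume data from any vendor's equipment without per-site interpretation logic.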
2. Why Should Operators Start with Specific Problems Before Building Data Infrastructure?
Operators should start with specific problems because organizations that collect data first risk spending years and significant capital on infrastructure that fails to address their most pressing operational challenges.
Cody Falcon of ABB described a customer that spent five years building a hundred-million-dollar enterprise data lake, collecting every available data point across the organization. When ABB sat down with the operations team and asked what their primary problem was, it turned out to be heat exchanger fouling at offshore facilities. The heat exchangers had 28 available tags. To predict fouling, ABB needed to look at five.
“You probably don’t need a hundred million dollar, 400,000-tag real-time system in place to be able to fix your number one problem,” Cody Falcon said.
The recommended approach: identify the top three or top 10 operational problems, determine exactly what data each problem requires, and build from there. Over years spent solving real problems and delivering measurable value, the data infrastructure fills out naturally. The starting point determines whether value arrives in months or in years.
Brenna Wood of EDF Power Solutions North America described a parallel approach. EDF started with a business capability mapping exercise to understand current operational needs and how the organization performed against those needs. The questions were practical: what information does each sensor provide, what will the team learn from it, and what action follows?
3. How Does Fleet-Scale Orchestration Determine Whether Edge AI Scales?
Fleet-scale orchestration determines whether edge AI scales because without the ability to deploy models, push updates, and roll back changes across an entire fleet from a central point, distributed edge systems become unmanageable site by site.
Foster identified orchestration as a fourth layer sitting on top of the three data foundations. “You need orchestration on top of these systems to allow you to push updates into the system at fleet scale,” Foster said. “Otherwise the systems become unmanageable.”
That means deploying AI models, pushing updates, monitoring performance, and rolling back when something goes wrong, all from a central point across the entire fleet. Adding a new data tag, updating a model, or configuring a new asset type cannot be done one site at a time at the scale of a modern DER fleet.
Brenna Wood described the same dynamic from the operator’s perspective. As EDF unified its data pipeline from edge through cloud to analytic environments, the data transformation chain grew more complex. Questions of data governance, ownership along the chain, and how to support new requests from the business became part of daily operations.
4. Why Does Data Reduction at the Edge Matter for Grid-Scale Operations?
Data reduction at the edge matters because at modern grid scale, operators cannot move every data point across the network, and in most cases only the data points that are actually changing need to travel.
IOTech builds data reduction strategies directly into its edge platform, including filtering, compression, and on-change reporting, where only values that actually change are transmitted. At the scale of modern grid operations, where individual battery sites can have thousands of containers generating cell-level readings every few seconds, these reduction strategies are what keep the system functional.
Data reduction deserves the same engineering attention as the analytics running on top. The volume of raw data at grid scale means that without filtering and compression at the edge, the network itself becomes the bottleneck, regardless of how capable the AI models are.
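The on-change idea can be sketched as report-by-exception with a deadband: a reading travels upstream only when it differs from the last transmitted value by more than a threshold. The threshold and sample values below are illustrative assumptions:

```python
# Minimal sketch of edge data reduction via report-by-exception: forward
# a reading only when it moves more than `deadband` away from the last
# value that was transmitted. The first reading always passes.
def report_by_exception(readings, deadband):
    last_sent = None
    for value in readings:
        if last_sent is None or abs(value - last_sent) > deadband:
            last_sent = value
            yield value

# Cell temperature sampled every few seconds; most samples barely change.
samples = [25.0, 25.0, 25.01, 25.5, 25.5, 26.2, 26.2, 26.2]
sent = list(report_by_exception(samples, deadband=0.1))
assert sent == [25.0, 25.5, 26.2]  # 8 samples reduced to 3 transmissions
```

Even this trivial filter cuts the sample stream by more than half; across thousands of containers emitting cell-level readings every few seconds, that difference determines whether the network can carry the fleet at all.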
Data-First vs. Problem-First Approach to DER Infrastructure
| Dimension | Data-First Approach | Problem-First Approach |
| --- | --- | --- |
| Starting point | Collect every available data point | Identify top 3-10 operational problems |
| Infrastructure cost | $100M enterprise data lake, 400,000 tags | Targeted data for specific problems |
| Time to value | 5+ years before addressing primary problem | Measurable value from solving first problem |
| Data scope | All available tags across the organization | Only the tags needed (e.g., 5 of 28 for heat exchanger fouling) |
| Infrastructure growth | Massive upfront investment | Fills out naturally as real problems are solved |
This article is based on the IIoT World Energy Day 2026 panel “Turning Industrial Data into Energy Insight: Scalable Edge Solutions for Modern Grids,” featuring Andrew Foster (IOTech Systems), Cody Falcon (ABB), Brenna Wood (EDF Power Solutions North America), and Janko Isidorovic (Fluence Energy), with moderation by Hamish Mackenzie (IIoT World). AI tools were used to help summarize and organize the content. Reviewed and edited by the IIoT World editorial team.
Sponsored by IOTech Systems.
Watch the full panel discussion
Frequently Asked Questions
1. What does AI-ready mean for a distributed energy fleet?
AI-ready for a distributed energy fleet means having three foundational layers in place: normalized time series data with accurate timestamps, consistent semantic tagging so the same measurement reads the same way regardless of equipment manufacturer, and compute at the edge where latency-sensitive inference decisions are made. Without these foundations, data feeding AI models is too inconsistent to produce useful results.
2. Why is normalized time series data the first requirement for edge AI?
Normalized time series data with accurate timestamps is the first requirement because timestamp accuracy problems and the challenge of coordinating timestamps across distributed edge nodes cause issues in control systems. Getting clocks synchronized across all nodes so that data carries accurate timing is a practical challenge that teams discover during real deployments.
3. What went wrong with the hundred-million-dollar data lake approach?
A customer spent five years and approximately one hundred million dollars building an enterprise data lake with 400,000 tags, collecting every available data point. When asked about their primary operational problem, it was heat exchanger fouling at offshore facilities, which required only 5 of 28 available tags to predict. The data infrastructure far exceeded what was needed to solve the most pressing challenge.
4. What is fleet-scale orchestration in distributed energy?
Fleet-scale orchestration is the ability to deploy AI models, push updates, monitor performance, and roll back changes across an entire distributed energy fleet from a central point. Without it, operations like adding data tags, updating models, or configuring new asset types must be done site by site, which becomes unmanageable at modern DER fleet scale.
5. How does edge data reduction keep grid-scale systems functional?
Edge data reduction uses filtering, compression, and on-change reporting to reduce the volume of data that moves across the network. At grid scale, where individual battery sites can have thousands of containers generating cell-level readings every few seconds, these strategies keep systems functional by transmitting only the data points that are actually changing.
6. How is AI being used in energy operations?
AI in energy operations requires three foundational layers before it produces useful results: normalized time series data with accurate timestamps, consistent semantic tagging across equipment manufacturers, and compute at the edge where latency-sensitive decisions are made. Fleet-scale orchestration then allows operators to deploy models, push updates, and roll back changes across an entire distributed energy fleet from a central point, while data reduction at the edge keeps the system functional at grid scale.