A temperature reading with a timestamp does not tell you whether the value is in Fahrenheit or Celsius, which factory generated it, which production line, or which asset. Without that context, the data cannot drive a decision. At Hannover Messe 2026, InfluxData and Litmus described how their integrated architecture solves this: Litmus Edge connects to shop floor data sources and normalizes incoming streams, while InfluxDB 3 Enterprise stores contextualized data as the system of record, handling the high-cardinality metadata that industrial AI demands.
Why Does Data Context Determine What Edge AI Can Do?
Context transforms raw sensor readings into information that manufacturing teams can act on; without it, edge AI models cannot produce reliable decisions at production speed. A temperature value paired only with a timestamp leaves critical questions unanswered. Add the factory, the production line, the specific asset, the work order from the ERP or MES system, the customer, and the product, and that same data point becomes the basis for a decision.
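One concrete way to picture this layered context is a time series data point whose tags carry the metadata and whose field carries the raw value. The sketch below renders such a point in InfluxDB line protocol; the measurement, tag names, and values are illustrative, not taken from the InfluxData-Litmus deployment.

```python
def to_line_protocol(measurement, tags, fields, ts_ns):
    """Render one reading as InfluxDB line protocol.

    Tags carry the context (factory, line, asset, ...); fields carry
    the raw sensor value. All names below are illustrative.
    """
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in fields.items())
    return f"{measurement},{tag_str} {field_str} {ts_ns}"

point = to_line_protocol(
    "furnace_temp",
    tags={
        "factory": "plant_07",
        "line": "line_3",
        "asset": "furnace_A",
        "work_order": "WO-4821",  # pulled from the ERP/MES (assumed)
        "unit": "C",              # Fahrenheit vs. Celsius made explicit
    },
    fields={"value": 812.4},
    ts_ns=1767225600000000000,
)
```

Each distinct combination of tag values is a new series, which is exactly why contextualized industrial data becomes a high-cardinality workload.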
At the edge, the stakes are higher. Latency constraints are tighter, and decisions affect physical processes in real time. If data reaches the model without context, there is no time to reconstruct that context downstream; the correction window on the shop floor is too short.
Which Manufacturing Use Cases Move to the Edge First?
Three factors determine which use cases migrate to the edge ahead of others: latency, security, and cost. Use cases where near real-time response matters are moving fastest, with teams reaching positive ROI within 60 to 90 days of deployment.
Scrap and quality monitoring is among the first. Production teams need near real-time decisions about whether a part is sellable or waste, and the round trip to the cloud introduces too much delay for that judgment. Recipe optimization in process manufacturing follows the same logic: when a batch of chemicals, beer, or any other substance with a defined recipe drifts off target, the window to correct is short. A late response means losing the entire batch.
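A minimal sketch of that drift-and-correct logic is a rolling mean compared against the recipe setpoint. The window size and tolerance below are placeholders; a real deployment would tune them per recipe.

```python
from collections import deque


class DriftMonitor:
    """Flag when a process variable drifts away from its recipe setpoint.

    Window size and tolerance are illustrative assumptions, not values
    from any real recipe.
    """

    def __init__(self, setpoint, tolerance, window=10):
        self.setpoint = setpoint
        self.tolerance = tolerance
        self.readings = deque(maxlen=window)  # keep only the recent window

    def update(self, value):
        """Ingest one reading; return True when correction is needed now."""
        self.readings.append(value)
        mean = sum(self.readings) / len(self.readings)
        return abs(mean - self.setpoint) > self.tolerance


# e.g. a fermentation temperature drifting above a 65.0 setpoint
mon = DriftMonitor(setpoint=65.0, tolerance=1.5, window=5)
for v in [65.1, 65.6, 66.5, 67.4, 68.2]:
    alarm = mon.update(v)
```

Running this check at the edge rather than in the cloud is the point of the use case: the alarm fires while the batch can still be corrected.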
What has shifted is where the analytical capability sits. Shop floor teams historically depended on a central AI group for any advanced analysis. With AI running at the edge, production teams gain direct access to insights they have not had before. The full scope of what becomes possible is still emerging; manufacturers themselves will define which use cases deliver the most value once the capability is available on the production floor.
| Capability | Detail |
| --- | --- |
| Edge platform | Litmus Edge (connectivity, normalization) |
| Time series database | InfluxDB 3 Enterprise (system of record) |
| Key edge use cases | Scrap/quality monitoring, recipe optimization |
| Edge factors | Latency, security, cost |
| Data pathways | Edge processing + cloud for long-term modeling |
| Context metadata | Factory, line, asset, work order, customer, product |
| High cardinality | Contextual metadata captured alongside sensor values |
| Historian positioning | Augmentation, not replacement |
| Deployment approach | 3 sites minimum, 1-2 use cases |
| Time to ROI | 60-90 days to positive ROI |
| Scale path | 3 sites to 30 or 300 without re-architecture |
| Greenfield sector | Renewable energy |
| Cloud partner | AWS |
How Does the InfluxData-Litmus Architecture Deliver Context?
Litmus Edge provides connectivity to data sources across the shop floor and normalizes the incoming data streams, bridging the gap between OT and IT systems. InfluxDB 3 Enterprise handles high-cardinality workloads, capturing contextual metadata alongside sensor values in real time. That combination means the data retains its meaning from source to storage.
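Normalization at the edge typically means mapping heterogeneous device payloads onto one schema before storage. The sketch below is a generic illustration of that idea, not Litmus Edge's actual API; the field names and unit conversion are assumptions.

```python
# Map vendor-specific PLC field names onto a common schema (illustrative).
RENAME = {
    "temp_f": "temperature",
    "Temp": "temperature",
    "spindle_temp": "temperature",
}


def normalize(payload):
    """Return the payload with harmonized names and units.

    Only Fahrenheit-to-Celsius conversion is shown; a real pipeline
    would carry a per-source unit map.
    """
    out = {}
    for key, value in payload.items():
        name = RENAME.get(key, key)
        if key == "temp_f":  # assumed Fahrenheit source; convert to Celsius
            value = (value - 32) * 5 / 9
        out[name] = round(value, 2) if isinstance(value, float) else value
    return out
```

Once every source speaks the same schema, the contextual tags added downstream mean the same thing regardless of which device produced the reading.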
The architecture splits workloads between edge and cloud based on what each environment does best. Real-time data at the edge feeds calculations, cleansing, and contextualization that serve the shop floor directly. A subset of that data flows to the cloud for longer-term predictive modeling, equipment optimization, and capital expenditure planning, without interrupting edge operations. The two layers operate in parallel, serving the shop floor and the enterprise simultaneously.
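The "subset flows to the cloud" step can be as simple as a rollup: the edge keeps full-rate data for real-time decisions and forwards aggregates for long-term modeling. The per-minute averaging below is one illustrative policy, not the vendors' documented pipeline.

```python
from collections import defaultdict


def minute_averages(points):
    """Aggregate full-rate edge readings into per-minute means for cloud upload.

    points: list of (epoch_seconds, value) tuples. The rollup interval
    and statistic are assumptions; real pipelines choose per use case.
    """
    buckets = defaultdict(list)
    for ts, val in points:
        buckets[ts // 60].append(val)  # bucket by whole minute
    return {minute * 60: sum(vals) / len(vals) for minute, vals in buckets.items()}


edge_points = [(0, 10.0), (30, 12.0), (60, 20.0)]
cloud_points = minute_averages(edge_points)
```

The edge stream and the cloud stream diverge here: the shop floor consumes `edge_points` at full rate, while only `cloud_points` crosses the network.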
The deployment model starts with a minimum of three sites and one or two use cases, scaling to 30 or 300 sites without significant architectural change. InfluxDB compresses data and stores it in object storage, reducing the cost of multi-site analytics compared to replicating historian data to the cloud.
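A back-of-envelope sizing shows why compression into object storage changes the multi-site economics. Every number below is an assumption for illustration; none comes from InfluxData's published benchmarks.

```python
# Illustrative storage sizing under assumed parameters.
sensors = 5_000          # tags across three sites (assumption)
hz = 1                   # one reading per sensor per second (assumption)
bytes_per_point = 20     # raw serialized size per point (assumption)
compression = 10         # assumed columnar compression ratio

raw_per_day = sensors * hz * 86_400 * bytes_per_point      # ~8.64 GB/day raw
stored_per_day = raw_per_day / compression                 # ~0.86 GB/day stored
```

Under these assumptions, a tenfold compression ratio turns roughly 8.6 GB of raw daily data into under 1 GB of object storage, which is what makes replicating analytics across 30 or 300 sites tractable.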
Where Do Historians Fit When Edge AI Arrives?
Manufacturers carry significant investment in existing data historians, and the InfluxData-Litmus architecture works alongside those systems rather than replacing them. InfluxDB handles the high-performance workloads where legacy historians fall short, particularly the higher fidelity, higher capture rates, and high-cardinality metadata that industrial AI requires.
The exception is greenfield. In renewable energy, where organizations build new infrastructure without legacy historian investments, the architecture serves as the primary data platform from day one. These deployments start fresh and capture contextualized data from the beginning.
One observation from the InfluxData and Litmus teams: what the next generation of use cases will look like in 18 to 24 months remains an open question. The architecture accommodates that uncertainty, giving manufacturers flexibility to adopt new capabilities without rebuilding their data layer each time.
InfluxData and Litmus exhibit together at Hannover Messe 2026.
This article is based on a video interview with Pat Walsh, CMO of InfluxData, and Will Knight, VP of Partnerships at Litmus, recorded with Lucian Fogoros of IIoT World at Hannover Messe 2026. AI tools were used to help summarize and organize the content. Reviewed and edited by the IIoT World editorial team.
Editorially independent. Sponsored by InfluxData.
Frequently Asked Questions
1. What is data context in industrial AI?
Data context is the metadata that gives a sensor reading meaning: which factory, production line, asset, work order, customer, and product it belongs to. Without context, a temperature value is a number. With it, that same value drives quality decisions and recipe corrections at the edge.
2. Which edge AI use cases deliver results fastest in manufacturing?
Use cases driven by latency, security, and cost move to the edge first. Scrap and quality monitoring requires near real-time decisions about product viability. Recipe optimization in process manufacturing demands immediate correction when a batch drifts off target. Both lose value when routed through the cloud.
3. Can InfluxDB work alongside existing data historians?
Yes. InfluxDB 3 Enterprise augments existing historians by handling the high-fidelity, high-cardinality workloads that legacy systems were not designed for. The exception is greenfield deployments, particularly in renewable energy, where organizations adopt the architecture as their primary data platform.
4. Why are shop floor teams gaining direct access to AI?
Historically, shop floor teams depended on a central AI group for advanced analysis. With AI capabilities running at the edge, production teams gain direct access to insights and can act on them in real time, without waiting for a centralized team to process the data.