Factories generate massive volumes of time series data from every sensor, controller, and production line, but many manufacturers reduce the fidelity of that data or discard it entirely because their existing infrastructure makes retention too expensive or too difficult. “It always pains me to see customers reducing fidelity or throwing data away for resource or commercial reasons,” said Ben Corbett, Senior Sales Engineer at InfluxData, during an interview at Hannover Messe 2026. The data that gets discarded is exactly what anomaly detection, machine learning, and AI-driven workflows require: high-resolution, contextualized records of how equipment performs over time.
InfluxData and Litmus, an industrial edge data platform, announced a formal partnership at the same event, offering manufacturers a reference architecture for replacing the legacy historians that create this problem in the first place.
Why Do Manufacturers Reduce Data Fidelity?
Legacy data historians create a trade-off between cost and completeness. Manufacturing teams describe the same bind every week: expensive renewal cycles with their existing historian vendor, vendor lock-in that limits what they can do with the data, proprietary systems their engineers struggle to work with, and siloed architectures that isolate plant data from the rest of the organization. OSIsoft PI implementations are among the most frequently cited examples.
When storage is expensive and query performance degrades under high data volumes, engineering teams reduce how much data they keep and how long they keep it. That means fewer data points per sensor reading, shorter retention windows, and less context attached to each measurement. The result is a data foundation too thin to support the AI and machine learning use cases that those same teams are under pressure to deliver.
A broader shift is underway in how manufacturers approach this problem. Engineering teams find themselves caught between a pull toward modern use cases like anomaly detection and Industry 4.0 applications, and a push away from traditional technologies that represent either a bottleneck or a complete blocker to those ambitions. Modernization projects that sat on roadmaps for years are now receiving dedicated budgets, and companies are actively replacing their historian infrastructure rather than deferring it.
What Does InfluxDB 3 Change for Manufacturing?
InfluxDB 3 introduces a columnar storage engine that addresses the specific limitations driving data loss in manufacturing environments. The architecture brings four practical changes.
The columnar structure handles higher data volumes with smaller compute footprints. Scale has been one of the persistent limitations with traditional technologies in the manufacturing space, and the new architecture reduces that constraint. Manufacturers can retain the full sensor record without needing proportionally more hardware to query it.
InfluxDB 3 supports unlimited cardinality. The shift from a series-based database to a columnar structure means that the number of tags and the amount of context attached to time series telemetry have no bearing on database performance. Engineers can attach richer metadata to their sensor data without worrying about query degradation, giving them more signals to work with during analysis. That matters directly for AI workloads that depend on contextualized data to produce accurate results.
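To make the tagging idea concrete, the sketch below builds a point in InfluxDB line protocol with several contextual tags. The measurement, tag, and field names here (plant, line, machine_id, shift, operator) are hypothetical examples, not a schema InfluxData prescribes, and the serializer is a minimal sketch that omits the escaping rules real line protocol requires.

```python
# Sketch: serializing one sensor reading as InfluxDB line protocol with
# rich contextual tags. Under InfluxDB 3's unlimited-cardinality model,
# adding more tags like these does not degrade query performance.
# NOTE: tag/field names are hypothetical; special-character escaping is omitted.

def to_line_protocol(measurement, tags, fields, timestamp_ns):
    """Serialize one point as InfluxDB line protocol (simplified)."""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in fields.items())
    return f"{measurement},{tag_str} {field_str} {timestamp_ns}"

point = to_line_protocol(
    "spindle_temperature",
    tags={
        "plant": "hamburg",
        "line": "3",
        "machine_id": "cnc-07",  # high-cardinality: one value per machine
        "shift": "night",
        "operator": "op-112",
    },
    fields={"celsius": 71.4},
    timestamp_ns=1_700_000_000_000_000_000,
)
print(point)
```

In a series-based engine, each distinct combination of tag values creates a new series, so per-machine or per-operator tags like these quickly explode cardinality; the columnar model removes that penalty.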
Standard SQL joins InfluxQL as a supported query language. SQL compatibility opens integration with the broader ecosystem of analytics tools that plant engineers already use, rather than requiring specialized knowledge of a proprietary query language.
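As a hedged illustration of what "standard SQL" buys a plant engineer, the query below computes a per-machine average temperature. The table and column names (sensor_readings, machine_id, celsius) are hypothetical, and the query is executed here against an in-memory SQLite database purely to show that it is plain, portable SQL with no vendor extensions; against InfluxDB 3 the same kind of statement would run through its SQL interface.

```python
# Sketch: a standard SQL aggregation of the kind an engineer might run
# against time series data. Run against in-memory SQLite only to show the
# SQL is portable; table/column names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sensor_readings (machine_id TEXT, celsius REAL)")
conn.executemany(
    "INSERT INTO sensor_readings VALUES (?, ?)",
    [("cnc-07", 70.0), ("cnc-07", 73.0), ("cnc-08", 65.0)],
)

# Per-machine average temperature -- no proprietary query language needed.
rows = conn.execute(
    """
    SELECT machine_id, AVG(celsius) AS avg_c
    FROM sensor_readings
    GROUP BY machine_id
    ORDER BY machine_id
    """
).fetchall()
print(rows)  # [('cnc-07', 71.5), ('cnc-08', 65.0)]
```

Because the query is ordinary SQL, the same skills and BI tooling engineers already use transfer directly, rather than requiring training in a historian vendor's proprietary language.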
Cloud deployments are backed by object storage, which provides a cost-efficient, high-capacity persistence layer that allows manufacturers to store data for longer periods at higher scale. This directly addresses the retention problem that leads to data loss. The platform also offers what InfluxData calls “zero time to insight”: the moment a data point is written and receives an accepted response, it is immediately queryable, with no processing delay between ingestion and availability. Warehousing solutions, by contrast, can introduce lag between write and read.
How Does the Litmus Partnership Complete the Architecture?
A database solves the storage and query problem, but manufacturers still face a separate challenge: collecting and standardizing data from scattered industrial sources before it reaches the database. InfluxData has Telegraf, its open-source data collection agent, but Telegraf lacks specialization for industrial environments. The question of how to handle standardization, tag mapping, and contextualization across diverse equipment and control systems came up repeatedly in customer conversations and RFPs.
The Litmus partnership developed from organic market signals. Litmus started appearing in customer architecture diagrams, implementation partners were recommending the platform independently, and several existing InfluxData customers were already running both products. Litmus also uses InfluxDB as part of its own solution stack, which made the integration a natural fit.
The formal partnership positions the combination as a reference architecture: Litmus handles the collection, standardization, and contextualization of industrial data from disparate sources, while InfluxDB 3 provides the storage, query, and analytics layer. For manufacturers who need both halves of the data pipeline, it is a modular alternative to monolithic historian platforms.
| Capability | Legacy Historian | InfluxDB 3 + Litmus |
| --- | --- | --- |
| Storage architecture | Proprietary, series-based | Columnar, backed by object storage |
| Cardinality handling | Performance degrades with high tag counts | Unlimited cardinality, no performance impact |
| Query language | Vendor-specific, proprietary | Standard SQL alongside InfluxQL |
| Data availability after write | Varies by implementation | Immediate queryability (“zero time to insight”) |
| Industrial data collection | Built-in, end-to-end | Litmus collects and standardizes from diverse sources |
| Integration methods | Proprietary APIs | Open SDKs in every major programming language |
| Data contextualization | Limited metadata before performance impact | Rich tagging without performance penalty |
Can This Architecture Replace a Data Historian?
InfluxDB 3 combined with Litmus represents a “build” alternative to the traditional “buy” approach of deploying an end-to-end historian platform. The two products form a reference architecture for historian replacement or modernization, aimed at organizations willing to assemble a modular stack rather than purchase a monolithic solution.
The integration model relies on open standards. InfluxDB 3 provides software development kits in every major programming language, and all integration methods follow industry best practices rather than proprietary protocols. Development teams can build application layers on top of the database to generate work orders, send alerts, or integrate with ERP and MES systems without vendor-specific connectors.
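A minimal sketch of such an application layer, assuming readings have already been fetched from the database: the function below turns out-of-range measurements into work-order payloads. The threshold, field names, and alert format are all illustrative assumptions, not a prescribed InfluxData API; a real deployment would first query InfluxDB 3 through one of its SDKs and route the payloads to an ERP or MES connector.

```python
# Sketch of an application layer built on top of the database: evaluate
# queried readings against a threshold and emit alert payloads that could
# feed a work-order or MES integration. All names and the 80 C threshold
# are hypothetical; readings would come from an InfluxDB 3 SQL query.

def build_alerts(readings, limit_c):
    """Return one alert dict per machine whose reading exceeds limit_c."""
    return [
        {"machine_id": m, "celsius": c, "action": "create_work_order"}
        for m, c in readings
        if c > limit_c
    ]

recent = [("cnc-07", 88.2), ("cnc-08", 65.0)]  # e.g. results of a SQL query
alerts = build_alerts(recent, limit_c=80.0)
print(alerts)  # [{'machine_id': 'cnc-07', 'celsius': 88.2, 'action': 'create_work_order'}]
```

Because the database side is open SQL and the client side is an ordinary SDK call, logic like this lives in the manufacturer's own codebase rather than inside a vendor's closed platform.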
The practical outcome is a data foundation that retains full fidelity, supports standard query tools, and gives engineers and data scientists the complete time series record they need for AI and machine learning workflows. Instead of discarding data to fit the constraints of a legacy system, manufacturers can keep everything and query it with the tools their teams already know.
Sponsored by InfluxData.
This article is based on a video interview with Ben Corbett, Senior Sales Engineer at InfluxData, and Lucian Fogoros of IIoT World, recorded at Hannover Messe 2026. AI tools were used to help summarize and organize the content. Reviewed and edited by the IIoT World editorial team.
Frequently Asked Questions
1. What is the difference between a data historian and a time series database?
A data historian is an end-to-end platform that bundles data collection, storage, and retrieval into one proprietary system, typically with vendor-specific query languages and storage formats. A time series database like InfluxDB focuses on the storage and query layer, using open standards like SQL and columnar storage for high-volume sensor data. The architectural difference matters because historians lock connectivity, storage, and tooling into one vendor, while a time series database is one component of a modular stack that can pair with specialized collection platforms like Litmus and standard analytics tools.
2. How does retaining full data fidelity improve AI outcomes in manufacturing?
AI and machine learning models depend on complete, high-resolution time series records to distinguish between normal equipment variation and genuine anomalies. When manufacturers reduce data fidelity or shorten retention windows to control historian costs, they remove the fine-grained patterns that predictive models need for accurate results. Architectures that support unlimited cardinality and cost-efficient long-term storage allow engineering teams to keep the full sensor record, giving data scientists more context and better training data for AI workloads.
3. What should manufacturers consider when evaluating historian replacement?
The key factors include whether the replacement uses open, non-proprietary integration methods; whether engineering teams can query data with standard tools like SQL; whether the architecture supports unlimited metadata tagging without performance degradation; and whether storage costs allow full data retention over the time horizons required for analysis. Industrial data collection and contextualization, handled by platforms like Litmus, is a separate challenge from storage and should be evaluated as its own component of the architecture.