For two decades, manufacturers have tried to make analytics, machine learning, and AI work with industrial data, and for most of that time the bottleneck had nothing to do with the technology itself. It was the data layer. Now that AI adoption is accelerating, that gap is becoming harder to ignore. According to HighByte Co-Founder and Chief Communications Officer Torey Penrod-Cambra, manufacturers recognize how important data contextualization is “to being able to work with industrial data for AI, analytics, and machine learning, for all those use cases they have been trying to accomplish for the last 20 years.”
That recognition is driving a shift across the industry. Data contextualization, the process of adding structure and meaning to raw sensor data so downstream applications can interpret it, has moved from an optional step to a prerequisite for any AI or ML deployment. HighByte heads to its fourth Hannover Messe with more than 330 pre-booked meetings, three customer case studies from Alcon, Bayer, and National Grid, five technology partners including Siemens and Snowflake, and a forthcoming release of HighByte Intelligence Hub v4.4.
What Does the Siemens-Snowflake Demo Solve for Manufacturers?
At Hannover Messe, HighByte runs a live demonstration with Siemens, Snowflake, RapidMiner, and Mendix built around a CNC machine and focused on predictive asset maintenance. In this architecture, HighByte Intelligence Hub handles data contextualization through a Unified Namespace, while Snowflake stores the combined OT, IT, and business data, RapidMiner builds the ML models, and Mendix provides the custom application layer. The full pipeline requires multiple vendors working together, and that is precisely the point HighByte wants to make with this demo.
Interoperability between these systems is where HighByte's value lies: connecting source and target systems so data moves between them with its context intact.
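To make the contextualization step concrete, here is a minimal sketch, in Python with the open-source paho-mqtt client, of the kind of contextualized payload a Unified Namespace might carry for the CNC demo. The topic structure, broker address, tag names, and values are hypothetical illustrations, not HighByte Intelligence Hub's actual configuration or data model.

```python
# Illustrative sketch only: a hypothetical contextualized payload for the CNC
# predictive-maintenance demo, published to a Unified Namespace topic over MQTT.
# Topic, broker, tag names, and values are assumptions, not HighByte's actual model.
import json
import time

import paho.mqtt.publish as publish  # generic open-source MQTT client

payload = {
    # Identity and context that turn raw controller values into usable records
    "asset": {"site": "hannover-demo", "line": "cell-01", "machine": "cnc-01"},
    "measurements": {
        "spindle_load": {"value": 72.4, "unit": "percent"},
        "tool_wear": {"value": 0.31, "unit": "mm"},
        "vibration_rms": {"value": 4.8, "unit": "mm/s"},
    },
    "context": {"work_order": "WO-10042", "part_number": "P-7731", "shift": "B"},
    "timestamp": time.time(),
}

# Publish to a hierarchical UNS topic so downstream consumers can subscribe
# to exactly the slice of the namespace they need.
publish.single(
    "enterprise/hannover-demo/cell-01/cnc-01/condition",
    json.dumps(payload),
    hostname="broker.example.local",  # hypothetical UNS broker
    qos=1,
)
```

From a namespace like this, a warehouse connector can land the records in Snowflake for RapidMiner to train on, while Mendix applications subscribe to the same topics for live views, which is the division of labor the demo is built to show.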
Predictive asset maintenance was the starting use case because the impact is straightforward to measure: downtime reduction and asset availability gains that manufacturing teams can track from day one. But the architecture scales well beyond maintenance; the same data foundation that monitors CNC tool wear can feed quality analytics, energy optimization, or regulatory compliance.
Alongside the partner demo, HighByte shows three demonstrations at its stand in the AWS booth: an industrial data fabric integrating AWS IoT SiteWise, AWS IoT Core, and Amazon S3 Tables; a Pipeline AI Agent that uses Amazon Bedrock to assist with data pipeline configuration; and MCP Services that give AI agents direct access to industrial data.
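To illustrate the MCP pattern in general terms, the sketch below uses the open-source MCP Python SDK to expose a single read-only tool an AI agent could call for machine data. It is not HighByte's MCP Services; the server name, tool, and data are hypothetical stand-ins.

```python
# Illustrative sketch of the general MCP pattern, not HighByte's MCP Services.
# Uses the open-source MCP Python SDK; the server name, tool, and data below are
# hypothetical stand-ins for a contextualized industrial data source.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("industrial-data-demo")

# Hypothetical in-memory stand-in for the latest contextualized machine records.
LATEST_READINGS = {
    "cnc-01": {"spindle_load_pct": 72.4, "tool_wear_mm": 0.31, "status": "running"},
}

@mcp.tool()
def get_machine_condition(machine_id: str) -> dict:
    """Return the latest contextualized condition record for a machine."""
    return LATEST_READINGS.get(machine_id, {"error": f"unknown machine: {machine_id}"})

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio so an MCP-capable agent can call it
```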
i3X: A Direct API Into Factory Data
HighByte is one of the first companies to support i3X, a standard for industrial data interoperability developed through CESMII. HighByte CTO Aron Semle joined the i3X working group alongside engineers from vendor and end-user organizations across the industry.
The standard takes the strongest elements of protocols like OPC UA and addresses their limitations, particularly for organizations building a Unified Namespace. Rather than replacing OPC UA or MQTT, i3X wraps and extends existing protocols, offering a direct API into factory data. For manufacturers who have already invested in OPC UA infrastructure, this means a path forward without ripping out what already works.
CESMII plans a major announcement during the week of Hannover Messe 2026.
Pharma to Utilities: Three Case Studies, One Data Architecture
The strongest evidence that data contextualization produces manufacturing outcomes comes from HighByte’s customer deployments. Three case studies span two theater stages at Hannover Messe, covering three different industries with the same underlying data architecture:
| Company | Industry | Use Case | Presentation |
| --- | --- | --- | --- |
| Alcon | Pharmaceuticals | Predictive maintenance, defect monitoring | AWS Theater, Tue Apr 21, 10:00 AM |
| Bayer | Pharmaceuticals / Life Sciences | Streamlined industrial AI pipelines with AWS | AWS Theater, Wed Apr 22, 4:00 PM |
| National Grid | Utilities | Data pipeline modernization for regulated operations | Microsoft Theater, Thu Apr 23, 3:00 PM |
John Harrington, HighByte’s Chief Product Officer, presents the Alcon and National Grid case studies. CTO Aron Semle presents the Bayer deployment.
Pharmaceuticals and utilities look like different worlds, yet these three companies faced the same underlying challenge: the data layer. Whether monitoring pharmaceutical production quality, streamlining chemical batch processes, or optimizing grid operations, the pattern repeats: raw OT data needs contextualization before any analytics or AI model can produce reliable results. The companies that solve the data layer first spend less time on data wrangling and more time on the use cases that generate value.
What Should Manufacturers Take From This?
Alcon, Bayer, and National Grid each invested in the data foundation before scaling AI. They contextualized data at the source, then connected it to analytics through a partner network of best-of-breed tools. Asked what she was most proud to showcase at Hannover Messe, Torey Penrod-Cambra said: “It is what customers do with it. That is what is important.”
All three started with a single, measurable use case and built the data architecture around it. Once the contextualization layer was in place, additional use cases followed without re-architecture.
HighByte appears at Hannover Messe 2026 as a Gold Sponsor at the AWS booth in Hall 15, Stand D76.
This article is based on a video interview with Torey Penrod-Cambra of HighByte and Greg Orloff from IIoT World, recorded just before Hannover Messe 2026.
Editorially independent. Sponsored by HighByte.
Frequently Asked Questions
1. What is data contextualization in manufacturing?
Data contextualization is the process of adding meaning, structure, and relationships to raw industrial data so that AI, machine learning, and analytics systems can use it reliably. Without context, factory sensor readings are numbers without meaning. With it, those same readings feed predictive maintenance, quality monitoring, and energy optimization.
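A minimal before-and-after illustration, with hypothetical field names and values:

```python
# Hypothetical before-and-after example of contextualization.
# Raw reading as it might arrive from a PLC tag: a number with no meaning attached.
raw = {"tag": "N7:42", "value": 74.2}

# The same reading with context: identity, units, limits, and machine state that
# let an analytics or ML application interpret it without tribal knowledge.
contextualized = {
    "asset": "Filler-3",
    "site": "Plant-A",
    "measurement": "bearing_temperature",
    "value": 74.2,
    "unit": "degC",
    "high_limit": 85.0,
    "machine_state": "running",
    "source_tag": "N7:42",
}
```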
2. How does the i3X standard differ from OPC UA?
The i3X standard wraps and extends existing protocols like OPC UA rather than replacing them. It offers a direct API into factory data and addresses limitations that manufacturers encounter when building a Unified Namespace. HighByte CTO Aron Semle joined the i3X working group at CESMII alongside engineers from both vendor and end-user organizations.
3. Why do manufacturers need multiple vendors for industrial AI?
The full data-to-insight pipeline spans automation, data storage, ML model building, and application development. HighByte’s Hannover Messe demo combines Siemens for automation, Snowflake for data storage, RapidMiner for ML models, and Mendix for custom applications, with HighByte Intelligence Hub handling data contextualization through a Unified Namespace.
4. What results have companies achieved with data contextualization?
At Hannover Messe 2026, HighByte presents case studies from Alcon (predictive maintenance and defect monitoring in pharmaceuticals), Bayer (streamlined AI pipelines with AWS), and National Grid (data pipeline modernization in utilities), demonstrating that the same data architecture scales across industries with different regulatory and operational requirements.