Three practical use cases for Industrial DataOps



Most manufacturing companies recognize the benefits of leveraging industrial data to improve production and reduce costs, but scaling their pilots and small-scale tests to the plant-wide, multi-plant, or enterprise level remains challenging. Common obstacles include the time and cost of integration projects, the fear of exposing operational systems to cyber threats, and a shortage of skilled personnel.

The difficulty of integrating data streams across applications in a multi-system and multi-vendor environment is the root of these problems. Standardizing data models, flows, and networks is hard work. A typical factory can have hundreds of data sources distributed across machine controls, PLCs, sensors, servers, databases, SCADA systems, and historians—to name a few.

Industrial DataOps offers a better approach to data integration and management: a software environment for data documentation, governance, and security, from the most granular level of a single machine in a factory up to the line, plant, or enterprise. It adds a separate data abstraction layer, or hub, that securely collects data into standard data models for distribution across on-premises and cloud-based applications.

These three use cases illustrate how Industrial DataOps can integrate your plant-floor operational systems with your business IT systems, as well as those of outside vendors.

1. “I need to accelerate and scale an analytics deployment.”

Let’s say you have several injection molding lines and want to compare 20 data points from each line to measure KPIs, calculate overall equipment effectiveness (OEE), and run analytics to optimize performance across the fleet. But the machinery was purchased decades apart, the controls come from various vendors, and both have been modified and customized over the years, as is often the case.

Despite efforts to standardize and integrate critical aspects of this infrastructure, the context and data structures vary. Even if they all use pressure, temperature and optical sensors, the vendors, technologies, communication protocols and even units of measure vary.

Instead of embarking on a costly and downtime-inducing rip-and-replace project, or writing custom code to massage the data, process or controls engineers can connect their machines’ OPC UA tags to standard information models in an Industrial DataOps hub. The hub can run on a variety of edge platforms: a single-board IoT gateway, a Raspberry Pi, an industrial switch, or any Linux device, up through Windows 10 and Windows Server. For scalability, isolation, and security, hubs can be installed at the machine, line, or facility level.

Now, those injection molding machines produce streamlined, standardized data that Operational Technology (OT) can easily hand off to both local systems on the edge network and data scientists working in cloud-based systems—reducing cloud ingest costs and accelerating analytics.
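To make the tag-mapping idea concrete, here is a minimal sketch of the normalization step described above. The tag names, model fields, and unit conversions are illustrative assumptions, not a real product API: each line's local OPC UA tags are mapped to one standard model, with units converted along the way.

```python
# Hypothetical sketch: normalizing heterogeneous machine tags into one
# standard information model. All tag names and fields are illustrative.

# Standard model fields every injection molding line maps into.
STANDARD_FIELDS = ("barrel_temp_c", "injection_pressure_bar")

# Per-line mapping: local OPC UA tag -> (standard field, conversion to standard unit)
LINE_MAPPINGS = {
    "line_1": {  # older machine reports Fahrenheit and psi
        "ns=2;s=Temp01": ("barrel_temp_c", lambda f: (f - 32) * 5 / 9),
        "ns=2;s=Press01": ("injection_pressure_bar", lambda psi: psi * 0.0689476),
    },
    "line_2": {  # newer machine already uses metric units
        "ns=4;s=BarrelTemp": ("barrel_temp_c", lambda c: c),
        "ns=4;s=InjPressure": ("injection_pressure_bar", lambda bar: bar),
    },
}

def normalize(line: str, raw_tags: dict) -> dict:
    """Map raw tag readings from one line into the standard model."""
    mapping = LINE_MAPPINGS[line]
    model = {}
    for tag, value in raw_tags.items():
        if tag in mapping:
            field, convert = mapping[tag]
            model[field] = round(convert(value), 2)
    return model

# Both lines yield the same model shape despite different tags and units.
print(normalize("line_1", {"ns=2;s=Temp01": 392.0, "ns=2;s=Press01": 14503.8}))
print(normalize("line_2", {"ns=4;s=BarrelTemp": 200.0, "ns=4;s=InjPressure": 1000.0}))
```

Once every line emits the same model, fleet-wide OEE and analytics queries no longer need per-machine logic.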

2. “I need remote facility visibility and want to perform multi-site analysis.”

In industries such as pulp & paper, data flows from multiple sites vary broadly from “wet” continuous processes to hybrid batch and discrete packaging processes. The same goes for industries such as specialty chemicals and food & beverage.

To meet the challenge of integrating data from multiple systems across multiple plants, many companies maintain a centralized, corporate Engineering and IT team. This team needs access to data to monitor, maintain, and optimize assets to meet their enterprise-wide goals.

To achieve this level of performance analysis, the corporate group defines uniform models and distributes them to the plants, which install them in an edge-native Industrial DataOps hub.

Engineers map their local data points to the standard models as systems are modified or added. If a new plant is acquired, data can be easily mapped to the models as well. As a result, the company avoids the downtime caused by traditional/legacy methods or rip-and-replace projects.
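The corporate-model workflow above can be sketched as follows. The model fields, plant tag names, and validation logic are assumptions for illustration: headquarters owns the model definition, and each plant contributes only a mapping from its local tag names to the shared fields.

```python
# Hypothetical sketch: a corporate team publishes one uniform model and each
# plant supplies only a local tag mapping. All names are illustrative.

CORPORATE_MODEL = {  # required fields and their expected units
    "throughput_tph": "tons/hour",
    "moisture_pct": "percent",
}

def build_instance(plant_mapping: dict, local_readings: dict) -> dict:
    """Populate the corporate model from one plant's local tag names."""
    instance = {}
    for field in CORPORATE_MODEL:
        local_tag = plant_mapping.get(field)
        if local_tag is None or local_tag not in local_readings:
            raise KeyError(f"plant mapping missing data for '{field}'")
        instance[field] = local_readings[local_tag]
    return instance

# Plant A is a long-standing site; Plant B is a recent acquisition with
# different naming conventions. Only the mapping differs, not the model.
plant_a = {"throughput_tph": "PM1.PROD_RATE", "moisture_pct": "PM1.MOIST"}
plant_b = {"throughput_tph": "Line3/Rate", "moisture_pct": "Line3/Moisture"}

print(build_instance(plant_a, {"PM1.PROD_RATE": 42.5, "PM1.MOIST": 6.1}))
print(build_instance(plant_b, {"Line3/Rate": 38.0, "Line3/Moisture": 5.8}))
```

Onboarding an acquired plant then amounts to writing one new mapping rather than re-engineering the corporate analytics stack.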

Now, operational users can populate data models and make connections without writing a single line of code, and data scientists receive uniform, high-quality data. Analytics cycle time accelerates, and enterprise-level digital transformation gains the momentum it has previously lacked.

3. “I need to distribute industrial data to multiple business systems.”

Manufacturing companies need data to flow not just vertically from real-time systems to the front office, but across facilities and enterprises. These systems include SCADA, MES, ERP, laboratory/quality systems, asset/maintenance systems, cyber-threat monitoring systems, custom databases, dashboards, spreadsheet applications, and of course the IIoT infrastructure that enables analytics, machine learning, and AI investigations.

For decades, integration has been achieved through APIs and custom scripting from application to application, rather than through a unified environment through which all data sources flow. This approach buries integration code inside individual applications, making integrations hard to maintain. Inevitable changes to products, automation, and business systems can “break” integrations, resulting in undetected bad or missing data for weeks or even months.

Industrial DataOps prevents such breakdowns from occurring because integrations no longer hide in custom code between applications; they are all maintained through a solution that provides a common abstraction layer.
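The decoupling that the abstraction layer provides can be illustrated with a toy publish/subscribe sketch. The class and topic names are invented for this example: sources publish into one hub, consumers subscribe to model names, and every flow is visible and auditable in one place rather than buried in point-to-point code.

```python
# Hypothetical sketch of the decoupling idea: neither the publisher nor the
# subscribers know about each other; only the hub and its models are shared.
from collections import defaultdict

class DataHub:
    """Minimal abstraction layer: one place to connect, observe, and audit flows."""

    def __init__(self):
        self._subscribers = defaultdict(list)
        self.audit_log = []  # every message is recorded, so flows stay visible

    def subscribe(self, model_name, callback):
        self._subscribers[model_name].append(callback)

    def publish(self, model_name, payload):
        self.audit_log.append((model_name, payload))
        for callback in self._subscribers[model_name]:
            callback(payload)

hub = DataHub()
received = []

# An MES-style consumer and a dashboard both subscribe to the same model;
# neither talks to the machine directly.
hub.subscribe("molding.cycle", received.append)
hub.subscribe("molding.cycle", lambda p: received.append({"dashboard": p["cycle_s"]}))

# Swapping the machine or its vendor changes only the publisher side; the
# consumers and their subscriptions are untouched.
hub.publish("molding.cycle", {"cycle_s": 31.4})
print(received)
```

Because every integration passes through the hub, a broken or missing feed shows up in one audit trail instead of failing silently inside application code.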

Now, companies have a faster, easier, and more robust way to establish and maintain their many integrations with a solution that provides data visibility, maintenance, documentation, governance, and security.

About the author

This article was written by John Harrington, Chief Product Officer at HighByte, focused on product management, customer and partner success, and company strategy. His areas of responsibility include market research, customer use cases, product priorities, go-to-market, and financial planning.