AI in Manufacturing Introduces a New Risk: Can You Trust What the Model Learned?

Why AI risk in factories is no longer just a cybersecurity issue

Manufacturers are beginning to use AI in areas that influence real operational decisions: quality analysis, anomaly detection, process optimization, and predictive maintenance. As this accelerates, a new category of risk is emerging, one that is fundamentally different from traditional IT or OT security.

The question is no longer only whether systems are protected from intrusion. It is whether AI models themselves can be trusted: how they were trained, what data influenced them, and whether that data was manipulated, incomplete, or biased.

Training data is becoming a critical attack surface

AI systems learn behavior from data. In manufacturing, that data reflects how processes run, how machines behave, and how decisions are made. If training data is altered, poisoned, or selectively biased, the model may behave incorrectly, without any obvious system failure.

This risk is difficult to detect because the model still appears to function normally. It produces outputs, predictions, and recommendations. The problem lies in how it learned to produce them.

For manufacturers, this introduces a subtle but serious concern: incorrect AI-driven decisions may not look like failures. They may look like plausible recommendations that gradually degrade performance, quality, or safety.

Provenance matters more than model sophistication

As AI adoption grows, many manufacturers consume models or AI services without clear visibility into their origin. Questions often remain unanswered:

  • What data was the model trained on?
  • Was that data representative of industrial environments?
  • Could it have been manipulated, intentionally or unintentionally?
  • Has the model been updated, retrained, or fine-tuned, and by whom?

Without clear provenance, manufacturers are asked to trust outcomes without understanding their foundations. In industrial contexts, where AI outputs may influence production parameters or maintenance actions, that trust gap becomes risky.
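As an illustrative sketch of what "clear provenance" can mean in practice, the snippet below fingerprints a training dataset and logs who trained the model and when. This is not a standard or a vendor API; the record fields and function names are hypothetical, chosen only to show how little is needed to start answering the questions above.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class ModelProvenance:
    """Hypothetical minimal lineage record for a deployed model."""
    model_name: str
    model_version: str
    training_data_sha256: str  # fingerprint of the exact training set used
    trained_by: str            # team or vendor accountable for the training run
    trained_at: str            # ISO 8601 timestamp of the training run

def fingerprint_dataset(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute a SHA-256 digest of the training data file, chunk by chunk."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def record_provenance(model_name: str, model_version: str,
                      data_path: str, trained_by: str) -> str:
    """Return a JSON provenance record that can be stored alongside the model."""
    record = ModelProvenance(
        model_name=model_name,
        model_version=model_version,
        training_data_sha256=fingerprint_dataset(data_path),
        trained_by=trained_by,
        trained_at=datetime.now(timezone.utc).isoformat(),
    )
    return json.dumps(asdict(record), indent=2)
```

A record like this does not prove the data was clean, but it makes later questions answerable: if the stored hash no longer matches the dataset, or a retraining has no accountable owner, the trust gap is visible instead of hidden.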

Attackers are already using AI; defenders must adapt

Another concern raised during the discussion is asymmetry. Attackers are already using AI to accelerate their efforts: generating malware, automating reconnaissance, and creating more convincing impersonation attempts.

At the same time, manufacturers are still learning how to deploy AI defensively and operationally. This imbalance increases pressure to understand not just how AI is used, but how it can be misused, especially in environments where digital decisions affect physical systems.

Why this risk is hard to govern with traditional controls

Traditional security controls focus on access, encryption, and segmentation. They remain necessary, but they are insufficient for governing AI behavior that is shaped by data over time.

AI-related risks require different questions:

  • Who controls training and retraining?
  • How is training data validated?
  • How do we detect drift caused by bad or manipulated inputs?
  • When should a model’s recommendations be challenged or overridden?

These questions sit between cybersecurity, operations, and engineering. They do not belong neatly to one department, which makes them easy to overlook.

The 4-Step Checklist for Industrial AI Trust

Manufacturers do not need to solve AI model integrity overnight. But they do need to acknowledge that AI trustworthiness is becoming an operational issue, not just a technical one.

Early steps include:

  1. Verify Provenance: Demand transparency about training data and model updates.

  2. Secure the Foundation: Treat training data as a protected asset.

  3. Establish Governance: Define clear ownership and accountability for AI behavior.

  4. Monitor for Drift: Recognize that AI errors may be subtle, not catastrophic.

The manufacturers who address these questions early will be better positioned to scale AI safely. Those who do not may find themselves debugging outcomes they no longer fully understand.

Sponsored by Cybus

This article reflects insights from an IIoT World Manufacturing Day discussion on data sovereignty and industrial data access, sponsored by Cybus.
Contributors to the session included Peter Sorowka (Cybus), Marc Jäckle (MaibornWolff), Martin May (SCHUNK), Aleksandar Hudic (Schwarz Digits), with moderation by Lara Ludwigs (Cybus).