The goal of Industry 4.0 and similar digital factory models is to customize manufacturing to “units of one,” so when a customer places an order it’s added to the production plan at the factory with all the options they desire. Integrating systems like this represents the highest level of a top-down approach to sending production targets to the shop floor.
If you don’t work in a factory environment, “units of one” isn’t a realistic goal. That doesn’t mean integration with business networks or the “top floor” doesn’t happen. There’s already information that control systems can and do extract from the demilitarized zone (DMZ) as the basis for setting production and optimization targets. Electrical demand sent to power plants and production targets are clear examples of setpoints influenced by external factors, even though they may be entered manually by the operator. Another example that’s often “read” directly is product quality, where laboratory sample results reported by the laboratory information management system (LIMS) are passed to the control system as a process variable (PV) signal (with large lag) to adjust a regulatory control loop.
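To make that lab-to-loop path concrete, here’s a minimal sketch of an integral-only setpoint trim driven by an infrequent LIMS result. The tag names, gain and setpoint limits are illustrative assumptions rather than values from any particular system; the gain is kept small because each lab sample lags the process by hours.

```python
def lab_quality_trim(current_sp, lab_result, quality_target,
                     gain=0.1, sp_limits=(50.0, 80.0)):
    """Nudge a regulatory loop's setpoint from a slow LIMS quality result.

    Integral-only action with a small gain, because the lab sample
    reflects the process state hours ago (large lag). All numbers
    here are placeholders for illustration.
    """
    error = quality_target - lab_result
    new_sp = current_sp + gain * error
    # Clamp to safe operating limits before writing to the loop.
    return max(sp_limits[0], min(sp_limits[1], new_sp))

# Example: a new lab purity result arrives and trims a temperature setpoint.
sp = lab_quality_trim(current_sp=65.0, lab_result=97.2, quality_target=98.0)
print(sp)  # 65.08
```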
Most of us in the process control industries are more focused on getting data to flow the other way: from the shop floor up to the historian in the DMZ. This is done so process engineers, as well as those in accounting, management, regulatory reporting, production planning or maintenance, can access process-related information, including device status and diagnostics.
Though diagnostic data from field devices has been available since digital communications such as HART and fieldbus technologies were introduced more than 25 years ago, many facilities still only use it for manual operations such as maintenance and configuration. I suspect one reason for this is concern about mixing up control and “maintenance” domains, which could affect security and bandwidth.
NAMUR, the European user association for automation technology and digitalization in process industries, developed its NAMUR Open Architecture (NOA) to address this problem. It creates a parallel architecture for extracting and processing maintenance and operations information through an open interface between “core process control” and “monitoring and optimization.” NOA includes plant-specific monitoring and operations modules, such as diagnostic alert management and device management. It also links to central monitoring and operations modules, such as analytics and maintenance scheduling, in a more IT-like business infrastructure. Because this information isn’t used in real time and often isn’t updated every scan cycle, such an infrastructure is feasible.
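NOA’s open interface is generally associated with OPC UA, so a monitoring-and-optimization application might pull device diagnostics with a read-only client along these lines. This is a sketch using the Python opcua package; the endpoint URL, namespace and node identifiers are placeholders I’ve assumed for illustration.

```python
# pip install opcua -- a read-only diagnostics pull, sketched for illustration
from opcua import Client

ENDPOINT = "opc.tcp://plant-gateway:4840"  # assumed endpoint, not a real address
DIAG_NODES = [
    "ns=2;s=FT101.Diagnostics.Status",  # node IDs are placeholders
    "ns=2;s=PT205.Diagnostics.Status",
]

client = Client(ENDPOINT)
client.connect()
try:
    for node_id in DIAG_NODES:
        node = client.get_node(node_id)
        # Read-only access: diagnostics flow up, nothing is written back
        # to the core process control domain.
        print(node_id, "->", node.get_value())
finally:
    client.disconnect()
```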
Managing field devices, which is the goal of NOA and related operations technology (OT) infrastructures, is only a small part of the connected-world vision. Maintaining the security and integrity of data packets is another requirement because, without confidence in the data, any use of it is compromised. Data confidence is critical, especially for custody transfer or measurements with financial implications. (We’ll conveniently ignore personal data for now since, within the OT world, it should be minimal.)
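As a small illustration of what confidence in a data packet can mean in practice, the sketch below signs a measurement with a keyed hash (HMAC) so a receiver can detect tampering in transit. The key, tag name and payload shape are assumptions for the example; a real deployment would also need key management and replay protection.

```python
import hashlib
import hmac
import json

# Shared secret between sender and receiver; key management is out of scope here.
SHARED_KEY = b"replace-with-a-managed-secret"

def sign(payload: dict) -> str:
    """Compute an HMAC-SHA256 tag over a canonical JSON encoding."""
    msg = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(SHARED_KEY, msg, hashlib.sha256).hexdigest()

def verify(payload: dict, tag: str) -> bool:
    """Constant-time check that the payload hasn't been altered."""
    return hmac.compare_digest(sign(payload), tag)

reading = {"tag": "FT-201", "value": 42.7, "units": "m3/h"}  # illustrative payload
tag = sign(reading)
assert verify(reading, tag)                          # intact packet passes
assert not verify({**reading, "value": 99.9}, tag)   # tampered value fails
```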
Trusted ledger technology, similar to the blockchain that underpins cryptocurrencies, is one way to assure the integrity of a message as it passes through different networks. Unfortunately, most blockchain algorithms are too data-intensive or cloud-dependent for many users to feel comfortable with them in a control system, but Internet of Things (IoT) devices, with their limited processing capabilities, are driving adoption of small-footprint alternatives. A trusted ledger creates an immutable database, meaning information, once stored, can’t be deleted, and any updates are permanently recorded, making it possible to trace data back to its original source. For example, a flow computer measurement can be tagged to a trusted ledger, so the associated thread from the meter through to the final invoice, and potentially its payment, can all be linked together. This digital thread of end-to-end data integrity is one part of the digital twin fabric. One of the objectives of a digital twin model is to feed information from the real-world implementation back to the model for continuous improvement and refinement. This is another reason to get the OT and IT worlds connected.
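The core mechanism behind a trusted ledger can be shown in a few lines: each record carries the hash of the previous record, so any after-the-fact change breaks the chain and shows up on verification. The sketch below is a deliberately minimal illustration of that idea, not any vendor’s implementation; the meter and invoice payloads are invented to mirror the flow-computer example.

```python
import hashlib
import json
import time

class TrustedLedger:
    """Minimal append-only hash chain illustrating an immutable record store."""

    def __init__(self):
        self.records = []

    def append(self, payload: dict) -> dict:
        prev_hash = self.records[-1]["hash"] if self.records else "0" * 64
        record = {
            "index": len(self.records),
            "timestamp": time.time(),
            "payload": payload,
            "prev_hash": prev_hash,  # links this record to its predecessor
        }
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self.records.append(record)
        return record

    def verify(self) -> bool:
        """Recompute every hash; any edit or deletion breaks the chain."""
        for i, rec in enumerate(self.records):
            body = {k: v for k, v in rec.items() if k != "hash"}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if digest != rec["hash"]:
                return False
            if i > 0 and rec["prev_hash"] != self.records[i - 1]["hash"]:
                return False
        return True

# Invented payloads: a meter total and the invoice that references it.
ledger = TrustedLedger()
ledger.append({"source": "flow-computer FQI-101", "total_m3": 1834.2})
ledger.append({"source": "billing", "invoice": "INV-2044", "meter_record": 0})
assert ledger.verify()
```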
We may not have reached the objectives of top-floor-to-shop-floor integration, but the pieces are being put in place to help us get there.