Greg: In this month’s column, I continue a conversation with my “mentor self” (see part one in April ‘23). In part two, I delve into the opportunities to increase a plant’s bottom line by taking advantage of what “the mentor” wrote for Chapter 11, “Improving Process Performance,” in “Process/Industrial Instruments and Controls Handbook, Sixth Edition,” from McGraw-Hill, and in “Advances in Reactor Measurement and Control” from the ISA.
Mentor: The performance of a process often depends on missing information about stream composition. The instrumentation, design, installation and maintenance costs of analyzers, and the measurement delay their cycle time introduces, discourage production plants from investing in their widespread use. Often, a plant depends on lab analysis, and if analyzers are installed, they are most likely on the product stream.
There’s an opportunity to increase the performance of individual unit operations by using inferential measurements of stream composition. These measurements enable the calculation of online key performance indicators (KPIs) for the efficiency of the unit operation, such as the ratio of the cost of inputs (e.g., running average of the mass and energy used multiplied by cost) to the value of the output component (e.g., running average of the component mass produced multiplied by monetary value). Since the composition of raw materials, intermediates and products can vary significantly, inferential measurements of stream composition are essential for KPIs to be representative of process performance. If cost and value are too proprietary to be displayed, the KPIs can be based on the running averages of the masses alone.
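For illustration, a minimal sketch of such an efficiency KPI in Python, where every tag name, flow and price is a hypothetical assumption rather than a value from any particular plant system:

```python
# Minimal sketch of an online efficiency KPI (all names, costs and
# flows are illustrative assumptions).
feed_cost = 0.42      # $/kg of raw material
steam_cost = 0.03     # $/kg of steam (energy input)
product_value = 1.85  # $/kg of product component

# Running averages over a representative period (see below).
avg_feed_kg = 1250.0    # kg/h of raw material, averaged
avg_steam_kg = 400.0    # kg/h of steam, averaged
avg_product_kg = 980.0  # kg/h of product component, averaged

# Efficiency KPI: cost of inputs per dollar of output component.
kpi = (avg_feed_kg * feed_cost + avg_steam_kg * steam_cost) / \
      (avg_product_kg * product_value)
print(f"input cost per $ of product = {kpi:.3f}")
```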
A running average, also known as a moving average over a representative period, can reduce noise and inverse response, providing a more consistent indication of changes in process performance. In a running average, the newest value of the metric replaces the oldest value. For large periods, a dead-time block, with the dead time set equal to the period of the running average, efficiently saves the old values. The output of the dead-time block is then the old value to be replaced by the new value at the input of the dead-time block.
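A minimal Python sketch of this technique follows, using a fixed-length delay line to play the role of the dead-time block (the scan interval and initialization are assumptions):

```python
from collections import deque

class RunningAverage:
    """Running average over a fixed period, using a delay line as the
    dead-time block that recalls the oldest value.

    period_s : averaging period in seconds
    scan_s   : execution interval of the metric calculation (assumed uniform)
    """
    def __init__(self, period_s, scan_s, initial=0.0):
        self.n = max(1, int(period_s / scan_s))
        # The deque acts as the dead-time block: its oldest entry is the
        # value displaced when a new value arrives at the input.
        self.delay = deque([initial] * self.n, maxlen=self.n)
        self.avg = initial

    def update(self, new_value):
        old_value = self.delay[0]     # dead-time block output (old value)
        self.delay.append(new_value)  # dead-time block input (new value)
        # The newest value replaces the oldest in the average.
        self.avg += (new_value - old_value) / self.n
        return self.avg
```

Updating the average incrementally as (new − old)/n avoids summing the entire history on every scan, which is what makes the dead-time block approach efficient for large periods.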
The period for metrics must be large enough to eliminate noise and inverse response, and sized to support decisions based on the objective and the process type. For evaluating operator and control-system actions, the period is normally the batch cycle time for batch processes and the operator shift for continuous processes. The period is a month for correlation with accounting metrics. To quickly alert operators to the consequences of actions taken (e.g., changing a controller setpoint or mode), the period can be reduced to as short as six times the total loop dead time. The metrics at the end of a month, batch or shift are historized.
There is often a tradeoff between process metrics. Increasing production rate often comes at the cost of decreasing efficiency. Changing production rates reduces process efficiency, and may reduce process capacity, since the move to the new process operating point takes time and the product made during the transition may not meet specifications.
Increases in yield (a decrease in raw-material use) can be taken as an increase in process efficiency if the raw-material feed rate is decreased. There may be an accompanying decrease in the cost of recycle and waste-treatment operations. Alternatively, increases in yield can be taken as an increase in process capacity by keeping the raw-material feed rate constant. Prolonging a batch can improve yield and efficiency, but the longer batch cycle time translates to less batch capacity, particularly as reaction or purification rates decline near the endpoint. The time it takes to reach a new setpoint can be shortened by overdriving the manipulated variable past its final resting value.
For processes with large volumes, such as distillation columns and reactors, this time reduction is critical. For batch processes, reaching a new composition, pH, pressure or temperature setpoint is often impossible without overdrive. Process efficiency is reduced during the overdrive, but process capacity is increased, either as a reduction in batch cycle time or as an increase in continuous production rate upon reaching setpoint.
The translation of online metrics to the bottom-line effect on production-unit profitability in the plant accounting system is especially important. This means benefits must be reported monthly and presented per the accounting format and procedures.
Greg: What are our opportunities for inferential measurements?
Mentor: The opportunities are extensive. Step response models have a proven track record in model predictive control (MPC), and they can take advantage of MPC software to identify the dynamics. The process inputs affecting the inferential measurement are stepped in both directions. Typical inputs for biological and chemical processes, besides accurate mass flow measurements, are accurate stream and equipment temperature, conductivity, dielectric spectroscopy, dissolved oxygen, dissolved carbon dioxide, pH and turbidity measurements.
Analyzer measurements are predicted for comparison with the analyzer results. The MPC inputs are independent variables. In model identification, controllers whose manipulated or controlled variables affect the inferential measurement must be in manual mode. For example, a distillation-column temperature controller that manipulates a reflux-to-feed ratio would need to be in manual during identification of the dynamics of an inferential measurement of overhead or bottoms composition. The identified dynamics are used to synchronize and correct inferential measurements with analyzer results. The inferential measurement used for process control provides a new update without the analyzer dead time, offering much tighter control.
Analysis of the MPC matrix condition number can help eliminate correlations between inputs to ensure the MPC inputs are independent variables. The model dynamics are updated online based on operating point to account for changes in process dynamics. These inferential measurements are commonly referred to as dynamic linear estimators. Updating the open-loop gain may remove the linear restriction.
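As a minimal sketch of a condition-number check (the gain matrix here is an illustrative assumption, not identified plant data):

```python
import numpy as np

# Hypothetical open-loop gain matrix from MPC identification:
# rows = predicted variables, columns = MPC inputs.
G = np.array([[1.2, 0.9],
              [1.0, 0.8]])

# Singular value decomposition; a large ratio of largest to smallest
# singular value flags nearly collinear inputs whose effects can't be
# distinguished, so the test or input set should be redesigned.
u, s, vt = np.linalg.svd(G)
condition_number = s[0] / s[-1]
print(f"condition number = {condition_number:.1f}")
```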
Step response models use an open-loop gain, a total-loop dead time, and primary and secondary time constants. An open-loop, steady-state process gain is used for processes that decelerate to a steady state because of negative feedback in the process. An open-loop integrating process gain is used for processes that ramp because there is no feedback in the process. An open-loop runaway process gain is used for processes that accelerate because of positive feedback in the process.
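The three response types can be sketched with the textbook step responses below. This is a hypothetical helper for illustration, not MPC vendor software, and the parameter names are assumptions:

```python
import math

def step_response(t, gain, dead_time, tau1, tau2=0.0,
                  kind="self-regulating"):
    """Open-loop response at time t (sec) to a unit step input.

    gain      : steady-state gain, integrating gain (1/sec), or
                runaway gain, per the process type
    dead_time : total-loop dead time (sec)
    tau1      : primary time constant (sec)
    tau2      : secondary time constant (sec), self-regulating only
    """
    if t <= dead_time:
        return 0.0
    te = t - dead_time
    if kind == "self-regulating":
        # Decelerates to a steady state (negative feedback).
        if tau2 > 0 and tau2 != tau1:
            return gain * (1 - (tau1 * math.exp(-te / tau1)
                                - tau2 * math.exp(-te / tau2))
                               / (tau1 - tau2))
        return gain * (1 - math.exp(-te / tau1))
    if kind == "integrating":
        # Ramps without reaching a steady state (no feedback).
        return gain * te
    # Runaway: accelerates because of positive feedback.
    return gain * (math.exp(te / tau1) - 1)
```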
The model input is a change in the correction variable and the model output is the resulting change in the process variable. The models identified by MPC software include the effects of valve or variable-speed-drive dynamics and measurement dynamics. The process dead time is a total-loop dead time that includes the dynamics of the automation system. The step response model used for closed-loop control has the analyzer dead time subtracted.
The new value of the process variable, converted into engineering units and including the analyzer dead time, is compared to the actual analyzer result. Analyzer results used for correction are screened and rejected if not feasible. The step response model output subtracted from the analyzer result is the error in the inferential measurement. The error is multiplied by a factor much less than one (e.g., < 0.4), and the resulting fraction of the error is added to the inferential measurement both with and without the analyzer dead time.
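A minimal sketch of one correction pass in Python, assuming the two model outputs are already synchronized as described (the function and argument names are illustrative, not a standard API):

```python
def correct_inference(inference_now, inference_delayed, analyzer_result,
                      filter_factor=0.3):
    """One correction pass for an inferential measurement.

    inference_now     : model output without analyzer dead time,
                        used for control
    inference_delayed : model output including analyzer dead time,
                        synchronized with the analyzer result
    analyzer_result   : latest screened (feasible) analyzer value
    filter_factor     : fraction of the error applied, much less
                        than one (e.g., < 0.4), to avoid
                        overcorrecting on analyzer noise
    """
    error = analyzer_result - inference_delayed
    correction = filter_factor * error
    # The same fraction of the error is added to the inference both
    # with and without the analyzer dead time.
    return inference_now + correction, inference_delayed + correction
```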
Greg: How can first principles process simulations help?
Mentor: Open-loop tests can be conducted more extensively using process simulations. These simulations can help fill in the blanks, particularly when it comes to process gains. A significant untapped opportunity is adapting a real-time, dynamic, first-principles model to provide unmeasured compositions. The first-principles model can be adapted by an MPC whose targets are the actual controller outputs, whose controlled variables are the simulation controller outputs, and whose manipulated variables are first-principles simulation parameters. The MPC for adapting the real-time simulation can be developed unintrusively through open-loop tests of a separately running dynamic simulation.
Greg: What are the opportunities for neural networks and projection-to-latent structures (PLS)?
Mentor: Neural networks could benefit from inputs identified by the principal-component analysis (PCA) used for PLS. Neural networks and PCAs with dynamic fidelity, possibly achieved by a multivariable autoregressive (ARX) model that includes more than just delays on the process inputs for dynamic compensation, could offer insights and fill in the blanks for potential inferential measurements.
A design of experiments with key process controllers in manual mode would be conducted. If controllers can’t be operated in manual, the controller outputs must be included in the test data used to generate the model, to capture the transfer of variability from the controlled variables to the manipulated variables. The dynamics applied to model inputs should be updated online based on operating point. Users should be able to drill down to see the hidden layers in neural networks. Reversals of process-gain sign and outlandish predictions by neural networks beyond the test data must be prevented. A sketch of a PLS inferential model fit to such test data follows.
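This minimal sketch uses scikit-learn's PLSRegression on synthetic stand-in data; the inputs, coefficients and noise are assumptions purely for illustration:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)

# Hypothetical designed-experiment data: rows are samples, columns are
# process inputs (flows, temperatures, and controller outputs when
# loops can't be run in manual).
X = rng.normal(size=(200, 6))
# Lab or analyzer composition to be inferred (synthetic stand-in).
y = X @ np.array([0.5, -0.2, 0.0, 0.8, 0.1, -0.4]) \
    + 0.05 * rng.normal(size=200)

pls = PLSRegression(n_components=3)  # latent structures retained
pls.fit(X, y)
y_hat = pls.predict(X)

# Inspect coefficients to catch reversals of process-gain sign
# before the model is trusted beyond the test data.
print(pls.coef_.ravel())
```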
I see the main benefit of neural networks and PLS as identifying unsuspected relationships that can be further explored by first-principles process simulations and quantified by inclusion in MPC tests used to determine open-loop, step-response models. Signal characterization could be used to deal with nonlinear relationships identified by neural networks.