Blindly pursuing the wholesale goal of reducing variability can lead to doing the wrong thing, reducing plant safety and performance. Here we look at some common mistakes that users may not realize they are making until they have a better concept of what is really going on. We seek to provide some insightful knowledge here to keep you out of trouble.
Is a smoother data historian plot or a statistical analysis showing less short-term variability good or bad? In the following situations the answer is bad, because the apparent smoothness misleads users and data analytics.
First of all, the most obvious case is surge tank level control. Here we want to maximize the variation in level in order to minimize the variation in the manipulated flow, typically to downstream users. This objective has the positive name of absorption of variability. What this is really indicative of is the principle that control loops do not make variability disappear but transfer variability from a controlled variable to a manipulated variable. Process engineers often have a problem with this concept because they think of setting flows per a Process Flow Diagram (PFD) and are reluctant to let a controller freely move them per some algorithm they do not fully understand. This is seen in predetermined sequential additions of feeds or heating and cooling in a batch operation rather than allowing a concentration or temperature controller to do what is needed via fed-batch control. No matter how smart a process engineer is, not all of the situations, unknowns and disturbances can be accounted for continuously. This is why fed-batch control is called semi-continuous. I have seen process engineers, believe it or not, sequence air flows and reagent flows to a batch bioreactor rather than going to Dissolved Oxygen or pH control. We need to teach chemical and biochemical engineers process control fundamentals, including the transfer of variability.
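To make the transfer of variability concrete, here is a minimal simulation sketch; the tank size, flows, disturbance, and PI tuning are all hypothetical. Tight level tuning drives the level standard deviation down but passes the inlet flow disturbance through to the manipulated outlet flow, while averaging (absorbing) tuning lets the level swing so the outlet flow stays smooth for downstream users.

```python
import numpy as np

# Minimal sketch of variability transfer in surge tank level control.
# Tank size, flows, disturbance, and PI tuning are all hypothetical illustrations.

def simulate_level_loop(kc, ti_s, hours=8, dt=1.0):
    """PI level controller manipulating outlet flow; returns (level std in %, outlet flow std in m3/s)."""
    area_m2, span_m = 10.0, 4.0            # tank cross-section and level transmitter span
    level_pct, sp_pct = 50.0, 50.0
    out_flow = 0.02                        # m^3/s
    integral = 0.0
    levels, out_flows = [], []
    for k in range(int(hours * 3600 / dt)):
        t = k * dt
        in_flow = 0.02 + 0.002 * np.sin(2 * np.pi * t / 1200.0)       # slow inlet flow disturbance
        level_pct += (in_flow - out_flow) / (area_m2 * span_m) * 100.0 * dt
        error = level_pct - sp_pct                                    # direct acting: high level -> more outflow
        integral += error * dt / ti_s
        out_flow = max(0.0, 0.02 + 0.001 * kc * (error + integral))   # controller output scaled to 0.001 m^3/s per %
        levels.append(level_pct)
        out_flows.append(out_flow)
    return np.std(levels), np.std(out_flows)

for name, kc, ti_s in [("tight level control    ", 10.0, 400.0),
                       ("averaging level control", 0.2, 10000.0)]:
    level_sd, flow_sd = simulate_level_loop(kc, ti_s)
    print(f"{name}: level std = {level_sd:.2f} %   outlet flow std = {flow_sd*1000:.2f} L/s")
```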
The variability of a controlled variable is minimized by maximizing the transfer of variability to the manipulated variable. Unnecessarily sharp movements of the manipulated variable can be prevented by a setpoint rate-of-change limit on analog output blocks for valve positioners or VFDs, or directly on other secondary controllers (e.g., flow or coolant temperature), along with the use of external-reset feedback (e.g., dynamic reset limit) with fast feedback of the actual manipulated variable (e.g., position, speed, flow, or coolant temperature). With external-reset feedback, there is no need to retune the primary process variable controller.
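Here is a minimal sketch of that arrangement, assuming hypothetical tuning, signal names, and a 1 %/s analog output rate limit: integral action is produced by passing the actual manipulated variable readback through a first-order lag equal to the reset time, so the rate limit slows the valve without winding up the controller or requiring it to be retuned.

```python
# Minimal sketch of a PI controller with external-reset feedback (dynamic reset limit).
# Integral action comes from filtering the *actual* manipulated variable readback
# (e.g., valve position or secondary PV) through a first-order lag with the reset time,
# so the controller output cannot outrun a rate-limited analog output.
# All names and numbers here are hypothetical.

def pi_external_reset(error, mv_readback, reset_state, kc, ti, dt):
    """One execution of a PI algorithm with external-reset feedback.
    reset_state is the lagged external-reset signal carried between executions."""
    # first-order lag of the external-reset signal (actual MV) with the reset time
    reset_state += (mv_readback - reset_state) * dt / ti
    out = kc * error + reset_state
    return out, reset_state

def rate_limited_ao(target, current, max_rate, dt):
    """Setpoint rate-of-change limit on the analog output block (%/s)."""
    delta = max(-max_rate * dt, min(max_rate * dt, target - current))
    return current + delta

# usage: sustained error with a 1 %/s rate limit on the analog output
kc, ti, dt = 2.0, 30.0, 1.0
ao = 50.0                      # % analog output actually sent to the valve
reset_state = ao               # initialize the reset state at the current output
for step in range(10):
    error = 5.0                # hypothetical sustained error
    out, reset_state = pi_external_reset(error, ao, reset_state, kc, ti, dt)
    ao = rate_limited_ao(out, ao, max_rate=1.0, dt=dt)
    print(f"step {step}: PI output = {out:.1f} %, rate-limited AO = {ao:.1f} %")
```

With conventional internal reset, the output in this example would keep ramping on the sustained error; with external reset it simply stays a proportional step ahead of the rate-limited analog output.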
Data analytics programs need to use manipulated variables in addition to controlled variables to indicate what is happening. For a process controller with tight control and infrequent setpoint changes, what is really happening is seen in the manipulated variable (e.g., analog output).
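As a simple illustration, the sketch below assumes a hypothetical historian export named loop_data.csv with timestamp, PV (controlled variable), and OUT (controller output) columns and computes hourly standard deviations of both; for a tightly controlled loop it is the OUT statistics that reveal the disturbance load.

```python
import pandas as pd

# Minimal sketch, assuming a hypothetical historian export "loop_data.csv"
# with columns timestamp, PV (controlled variable), and OUT (controller analog output).
df = pd.read_csv("loop_data.csv", parse_dates=["timestamp"], index_col="timestamp")

# With tight control and infrequent setpoint changes, the PV standard deviation
# says little; the disturbance load shows up in the manipulated variable (OUT).
stats = df[["PV", "OUT"]].resample("1h").std()
print(stats.describe())
```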
A frequent problem is data compression in a data historian that conceals what is really going on. Hopefully, this is only affecting the trend displays and not the actual variables being used by a controller.
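The sketch below uses a simple deadband (exception) scheme as a stand-in for historian compression algorithms such as swinging door; the signal and deadband values are hypothetical. A compression deadband wider than the oscillation amplitude makes a 0.8% peak-to-peak limit cycle vanish from the recorded trend.

```python
import numpy as np

# Minimal sketch of deadband (exception) compression, a simplified stand-in for
# historian compression algorithms such as swinging door. Numbers are hypothetical.

def deadband_compress(values, deadband):
    """Store a new sample only when it moves more than the deadband from the last stored value."""
    stored = [values[0]]
    for v in values[1:]:
        if abs(v - stored[-1]) > deadband:
            stored.append(v)
        else:
            stored.append(stored[-1])   # the trend keeps redrawing the last stored value
    return np.array(stored)

t = np.arange(0.0, 600.0, 1.0)                 # 10 minutes at 1 s samples
pv = 50.0 + 0.4 * np.sin(2 * np.pi * t / 120)  # 0.8% peak-to-peak limit cycle
recorded = deadband_compress(pv, deadband=1.0) # 1% compression deadband

print(f"actual peak-to-peak:   {pv.max() - pv.min():.2f} %")
print(f"recorded peak-to-peak: {recorded.max() - recorded.min():.2f} %")
```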
The next most common problem has been extensively discussed by me, so at this point you may want to move on to more pressing needs. This problem is the excessive use of signal filters, which may be even more insidious because the controller does not see a developing problem as quickly. A signal filter that is less than the largest time constant in the loop (hopefully in the process) creates dead time. If the signal filter becomes the largest time constant in the loop, the previously largest time constant creates dead time. Since controller tuning based on the largest time constant has no idea where that time constant resides, the controller gain can be increased, which combined with the smoother trends can lead one to believe the large filter was beneficial. The key here is a noticeable increase in the oscillation period, particularly if the reset time was not increased. Signal filters become increasingly detrimental as the process loses self-regulation. Integrating processes such as level, gas pressure and batch temperature are particularly sensitive. Extremely dangerous is the use of a large filter on the temperature measurement for a highly exothermic reaction. If the PID gain window (ratio of maximum to minimum PID gain) is reduced by measurement lag to the point of not being able to withstand nonlinearities (e.g., a ratio less than 6), there is a significant safety risk.
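The deception can be demonstrated with the sketch below, which cascades a hypothetical 100 sec first-order process with a measurement filter and reports the apparent dead time as the time for a step response to emerge beyond a 2% noise band: a filter smaller than the process time constant adds apparent dead time, and a filter larger than the process time constant adds even more.

```python
import numpy as np

# Minimal sketch showing how a measurement filter adds equivalent dead time.
# A first-order process is cascaded with a first-order filter; the apparent dead
# time is the time for the step response to exceed a noise band.
# All time constants and the noise band are hypothetical.

def step_response(taus, t):
    """Unit step response of first-order lags in series via Euler integration."""
    dt = t[1] - t[0]
    states = np.zeros(len(taus))
    y = np.zeros_like(t)
    for k in range(1, len(t)):
        u = 1.0
        for i, tau in enumerate(taus):
            states[i] += (u - states[i]) * dt / tau
            u = states[i]
        y[k] = u
    return y

t = np.arange(0.0, 600.0, 0.1)
noise_band = 0.02                      # 2% of the step is taken as "beyond the noise"

for label, taus in [("no filter", [100.0]),
                    ("20 s filter", [100.0, 20.0]),
                    ("120 s filter (now the largest lag)", [100.0, 120.0])]:
    y = step_response(taus, t)
    apparent_dead_time = t[np.argmax(y > noise_band)]
    print(f"{label}: apparent dead time = {apparent_dead_time:.1f} s")
```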
A slow thermowell response, often due to a sensor that is loose or not touching the bottom of the thermowell, causes the same problem as a signal filter. An electrode that is old or coated can have a time constant that is orders of magnitude larger (e.g., 300 sec) than that of a clean new pH electrode. If the velocity is slightly low (e.g., less than 5 fps), pH electrodes become more likely to foul, and if the velocity is very low (e.g., less than 0.5 fps), the electrode time constant can increase by an order of magnitude (e.g., to 30 sec) compared to an electrode seeing the recommended velocity. If the thermowell or electrode is being hidden by a baffle, the response is smoother but not representative of what is actually going on.
For gas pressure control, any measurement filter, including that due to transmitter damping, generally needs to be less than 0.2 sec, particularly if volume boosters on the valve positioner output(s) or a variable frequency drive are needed for a faster response.
Practitioners experienced in doing Model Predictive Control (MPC) want data compression and signal filters to be completely removed so that the noise can be seen and a better identification of process dynamics, especially dead time, is possible.
Virtual plants can show how fast the actual process variables should be changing, revealing poor analyzer or sensor resolution and response time and excessive filtering. In general, you want measurement lags to total less than 10% of the total loop dead time or less than 5% of the reset time. However, you cannot get a good idea of the loop dead time unless you remove the filter and look for the time it takes to see a change in the right direction beyond the noise after a controller setpoint or output change.
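Here is a minimal sketch of that dead time check on synthetic data; the step time, noise level, and noise-band multiplier are hypothetical choices. The apparent dead time is the time from the setpoint or output step until the process variable first moves in the right direction beyond the noise band.

```python
import numpy as np

# Minimal sketch of estimating loop dead time from a step test: the dead time is
# the time from the setpoint or output step until the process variable first moves
# in the right direction beyond the noise band. Names and numbers are hypothetical.

def estimate_dead_time(time, pv, step_time, direction=+1, noise_sigmas=4.0):
    """Return the apparent dead time after a step at step_time.
    direction: +1 if the PV is expected to increase, -1 if it should decrease."""
    before = pv[time < step_time]
    baseline = before.mean()
    noise_band = noise_sigmas * before.std()
    after = time >= step_time
    moved = direction * (pv[after] - baseline) > noise_band
    if not moved.any():
        return None
    return time[after][np.argmax(moved)] - step_time

# usage with synthetic data: step at 30 s, 5 s of true dead time, 20 s lag, noisy measurement
rng = np.random.default_rng(1)
t = np.arange(0.0, 120.0, 0.5)
true = np.where(t < 35.0, 0.0, 1.0 - np.exp(-(t - 35.0) / 20.0))
pv = true + rng.normal(0.0, 0.005, t.size)
print(f"estimated dead time = {estimate_dead_time(t, pv, step_time=30.0):.1f} s")
```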
For more on the deception caused by a measurement time constant, see the Control Talk Blog “Measurement Attenuation and Deception.”