This Control Talk column appeared in the February 2022 print edition of Control.
Greg: In Part 4, we introduced you to some key insights on loop and process performance and optimization revealed in the ISA 5.9 Technical Report on the proportional-integral-derivative (PID) control algorithm. In Part 5, we start with an interview with Yamei Chen and Cheri Haarmeyer on the practical issues of making an ISA technical report a reality, then talk with James Beall about some key benefits of the report. We then move to summaries and supporting information, starting with my summary of signal characterizer benefits and moving on to a perspective on significant simulation opportunities from Dr. Russ Rhinehart, emeritus professor at the Oklahoma State University School of Chemical Engineering, who has developed practical methods for nonlinear modeling and optimization in the process industry. We finish up the appendices with my perspective on simulation and key knowledge on valve positioners, dead-time compensators and enhanced PID control.
Yamei Chen, what are the challenges of getting participation?
Yamei: Everyone is so busy, but the members are enthusiastic and willing to participate when prompted. I helped keep the effort going by holding team meetings every one or two months, based on the availability of key participants, to review the latest additions and improvements. Now that the report is nearly complete, meetings will be every two or three weeks to review comments and vote on changes.
Cheri Haarmeyer, what did you have to do to get the report into the IEC format?
Cheri: The documentation of the IEC format is several hundred pages. Fortunately, Nick Sands provided much needed guidance to get me started. While the effort was extensive, the IEC format provides consistency with other reports in the ISA Standards and Practices series, and facilitates review and management of change. Line numbers throughout help pinpoint proposed changes.
James Beall, what are some of the key benefits of this report?
James: This report clarifies the many alternative PID implementation methods and features so that users can better take advantage of PID capabilities. Using the detailed explanations in the report, users can correlate current non-standard vendor terminology to standard functionality. This enables them to properly put standard as well as optional features to work. For example, it helps with tasks ranging from properly calculating or converting tuning based on the PID form, to understanding, and possibly modifying, the action of the PID output when it comes out of a limited condition. This greatly helps eliminate confusion and allows the user to understand and use optional features such as external-reset feedback, feedforward, anti-reset windup and many others. It is extremely helpful if one must understand the PID action from several different manufacturers! Hopefully, control-system manufacturers will use the standard terminology and definitions from the report so that the implementation and features of their PID blocks are easily understood and used!
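To make the form-conversion task concrete, here is a minimal Python sketch of the well-known conversions from the series (interacting) and parallel (independent gains) forms to the ISA standard (non-interacting) form. The function names are mine for illustration, not from the report:

```python
def series_to_standard(kc, ti, td):
    """Convert series (interacting) PID settings to the ISA standard
    (non-interacting) form; ti and td share the same time unit."""
    f = 1.0 + td / ti              # interaction factor
    return kc * f, ti * f, td / f

def parallel_to_standard(kp, ki, kd):
    """Convert parallel (independent gains) settings, where ki has units
    of gain per unit time and kd gain times time, to the standard form."""
    return kp, kp / ki, kd / kp

# Example: series settings Kc = 0.8, Ti = 5 min, Td = 1 min
print(series_to_standard(0.8, 5.0, 1.0))   # -> (0.96, 6.0, 0.833...)
```

Loading series settings into a standard-form controller without this conversion changes the effective gain and derivative action, which is exactly the kind of confusion the report aims to prevent.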
Greg: The appendices provide a concise summary of supporting knowledge not commonly found. Appendix A on Signal Characterization concludes with a summary of benefits. The obvious benefit is an increase in the linearity of the open-loop process gain. There are additional benefits in terms of better identification of dynamics from step tests, and in the recognition and moderation of the nonlinear effects seen by the PID, such as a smaller increase in dead time from resolution and backlash and a smaller decrease in process time constant from acceleration.
The improvement in identification results from reducing the effect of local slope changes on the open-loop process gain (change in process variable (PV) divided by step change in controller output (CO)) for different step sizes. For small step changes, the open-loop self-regulating process gain approaches the slope of the plot of PV versus CO, but it becomes quite different for larger steps spanning various changes in slope. For pH systems, the change in process gain with step size can be an order of magnitude or more. The improvement in process time constant originates from moderating the acceleration and deceleration caused by changes in process gain, which alter the observed process time constant. The improvement is dramatic for pH control of a strong acid and strong base passing through neutrality (for example, 7 pH), where the acceleration approaching neutrality and deceleration departing from neutrality can increase by an order of magnitude per pH unit. Without signal characterization, a true strong acid/strong base system's 20-minute process time constant for a well-mixed vessel could be reduced to an observed 0.04 minutes for a 6 to 12 pH change.
The reduction in dead time from resolution and backlash by output signal characterization stems from not having to tune the PID with a smaller gain for the steeper portions of the installed flow characteristic of valves and variable frequency drives. A higher PID gain translates to a greater CO rate of change for a given disturbance, which reduces the time to get through the resolution and backlash dead band. The amplitude of a limit cycle from a resolution limit is proportional to the open-loop gain, which is the product of the valve gain, process gain and measurement gain. Consequently, the amplitude from a resolution limit is reduced in the regions where the process gain was high. This is particularly important for pH control involving steep titration curves.
Additionally, there are operational and maintenance benefits. The visible reduction in variability increases engineering and operations confidence in, and appreciation of, signal characterization. For better operator understanding and maintainability, the characterization is best done in the main control system rather than in field devices, with the original, uncharacterized signals clearly visible.
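As a minimal sketch of what a PV signal characterizer looks like for pH, piecewise-linear interpolation of the titration curve is all that is needed. The curve data below are made up for illustration:

```python
import numpy as np

# Hypothetical titration-curve data for a strong acid/strong base system:
# measured pH versus reagent demand as a percent of the titration span.
ph_points     = np.array([2.0,  4.0,  6.0,  7.0,  8.0, 10.0, 12.0])
demand_points = np.array([0.0, 20.0, 49.0, 50.0, 51.0, 80.0, 100.0])

def characterize_pv(ph):
    """Piecewise-linear interpolation of the titration curve converts the
    measured pH to % reagent demand, so the PID sees a PV that is roughly
    linear with reagent flow and the open-loop gain is far more constant."""
    return float(np.interp(ph, ph_points, demand_points))

# The setpoint is characterized the same way so operators still think in pH.
sp = characterize_pv(7.0)    # 50.0 % demand
pv = characterize_pv(6.5)    # 49.5 % demand, a modest error signal
```

Implemented this way in the main control system, both the pH value and the % demand value can be displayed, keeping the original signal visible as recommended above.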
Appendix B of the report provides some insights into dynamic simulation. We start with Russ Rhinehart's perspective on the major types of simulations and their many options.
Russ: Models of the process are used for design, training, optimization, analysis, exploring alternate control structures and tuning. These can be steady-state models that indicate the settled state or response; transient-state models that indicate how the states evolve in time to the final steady state; and integrating or runaway process models whose open-loop response will ramp or accelerate, respectively, without there being a steady state.
There are first-principles models (alternately called mechanistic or phenomenological models), which can be purchased from a modeling software provider or developed in house. In any case, the idealizations in the models (friction factor, reactivity, tray efficiency, heat transfer coefficient, transport delay, etc.) will not exactly match the process, so the model should be adjusted to best fit the process. To do so, collect operating data, then adjust model coefficient values to, for example, minimize the least-squares deviation. Although this optimization can be automated, deciding which model coefficients should be changed and which sections of the data matter most often calls for human-guided model adjustment.
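A minimal sketch of such a least-squares adjustment, with made-up operating data and a toy mechanistic model (dp = k * q**n) standing in for the real one:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical operating data: flow (m3/h) and pressure drop (kPa).
q_data  = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
dp_data = np.array([ 2.1,  8.3, 18.9, 33.5, 52.4])

def sse(coeffs):
    """Sum of squared deviations between a simple mechanistic model
    (dp = k * q**n) and the plant data; k and n are the idealized
    coefficients being adjusted to best fit the process."""
    k, n = coeffs
    return np.sum((k * q_data**n - dp_data) ** 2)

# The engineer chooses which coefficients float and supplies the starting
# guess; the optimizer performs the least-squares adjustment.
result = minimize(sse, x0=[0.02, 2.0], method="Nelder-Mead")
print(result.x)   # best-fit k and n
```

The human judgment enters in choosing which coefficients float, the starting values, and which sections of data to weight; the optimizer only does the arithmetic.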
Steady-state, first-principles models with ideal dynamics adjusted for operating conditions to provide the path toward steady state are simpler to derive than the more rigorous transient first-principles models. Dynamic first-principles simulation software (e.g., Mimic) can eliminate the need to derive transient models.
If you are comfortable with qualitative opinion about gains and interactions, that opinion could replace first-principles models. Balancing time and cost, this less perfect approach to modeling has significant utility.
First-order-plus-dead-time (FOPDT) models are often used as the basis for controller tuning (including feedforward and dynamic decouplers). These dynamic models are simplistic, empirical and linear, but often good enough to be useful. The classic method for obtaining FOPDT model coefficients is the reaction-curve approach: start with the process at steady state, make a step-and-hold in an input, and observe the process response. This approach assumes an initial and final steady state, no disturbances, a linear response, and no confounding of the trace by noise. It also moves the response off target. An up-down-down-up input pattern balances deviations and provides four steps about the nominal value to better see the response within noise and disturbances. Alternately, an input skyline pattern generates useful data faster and with smaller upsets, and the FOPDT model can be best-fit to the process response data with an optimization algorithm.
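A minimal sketch of the fitting approach, with made-up step-response data and SciPy's nonlinear least squares standing in for whatever optimizer you prefer:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical reaction-curve data: unit step in CO at t = 0, PV sampled
# every 0.5 min, with a little measurement noise added.
t = np.arange(0.0, 15.0, 0.5)
pv = 2.0 * (1.0 - np.exp(-np.maximum(t - 1.5, 0.0) / 3.0))
pv = pv + np.random.normal(0.0, 0.02, t.size)

def fopdt(t, kp, tau, theta):
    """FOPDT response to a unit step: zero until the dead time theta
    elapses, then a first-order rise with gain kp and time constant tau."""
    shifted = np.maximum(t - theta, 0.0)
    return kp * (1.0 - np.exp(-shifted / max(tau, 1e-6)))

params, _ = curve_fit(fopdt, t, pv, p0=[1.0, 1.0, 0.5])
kp, tau, theta = params   # should recover roughly 2.0, 3.0 and 1.5
```

The same fit works on skyline test data; only the input trajectory in the model response changes.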
Empirical models can also be used. These include classic statistical power-series responses and neural networks trained to match historical data. Unfortunately, plant data will have a limited range, and the models may not extrapolate into alternate operating regions of interest. Further, long-past data from different operating conditions, equipment, products, controllers or lab procedures may reveal inconsistencies between old and new data. Finally, it is difficult to relate empirical model coefficients to items of interest such as gains and time constants. For much more, see my book Nonlinear Regression Modeling for Engineering Applications (Wiley/ASME Press, 2016).
Greg: I found it important to simulate integrating and runaway processes because of the additional challenges they pose and the critical role they play in the temperature and composition control of batch processes, most notably chemical and biological reactors. My first-principles models all use ordinary differential equations (ODEs). As noted in last month's column, generic, simple ODEs show how process time constants and process gains can be identified, offering insight into the lack of negative feedback in integrating processes, the extreme case of positive feedback in runaway processes, and the consequently bigger role of the PID in providing negative feedback. Such processes have a window of allowable PID gains, where too small or too large a PID gain causes instability. The low PID gain limit increases as the PID reset time is decreased. Extending the ODEs to be based on first-principles models of material, energy and charge balances increases the understanding of process relationships. Appendix F in nearly all of my books in the last 10 years shows how these ODEs are developed and analyzed.
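A minimal sketch of such generic ODEs (the parameters are made up), showing the three open-loop behaviors side by side with simple Euler integration:

```python
import numpy as np

def step_response(kind, co=1.0, dt=0.1, t_end=20.0):
    """Euler integration of generic simple ODEs for a step in controller
    output CO. Self-regulating: tau*dPV/dt = -PV + Kp*CO (negative feedback).
    Integrating: dPV/dt = Ki*CO (no feedback). Runaway:
    tau*dPV/dt = +PV + Kp*CO (positive feedback)."""
    kp, ki, tau = 2.0, 0.2, 5.0      # made-up process parameters
    pv, trend = 0.0, []
    for _ in np.arange(0.0, t_end, dt):
        if kind == "self-regulating":
            dpv = (-pv + kp * co) / tau
        elif kind == "integrating":
            dpv = ki * co
        else:                        # runaway
            dpv = (pv + kp * co) / tau
        pv += dpv * dt
        trend.append(pv)
    return np.array(trend)

# Self-regulating settles at Kp*CO, integrating ramps without settling, and
# runaway accelerates, so the PID must supply the missing negative feedback.
for kind in ("self-regulating", "integrating", "runaway"):
    print(kind, round(step_response(kind)[-1], 1))
```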
It is critically important that the model have all the dynamics of the automation system, including the installed flow characteristics; lags, dead band, resolution and velocity limiting in control valves and variable frequency drives; dead time from injection delays, transportation delays, sample delays, analyzers, and digital device update and communication rates; and lags from sensors, transmitters and filters.
The digital twin, with the actual control-system configuration and operator interface downloaded, offers many advantages. If the digital-twin controllers have the same setpoints and tuning as the actual plant, the fidelity of the process model parameters and installed flow characteristics is seen in how well the manipulated variables in the digital twin match those of the actual plant. Mismatch in the transient response to disturbances and setpoint changes is frequently caused by missing or incorrect automation-system dynamics in the digital twin.
Model predictive controllers (MPCs) can be set up to non-intrusively adapt a digital-twin model. The MPC models are identified by running the digital twin separately from the actual plant and testing the response of the control system's manipulated variables to step changes in key model parameters. Once the MPCs for adapting the model are ready, the digital-twin model is synchronized with the actual plant and run in real time. The actual plant's manipulated variables become the targets, the digital-twin manipulated variables become the controlled variables of the MPCs, and the manipulated variables of the MPCs are key model parameters. Once the models are adapted, inferential measurements are potentially available for process control. For a bioreactor, these could be cell growth rate and product formation rate for profile optimization by the manipulation of glucose and glutamine concentration setpoints in fed-batch control. This optimization can be done by an additional MPC whose models can also be developed offline using the digital-twin model. Key performance indicators (KPIs) are used to verify the performance of the MPC. The optimized setpoints for the actual control system are kept in advisory mode, applied manually and progressively, until the KPIs prove the benefits are appreciable and sustainable.
Appendix C helps address critical and often not fully understood valve-positioner functionality needed to enable a PID to do its job. Positioners were originally high-gain, proportional-only pneumatic controllers, which fit well the positioner's role of providing an immediate and large response to the demands of the process controller. Some valve overshoot and oscillation are generally not a problem. The offset from these positioners is negligible, since offset is inversely proportional to gain. Integral action would eliminate the offset, but it would require reducing the PID gain, would introduce a limit cycle from valve backlash, and would make the amplitude of this limit cycle and the one from resolution larger, since limit-cycle amplitude is inversely proportional to gain. Modern-day positioners often have integral action turned on by default, with the positioner detuned to give an overdamped response. This may make open-loop tests look better, with more linear and repeatable travel gain and the elimination of overshoot and offset.
A much greater problem is the introduction of cheap positioners with spools instead of sensitive relays, and the use of tight-shutoff valves designed for on-off service. These are typically rotary valves that seemed attractive due to large flow capacity, extremely low leakage, small piston actuators, and a much lower price tag. These valves were often already in the piping specification, and valve specifications do not require a valve to respond quickly and precisely to small changes or to have a more linear installed flow characteristic; rangeability is based on the deviation from the theoretical inherent flow characteristic, neglecting the effects of backlash and resolution. Further, with projects emphasizing budget, on-off valves posing as throttling valves are a common mistake. Putting a smart positioner on these valves is still a dumb design because the readback is not indicative of the actual closure-member position (e.g., ball or disk) due to play in the linkages and the shaft and stem connections. The use of diaphragm actuators sized for 150% or more of maximum thrust, smart positioners, elimination of linkages, a splined shaft-to-stem connection, a ball or disk with integrally cast stem, minimal seal friction, and ultra-low-friction packing can make a rotary valve almost as precise as its sliding-stem globe valve counterpart traditionally used for throttling.
A common myth dating back 50 years still plagues us today: that fast loops should use a volume booster instead of a positioner. Not commonly stated is that piston actuators require a positioner. Rarely recognized is that replacing a positioner with a booster, which has a high outlet-port sensitivity, on a diaphragm actuator causes instability from positive feedback due to volume changes and subsequent pressure changes from diaphragm flexure. The instability can cause a butterfly valve asked to open to slam shut. The positive feedback is seen in the ability to manually grab the valve shaft and move the ball or disk. I have personally witnessed this on several compressor surge and pressure control applications. Most of my publications in the last 20 years warn users about this potentially unsafe condition. If a valve needs to stroke faster, the solution is to put the booster on the positioner output and open the booster bypass valve just enough to stop high-frequency oscillations by letting the positioner see the large actuator volume in addition to the small booster volume. If an even faster response is needed, a variable frequency drive with pulse-width modulation, a NEMA Design B inverter-duty, totally enclosed fan-cooled or water-cooled (TEFC or TEWC) motor with Class F insulation and a 1.15 service factor, 12-bit I/O signal cards, and aggressive field torque-to-speed cascade control can provide fast and precise control.
A subsection has been added to introduce advances in digital positioner capability to improve valve response time, precision, and linearity to meet process control improvement objectives. Another new subsection deals with inappropriate use of the common term “stiction,” the causes and characteristics of behavior meant to be conveyed, and the quantifiable metric “resolution” used in ISA Standards & Practices for valve-response testing.
For much more on how to get the best valve response see the Control Talk column “Responsible control valve response” and the Control articles “How to specify valves that don’t compromise control” with its white paper “Valve response: truth or consequences” and “Is your control valve an imposter?”
Appendix D describes a dead-time compensator with performance similar to a Smith predictor, created by simply inserting a dead-time block into the external-reset feedback path of a PID with the positive-feedback implementation of integral action. This dead-time block delays any integral-mode reaction to changes in the controller output. The user does not have to enter an open-loop process gain or open-loop time constant, and the original PID operator interface is retained.
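A minimal discrete-time sketch of this structure (the names are mine; loop closure around a process model is omitted for brevity, and with the model dead time at zero the code collapses to an ordinary positive-feedback PI):

```python
from collections import deque

def pi_with_erf_deadtime(sp, pv_trend, kc, ti, dt, model_deadtime):
    """PI with integral action implemented as positive feedback of the
    external-reset (ERF) signal through a first-order filter with time
    constant ti. A dead-time block (a FIFO of past controller outputs)
    inserted in the ERF path delays the integral reaction to CO changes,
    giving Smith-predictor-like dead-time compensation."""
    n = max(1, int(round(model_deadtime / dt)))
    erf = deque([0.0] * n, maxlen=n)   # the dead-time block
    filt, co_trend = 0.0, []
    for pv in pv_trend:
        co = kc * (sp - pv) + filt     # proportional action + filter output
        delayed_co = erf[0]            # CO from model_deadtime ago
        erf.append(co)                 # CO enters the dead-time block
        filt += (dt / ti) * (delayed_co - filt)   # advance the ERF filter
        co_trend.append(co)
    return co_trend
```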
There are several counterintuitive aspects of dead-time compensators that are not widely understood. First, the improvement in performance increases as the ratio of open-loop time constant to total-loop dead time increases. The improvement is greatest for lag-dominant loops and least for dead-time-dominant loops, which are often cited as the reason to use a dead-time compensator. While the improvement for lag-dominant loops is greater, the motivation is often less because performance is already good due to a larger allowable PID gain. Second, performance is more sensitive to a model dead time that is too large than one that is too small; too large a model dead time causes rapid oscillations that require a small filter on the PID output, resulting in a slower response for the original tuning settings. Third, the model dead time must be updated, and the PID gain increased and/or reset time reduced, to see a benefit; otherwise, the performance without the dead-time compensator may be better, assuming reasonable original tuning settings. If the model dead time is accurate to within 10% of the actual dead time, then depending on the tuning method, the reset time can be greatly reduced (for example, a factor of 10 smaller reset time based on dead time) and the PID gain possibly doubled. The loop can be tuned as if the actual dead time were reduced by the dead-time compensation. The improvement in performance is greatest for setpoint changes, since there is no delay in PID action as there is with an unmeasured disturbance, where the actual dead time still delays seeing the effect of the disturbance.
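To put rough, illustrative numbers on the third point (the values are hypothetical, not from the report):

```python
# Hypothetical original tuning for a loop with 12 s total dead time,
# using a rule of thumb of reset time ~ 3x dead time.
theta = 12.0
kc_original, ti_original = 0.6, 3.0 * theta              # 0.6, 36 s

# With the model dead time within ~10% of actual, the appendix guidance
# allows tuning as if the dead time were compensated away: for example,
# a factor of 10 smaller reset time and possibly double the gain.
kc_new, ti_new = 2.0 * kc_original, ti_original / 10.0   # 1.2, 3.6 s
```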
Appendix E describes an enhanced PID (e.g., PIDPlus) using external-reset feedback that waits to execute until a setpoint is changed or a process-variable update is detected beyond the measurement noise or repeatability limit, indicating a new analyzer or wireless result. The positive-feedback filter for integral action uses the elapsed time from the last update to compute an exponential result for the integral-mode contribution from the updated filter input, which is the new external-reset feedback signal. The derivative mode likewise uses the elapsed time from the last update to compute the derivative-mode contribution.
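A minimal sketch of the integral-mode timing (the structure and names are illustrative, not vendor code; the derivative mode is omitted for brevity):

```python
import math

class EnhancedPI:
    """Sketch of an enhanced PI (PIDPlus-style). The controller acts only
    when a new measurement arrives; the positive-feedback integral filter
    is advanced once per update using the elapsed time since the last
    update."""

    def __init__(self, kc, ti):
        self.kc, self.ti = kc, ti
        self.filt = 0.0        # positive-feedback filter (integral action)
        self.t_last = None

    def update(self, sp, pv, t_now, erf):
        """Call only on a setpoint change or a PV update detected beyond
        the noise/repeatability limit. erf is the external-reset feedback
        signal, ideally the actual manipulated variable (e.g., readback)."""
        if self.t_last is not None:
            dt = t_now - self.t_last
            # one-shot exponential advance of the filter over the elapsed time
            self.filt += (1.0 - math.exp(-dt / self.ti)) * (erf - self.filt)
        self.t_last = t_now
        return self.kc * (sp - pv) + self.filt
```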
The enhanced PID provides more robust control, including protection against failures of the manipulated variable (for example, stuck valve or frozen secondary loop PV) and the controlled variable (for example, analyzer or wireless measurement) since external-reset feedback is used and the input to the one-time execution of the integrator is the actual manipulated variable.
The greatest robustness and increase in performance occur for a self-regulating process when the analyzer or wireless dead time exceeds the 63% response time of the process (that is, process dead time plus process time constant). In this case, the PID gain can be increased to the inverse of the maximum open-loop self-regulating process gain. The reset time can be made as small as the maximum dead time without the analyzer or wireless device. Also, an increase in the dead time from the analyzer or wireless measurement does not require retuning. Offline analyzers can be used with extremely large and variable reporting times, but the ability to reject unmeasured disturbances deteriorates with increases in the time between analyzer results and wireless updates.
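Putting hypothetical numbers on these tuning rules (the values are illustrative only):

```python
# Hypothetical numbers for the self-regulating case above: maximum
# open-loop gain Ko = 2.5 (% PV per % CO), and a 10 s dead time
# without the analyzer or wireless device.
ko, theta_min = 2.5, 10.0
kc = 1.0 / ko       # PID gain = inverse of max open-loop gain -> 0.4
ti = theta_min      # reset time as small as the non-analyzer dead time -> 10 s
```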
For integrating processes, robustness is increased and the PID gain can possibly be slightly increased, but oscillations may start if the wireless or analyzer dead time approaches one-half of the arrest time or the original loop dead time, in which case the arrest time must be increased and the PID gain accordingly decreased.
You can also get a comprehensive resource focused on what you really need to know for a successful automation project, including nearly a thousand best practices, in the 98% new Process/Industrial Instruments and Controls Handbook, Sixth Edition (McGraw-Hill, 2019), which captures the expertise of 50 leaders in industry.
We complete this series of columns with some concluding remarks by Jacques Smuts and Nick Sands.
Nick: Get involved in an ISA Standards & Practices committee and help make the ISA technical reports and standards more useful by your input. It is one of the best ways to advance our profession.
Jacques: An online ISA course conveying the key knowledge to be gained from the ISA 5.9 Technical Report, with experiments using a simulation for different application conditions, would be a great way to move forward. The course and simulation could use the same terminology, forms, structures and metrics documented in the report.
3-1 Realize and characterize the changes in the timing and degree of response to your corrective actions (realize and identify process nonlinearities)
3-2 Realize the delay in a response will change as more systems and people get involved (model the delays and complexities introduced by automation system components)
3-3 Understand the fundamental principles and seek rational explanations of relationships in the response of systems and people (use first-principles models and learn cause and effect relationships)
3-4 To find the optimum response, get several players to experimentally search for the peak, and avoid getting stuck in a valley by having the worst-result player jump over the best-result player in the search (use leapfrogging)
3-5 Perform the most immediate, precise correction with the least backlash (use a precise throttling control valve with a proportional-only smart positioner)
3-6 Wait to make further corrective action until after you see the effect of past corrective action (use dead-time compensation)
3-7 Wait on new corrective action until you get an updated recognition of response (use enhanced PID)
3-8 Wait on new corrective action until you get an update request (use enhanced PID)
3-9 Base your rate of change of corrective action on time between updated recognitions of responses (use enhanced PID)
3-10 Base your accumulation of corrective action on time between updated recognitions of responses (use enhanced PID)