This Control Talk column appeared in the November 2019 print edition of Control.
Greg: Innovation in process control is more conspicuously missing than ever, despite the era of digitalization and incredible increases in automation system and modeling functionality. We seem largely unaware of the opportunity to use process and automation system knowledge to make dramatic advances in process performance. This contradicts both the greater possibilities now available and what industry leaders have historically demonstrated, which has been my operating mode for 40-plus years.
I was fortunate that many of the world's leading experts in the Engineering Technology organization where I spent most of my career developed and fostered the approach of first understanding what the process was about and what the process needed. Greg Shinskey's books were a great resource for the same reason. He started with the process, and realized that the main limit to what was possible with PID control was understanding what was truly important in the process. I was given the time and freedom to find, focus on, deliver and publish what I felt would achieve the best performance. I spent a lot of personal time on this effort. When you are in a plant, extra hours are the norm, but it also became a way of life at home: all of my articles, books and columns were written, and the test results generated, on my own time.
While intelligently installing, making adjustments and confirming results in the plant are essential, the focused experimentation practiced in plants decades ago is largely not permitted today because of strict management of change and production concerns. The stress of meeting tighter schedules and budgets with fewer resources, and the fear of upsetting the apple cart, have resulted in a creativity crisis. The solution, to me, is what I have personally done for over 40 years: use a dynamic model to free the mind to maximize the synergy of process and automation system knowledge. A digital twin that uses the actual control modules and operator interface in the DCS eliminates the considerable effort of emulating the DCS control algorithms.
Such emulations are really a guess, focusing by necessity on very abbreviated functionality because the algorithms are sophisticated and proprietary, reflecting many years of engineering development and testing by the DCS supplier. The digital twin eliminates this limitation and offers flexible and fast exploring => discovering => prototyping => testing => justifying => deploying => testing => training => commissioning => maintaining => troubleshooting => auditing => continuous improvement, showing the “before” and “after” benefits of solutions from online metrics. Even automated adaptation of the process model is possible without interfering with plant operation.
To give us insights into how we can maximize the synergy for creativity, we ask Chris Stuart, a software engineer in R&D/Simulation for Emerson Automation Solutions. Chris has an incredibly open mind and talent for seeing through the complexity of an opportunity to develop an elegant, flexible tool that can be used by anyone willing to invest some time in expanding their horizons. Like me, Chris feels most of the knowledge he has gained was in the building of models.
Chris, how did your classes and labs at Missouri University of Science and Technology (Missouri S&T) and your internship prepare you to be quickly productive in R&D?
Chris: One of the biggest factors for me was early exposure to dynamic modeling. I was fortunate to intern with a group of intelligent people doing dynamic modeling early, and to be part of a university cohort where our controls lab projects required the use of dynamic modeling. A dynamic model is the ultimate scratch pad. When you try something in the model, the only resource consumed is time. However disastrous or miraculous the results of whatever experiment you dream up, it’s easy to roll back and try something new. This freedom to experiment—to just try things—provided enough leeway to indulge my creativity. I think always being open to trying new things in a scenario with small risk is critical for R&D. Every experiment teaches you something if you let it. A model is the perfect laboratory—there’s no equipment to break or materials to waste. The understanding built along the way is as valuable as whatever new control strategies or systems an engineer might build, and every bit as real.
The second thing was developing the skillset to question everything. I was fortunate to have a professor who pushed his classes with high expectations and a rigorous drive toward excellence. I owe a great debt to his style of teaching. He would assign readings from challenging papers and mathematical derivations that forced us to confront not just the words on the page but also the assumptions and methodology behind them. We spent large amounts of time looking at the ideas from all angles—a superficial understanding was never enough to solve his homework problems. I didn’t realize it at the time, but this pushed us to question everything and be open to any number of ideas. It also taught us how to critically examine papers. Looking back, I see that this trait is invaluable. I earnestly hope he continues to push his students like this, and that other instructors will push for learning outside of textbooks. Lifelong learning is essential for R&D. When you’re looking at the output of a model or a control strategy, you must be willing to carefully examine the information. This means learning from things not only when they go right, but when they go wrong. It also means not always being satisfied with a solution just because it’s established.
The third thing is having managers who allow engineers the time and freedom to develop skills and insights. During my internships, I was lucky to have managers who gave me the breathing room to figure problems out for myself, as well as the support to look at things deeply. I am fortunate to still enjoy this type of support. Success in R&D depends not only on the individual, but also on the cultural environment. Being exposed to a good cultural environment fosters independent thought and risk-taking, which is critical for long-term R&D.
The final thing that helped my productivity in R&D was the willingness to acknowledge that there is an elegance to creating sophistication from simple (and often intuitive) building blocks. Often, the best solutions are carefully constructed from simple ideas. I think everyone in R&D tends to want to make things just a little bit more sophisticated. I was fortunate to see and help implement the thinking of established leaders in modeling, and while doing so, I noticed the common theme of their work was taking simple ideas and carefully layering them. In no small part, looking at models built by people like my coworkers Greg McMillan and Alex Muravyev shows that the most robust and useful models are usually firmly grounded in simple (but complete) building blocks. When you add the simple pieces together, the model becomes sophisticated. Striving not only to accurately capture what is occurring, but also to do so elegantly, makes models that can stand up to wider varieties of conditions and more extreme numerical stress.
Greg: What are some of the new models for challenging processes and how do you make them easier to set up and use?
Chris: The bioreactor model is the largest and most sophisticated model, dealing with the complexities and often unknown factors that affect cell health, viability and productivity. Key first-principle models have been developed for dissolved oxygen, pH, cell growth rate and product formation rate. Fundamental equations model the effects of the concentrations of components such as dissolved oxygen, glucose, amino acids and byproducts, and the profound effects of pH and temperature. The kinetic parameters are so much more practical and easier to adjust than what is traditionally given in the literature that an Excel file can be used to estimate them from the datapoints of batch profiles, particularly the profile slope at the midpoint of the batch cycle. When working with such a complicated process, starting with a model that is straightforward to reason about drastically improves the ability to gain knowledge. The temperature and pH parameters are readily discernible as the optimum value and the low and high operating limits.
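To make that concrete, here is a minimal sketch (hypothetical, not the actual model) of how a growth-rate expression can be built from exactly those intuitive parameters: a low limit, an optimum and a high limit for each condition, with the factors multiplied together. All names and numbers below are invented for illustration.

```python
# Minimal sketch (hypothetical, not the actual model): a growth-rate
# factor built from three intuitive parameters per condition.
def rate_factor(x, low, opt, high):
    """Dimensionless 0..1 factor: 1.0 at the optimum, 0.0 at the limits."""
    if x <= low or x >= high:
        return 0.0
    if x < opt:
        return (x - low) / (opt - low)
    return (high - x) / (high - opt)

def specific_growth_rate(mu_max, temp_c, ph, do2, k_do2):
    """Multiplicative effects of temperature, pH and dissolved oxygen.
    All limits, optima and constants below are illustrative only."""
    f_t  = rate_factor(temp_c, low=30.0, opt=37.0, high=42.0)
    f_ph = rate_factor(ph, low=6.5, opt=7.0, high=7.5)
    f_o2 = do2 / (k_do2 + do2)          # Monod-type saturation term
    return mu_max * f_t * f_ph * f_o2

# e.g., slightly off-optimum pH with ample dissolved oxygen (% saturation):
print(specific_growth_rate(mu_max=0.05, temp_c=36.5, ph=7.05, do2=40.0, k_do2=5.0))
```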
The mass transfer parameters can be estimated from these same profiles, including the air and oxygen flow requirements as the batch progresses. The profiles of total cell mass and viable cell mass can be computed from profiles of turbidity and dielectric spectroscopy. Profiles of product concentration can be obtained from profiles of near-infrared spectroscopy and chromatography analyses. The oxygen uptake rate, derived via yield factors from accurate air and oxygen flow profiles and mass spectrometer analysis of oxygen in the off-gas, can supplement the parameter estimation needed for the model to match the computed or measured profiles. If there is glucose and amino acid control, the profile of manipulated glucose and amino acid feed rate can further improve the parameters. Even if the models are rough at first, they are much better than traditional bioreactor models used in industry, and they can be non-intrusively improved and potentially automatically adapted online.
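As a rough illustration of pulling a kinetic parameter out of a batch profile, the specific growth rate can be recovered from the slope of the logarithm of the viable-cell profile around the midpoint of the cycle. The profile data below are invented for the example.

```python
import numpy as np

# Hypothetical sketch: estimate the specific growth rate mu [1/h] from
# the slope of a viable-cell profile around the midpoint of the batch.
# For exponential growth X(t) = X0*exp(mu*t), ln(X) is linear in time,
# so a straight-line fit to ln(X) over the window gives mu directly.
t_h = np.array([96.0, 108.0, 120.0, 132.0, 144.0])  # hours into the batch
xv  = np.array([2.1, 2.9, 4.0, 5.4, 7.3])           # viable cell mass, g/L (invented)

mu_est = np.polyfit(t_h, np.log(xv), 1)[0]          # slope of ln(X) vs. t
print(f"estimated specific growth rate: {mu_est:.4f} 1/h")
```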
Since the batch cycle time can be 10 or more days and the value of a batch $10 million or more for biologic products, the knowledge of what affects batch quality and what can be done to minimize bad batches should be a motivation for seeking the synergy of modeling and control. The gains are even greater if the digital twin is used in process R&D to provide the best process before design and certification of a plant production unit.
Since cell growth rate and product formation rate can be adversely affected by as little as a tenth of a unit of pH or temperature deviation from the optimum, the value of better pH measurement and control can be easily justified. The extension of the charge balance to include the dramatic effects of dissolved carbon dioxide and conjugate salts provides an accurate, easily set-up pH model.
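The charge-balance idea can be sketched in a few lines: the balance picks up the bicarbonate and carbonate terms from total dissolved CO2, and the pH is whatever hydrogen ion concentration zeroes the balance. The pKa values are textbook numbers at 25 °C; the solver and concentrations below are illustrative only, not the actual model.

```python
import math

# Rough sketch of a charge balance extended for dissolved CO2.
KW  = 1.0e-14      # water ion product
KA1 = 10**-6.35    # H2CO3 <-> HCO3-  (pKa1, 25 degC)
KA2 = 10**-10.33   # HCO3- <-> CO3-2  (pKa2, 25 degC)

def imbalance(h, c_base, c_acid, c_co2):
    """Cation charge minus anion charge; zero at the equilibrium pH."""
    oh = KW / h
    d = h*h + KA1*h + KA1*KA2                  # carbonate speciation denominator
    hco3 = c_co2 * KA1 * h / d
    co3  = c_co2 * KA1 * KA2 / d
    return (c_base + h) - (c_acid + oh + hco3 + 2.0*co3)

def solve_ph(c_base, c_acid, c_co2):
    lo, hi = 1e-14, 1.0                        # bracket on [H+]
    for _ in range(60):                        # bisection on a log scale
        mid = math.sqrt(lo * hi)
        if imbalance(mid, c_base, c_acid, c_co2) > 0:
            hi = mid                           # trial [H+] too high (too acidic)
        else:
            lo = mid
    return -math.log10(math.sqrt(lo * hi))

# e.g., 0.01 M strong base, 0.005 M strong acid, 0.002 M total dissolved CO2:
print(f"pH = {solve_ph(0.01, 0.005, 0.002):.2f}")
```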
Since the bioreactor process is so slow, the ability to speed up batch models greatly helps in diagnosing what helps or hurts batch performance. Since the effects of process conditions are multiplicative, and the number and spectrum of batches are very limited, depending on data analytics to decipher true causes and effects, rather than mere correlations, is unwise. Exploration by running a digital twin much faster than real time offers an incredible increase in process knowledge.
On the other end of the spectrum of process response speed is surge. The compressor surge model is valuable because the phenomenon is incredibly fast and destructive (expensive), to the point where, as with bioreactors, much of what is really happening is left to the imagination. The compressor characteristic curves to the left of the surge point, and why the suction flow can reverse in less than 0.03 seconds, are a mystery to most. From the characteristic curves given by the compressor manufacturer, a model can be developed with a momentum balance that uses curves estimated for operating points in surge, capturing the speed and amplitude of the flow oscillations. The resulting dynamic model shows whether the instrumentation and control system speed of response and strategy will keep the compressor out of surge for even the most difficult situations, preventing a loss in efficiency that increases with the number of surge cycles, and production loss from upsetting downstream users. Long term, these models can be made more accessible by providing tools for fitting data and maintaining parameters that are easy to reason about and find from data.
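A toy version of that momentum balance (a simplified Greitzer-style formulation; the characteristic curve, throttle law and every parameter below are made up, not a real compressor model) shows how operating with too little downstream flow produces sustained surge cycles with flow reversal:

```python
import math

# Very simplified momentum-balance sketch of compressor surge.
def dp_comp(q):
    """Pressure rise vs. suction flow, extended (estimated) to the left
    of the surge point and to reversed flow."""
    return 1.0 + 0.5*q - 1.5*q**3

def q_out(p):
    """Downstream throttle; a small coefficient forces operation into surge."""
    return 0.25 * math.copysign(math.sqrt(abs(p)), p)

L_A, C_PL, DT = 0.05, 0.2, 1e-4        # duct inertia, plenum capacitance, time step
q, p = 0.6, dp_comp(0.6)               # start to the right of the surge point
q_min = q
for step in range(int(5.0 / DT)):
    q += DT * (dp_comp(q) - p) / L_A   # duct momentum balance
    p += DT * (q - q_out(p)) / C_PL    # plenum mass/pressure balance
    q_min = min(q_min, q)
    if step % 5000 == 0:
        print(f"t={step*DT:4.1f}s  flow={q:+.3f}  pressure rise={p:.3f}")
print(f"minimum suction flow: {q_min:+.3f} (negative = flow reversal)")
```

Run as written, the operating point drifts past the peak of the characteristic and settles into a limit cycle in which the suction flow repeatedly collapses and reverses, which is the behavior the real model must capture fast enough for the control system to prevent.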
Greg: What are some of the blocks developed to model automation system dynamics?
Chris: It’s important when constructing a model to keep in mind not only the process itself, but also the dynamics of the automation system. The key blocks for measurement dynamics are Analyzer, Dead Time, Filter and Sampler. The Analyzer block models the cycle time, analysis time, sensitivity and resolution. The Filter time constant must be changeable as a function of operating conditions, installation, fouling and deterioration of the sensor. The time constants of thermowells can range from four to 100 seconds. The time constants of pH electrodes can range from two to 400 seconds. The Dead Time block is used primarily for transportation delay of the process to the sensor, which is a function of velocity and distance. The Sampler block can model the dynamics of a wireless transmitter by including the effects of sensitivity (trigger level), refresh time (default update rate), and wakeup time (trigger update rate).
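Hypothetical per-scan sketches of three of these measurement elements (illustrations of the ideas only, not the actual block implementations) show how simple the individual building blocks can be:

```python
from collections import deque

DT = 0.5  # model execution interval, seconds

class DeadTime:
    """Transportation delay from the process to the sensor."""
    def __init__(self, delay_s, x0=0.0):
        self.buf = deque([x0] * max(1, round(delay_s / DT)))
    def update(self, x):
        self.buf.append(x)
        return self.buf.popleft()      # value from one delay ago

class Filter:
    """First-order sensor lag; tau can be changed online to model fouling."""
    def __init__(self, tau_s, y0=0.0):
        self.tau, self.y = tau_s, y0
    def update(self, x):
        self.y += (x - self.y) * DT / (self.tau + DT)
        return self.y

class WirelessSampler:
    """Reports on a change exceeding the trigger level or on refresh timeout."""
    def __init__(self, trigger, refresh_s, y0=0.0):
        self.trig, self.refresh, self.last, self.age = trigger, refresh_s, y0, 0.0
    def update(self, x):
        self.age += DT
        if abs(x - self.last) > self.trig or self.age >= self.refresh:
            self.last, self.age = x, 0.0
        return self.last

# Chain them: process -> transport delay -> thermowell lag -> wireless update.
pipe, well, tx = DeadTime(5.0, 50.0), Filter(20.0, 50.0), WirelessSampler(0.25, 8.0, 50.0)
for k in range(24):
    pv = 55.0 if k > 2 else 50.0       # step change in the true process value
    print(f"t={k*DT:4.1f}s  transmitted reading={tx.update(well.update(pipe.update(pv))):.2f}")
```

Chaining the three shows how transport delay, sensor lag and wireless reporting each add their own dead time to what the controller finally sees.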
Important blocks for modeling the complex dynamics of a manipulated flow include the Dead Time block to model the transportation delays for a manipulated flow to get into the process. Reagent injection delays can be huge for pH control (e.g., 30 minutes) due to low liquid flows (e.g., 2 gph) and typical dip tube volumes (e.g., 1 gal). The dynamics of a variable-speed drive (VSD) or control valve are modeled by the Dead Time and Filter blocks to account for positioner and speed controller dynamics, and the Backlash Stiction and Slew Rate blocks to account for valve and VSD deadband, resolution and rate limiting. Most people don’t realize that the resolution of default VSD I/O cards can be rather poor (e.g., 0.35%), the deadband quite large (e.g., 1%), and the rate limiting incredibly slow (e.g., 1% per second) when a VSD is set up in a misguided attempt to reduce motor load and response to noise, without regard for the consequences for control. The Interpolation block can model valve and VSD flow characteristics. Also underappreciated is that a VSD installed flow characteristic can be quite nonlinear, similar to a quick-opening flow characteristic, due to excessive slip aggravated by not using pulse-width-modulated inverters and by seeking to minimize frictional losses, resulting in a small ratio of system pressure drop to static pressure. All of these factors contribute greatly to the dynamics of the system, and the ability to model them effectively allows higher-fidelity computations that drive more critical insights.
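A rough sketch of those valve/VSD nonlinearities, using the example numbers above (1% deadband, 0.35% resolution, 1%/s rate limit); the structure is illustrative, not the actual blocks:

```python
# Hypothetical sketch of deadband/backlash, resolution and rate limiting.
# All signals are in percent of scale.
DT = 0.5  # model scan, seconds

class BacklashStictionSlew:
    def __init__(self, deadband=1.0, resolution=0.35, rate=1.0):
        self.db, self.res, self.rate = deadband, resolution, rate
        self.cmd, self.pos = 0.0, 0.0  # post-backlash demand, actual position

    def update(self, demand):
        # Backlash: demand must traverse the deadband before the command moves.
        if demand > self.cmd + self.db:
            self.cmd = demand - self.db
        elif demand < self.cmd - self.db:
            self.cmd = demand + self.db
        # Resolution/stiction: position moves only in discrete steps.
        target = round(self.cmd / self.res) * self.res
        # Slew-rate limit: bound the travel achievable in one scan.
        step = max(-self.rate * DT, min(self.rate * DT, target - self.pos))
        self.pos += step
        return self.pos

drive = BacklashStictionSlew()
for k in range(6):
    print(f"t={k*DT:3.1f}s  demand=53.00  response={drive.update(53.0):5.2f}")
```

At 1% per second, the roughly 52% effective change in demand takes nearly a minute to complete, which is exactly the kind of hidden limitation these blocks expose.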
Greg: What blocks were developed to identify dynamics and metrics to improve process performance?
Chris: Perhaps most important is the Future Value block. It provides a continuously updated rate of change with good signal-to-noise ratio for each model execution. It’s extremely useful for computing the slope of cell and product concentration profiles (the cell growth rate and product formation rate) and the slope of profiles for any type of batch operation. The Future Value block uses the rate of change to predict a value that is one deadtime or more into the future, enabling preemptive evaluation and adjustments to improve batch operation, typically at the midpoint and endpoint of the batch cycle. It can also be used to predict compressor surge and identify operating points close to the surge curve from the rate of change of pressure rise for a given rate of change of suction flow. The results of a Future Value block can improve the setpoint response and operator interface of most loops by showing where a key process variable will be one to one and a half deadtimes into the future. The versatility of such a tool has nearly limitless applications.
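The underlying idea can be sketched simply: compute the rate of change over a window wide enough for good signal-to-noise, then extrapolate one deadtime (or more) ahead. The window and deadtime values below are made up; this is the concept, not the actual block:

```python
from collections import deque

DT = 1.0  # execution interval, seconds

class FutureValue:
    """Windowed rate of change extrapolated one deadtime into the future."""
    def __init__(self, window_s, deadtime_s):
        self.hist = deque(maxlen=max(2, int(window_s / DT)))
        self.deadtime = deadtime_s
    def update(self, pv):
        self.hist.append(pv)
        span = (len(self.hist) - 1) * DT
        roc = (pv - self.hist[0]) / span if span else 0.0
        return roc, pv + roc * self.deadtime   # rate, predicted future value

fv = FutureValue(window_s=30.0, deadtime_s=60.0)
for k in range(40):
    pv = 10.0 + 0.05 * k * DT                  # slowly ramping process variable
    roc, pred = fv.update(pv)
    if k % 10 == 9:
        print(f"t={k*DT:3.0f}s  pv={pv:.2f}  rate={roc:+.3f}/s  in 60 s: {pred:.2f}")
```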
The PID Performance block can provide metrics for all loops to decide whether changes in dynamics and tuning are productive. The block will automatically capture the peak and integrated error for load disturbances and the rise time, overshoot, undershoot and settling time for setpoint changes.
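As an illustration of the kinds of metrics involved (a sketch of the concepts, not the actual block), the following extracts rise time, percent overshoot, settling time and integrated absolute error from a synthetic response to an upward setpoint step:

```python
import numpy as np

# Hypothetical sketch of setpoint-response metrics for an upward step.
def setpoint_metrics(t, pv, sp_old, sp_new, band_frac=0.02):
    span = sp_new - sp_old
    err = np.abs(sp_new - pv)
    iae = float(np.sum(err[:-1] * np.diff(t)))           # integrated abs error
    rise = t[np.argmax(pv >= sp_old + 0.9 * span)]       # first 90% crossing
    overshoot = max(0.0, (pv.max() - sp_new) / span) * 100.0
    outside = np.where(err > band_frac * span)[0]
    settling = t[outside[-1]] if outside.size else t[0]  # last exit from band
    return dict(rise_s=rise, overshoot_pct=overshoot, settling_s=settling, iae=iae)

t = np.linspace(0.0, 30.0, 301)                          # 0.1 s samples
pv = 1.0 - np.exp(-0.3 * t) * np.cos(0.6 * t)            # response to a 0 -> 1 step
print(setpoint_metrics(t, pv, sp_old=0.0, sp_new=1.0))
```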
Greg: What can we do to open minds to see that the investment in time and money is less and the value is greater than expected when going beyond the simple dynamic models traditionally used for training on and testing of operator interfaces?
Chris: We need to emphasize to decision-makers that efficiency gains are driven through experience and innovation. Dynamic models provide a safe environment for simultaneously gaining experience and developing innovation. Although a simple model by itself can deliver on its time and money investment many times over in operator training (building operator experience), models with greater depth provide something else altogether: a workbench for innovation. The realization that a dynamic model can interact directly with an offline control system is an epiphany that few engineers get to experience. Every plant strives to run more efficiently, but the limiting factor in innovation usually comes down to risk: time and money. With a good model, an engineer can try dozens of configurations—sometimes even simultaneously. When a configuration goes awry in the physical plant, the results can be disastrous or expensive. In a digital twin, even a failure provides value: not only does it show the engineer or operator how not to do something, it provides insight into how the process itself works (experience without the risk). In production, such failures can overshadow any new information that the engineers and operators learn. I think decision-makers and technical experts should think of dynamic modeling as a low-risk proving ground for innovation. The freedom to try radically different control strategies can pay huge dividends in process efficiency. Now, all models will deviate from reality to an extent, and the results will need to be judiciously applied. But the more effort poured into a model, the closer those results translate into actionable control strategies and process knowledge.
Greg: For much more on the opportunities to improve process performance by greater synergy achieved with the digital twin, pH and surge models and key automation system blocks (Analyzer, variable Dead Time, Backlash Stiction, and Future Value), see the articles: “Best practices for PID,” “Compressor surge control: Deeper understanding, simulation can eliminate instabilities,” “Virtual plant virtuosity,” “Valve Response - Truth or Consequences,” “Improve pH Control,” “Maximize pH Response, Accuracy and Reliability,” “Get the Most Out of Your Batch,” and “Don’t Overlook PID in APC.” For a greater understanding of how automation system dynamics are the largest source of dead time and the primary limitation to control system performance, see my ISA Process Industry Conference presentation, “Challenges and Recommendations to Improve Instrumentation Response for Better Loop Performance.”
10. We only need a process engineer to help in startup.
9. We will get an I&C engineer someday.
8. We don’t need a digital twin with dynamic models—automatically generated tiebacks can do it all.
7. The automation system design must be done before the process design.
6. Field switches and field controllers are best (the supplier says he has a college degree, so trust him).
5. The project cost and schedule are provided by accounting.
4. Rely on the packaged equipment supplier to select and specify instrumentation.
3. Go with lowest bid on packaged equipment.
2. Plant productivity is what it is.
1. Who needs modeling and control? IIoT will do it all.