Greg: Process control is the first line of defense for safe process operation. It keeps operating conditions away from the safety instrumented system (SIS) activation triggers; SIS activations cause abrupt changes that aren’t just disruptive, but also potentially upsetting to other operations and equipment. Shutdowns and startups are often the most dangerous times of operation. Process control also extends the life and productivity of equipment. Procedure automation per ISA technical reports ISA-TR106.00.01 and ISA-TR106.00.02 uses process control loops to deal with the challenges of startups and transitions effectively and proactively. Modeling is the key to safe and effective procedural automation and to continual improvement and optimization of process control performance.
To broaden our horizons and better understand how to make the most of the opportunities, we continue our conversation with José María Ferrer, who has more than 25 years of experience in the dynamic simulation and control of hydrocarbon processes, and presently serves as a senior advisor at Inprocess Technology & Consulting Group.
José, how good are models?
José: Hundreds of chemists and “data scientists” worked hard in previous centuries to discover the laws of how matter behaves. Those laws still hold, and they’re condensed into today’s simulation tools. In general, models are quite good: we trust steady-state models to design the plants that are running today. Some models can also run in dynamics when extra data such as equipment sizes, geometries, valve characteristics and controllers are added. How good are those dynamic models? They’re as good as the steady-state models they’re built on, but in dynamics. This means that if the components and the thermodynamic package are well known, you should expect good results.
In 2011, we built a dynamic model of a tricky C3 splitter for the advanced process control (APC) group at a refining company. The model looked good, but the APC engineers wanted to know how good it was. The best way to answer that question was to run the model against historical data during a key period, imposing all the setpoints and boundary conditions on the dynamic model, and compare the model outputs with those of the plant. The results can be seen in “Reliability of dynamic simulation to reproduce plant dynamics.”
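As a rough illustration of that comparison, here is a minimal Python sketch that scores how closely a model output reproduces a recorded plant trend once the historical setpoints and boundary conditions have been imposed. The trends and numbers are illustrative stand-ins, not output from any particular simulation tool.

```python
import numpy as np

def validation_error(plant_series, model_series):
    """Root-mean-square error between a plant history and a model output."""
    plant = np.asarray(plant_series, dtype=float)
    model = np.asarray(model_series, dtype=float)
    return np.sqrt(np.mean((plant - model) ** 2))

# Made-up data: a recorded plant temperature trend and the dynamic model's
# reproduction of it over the same key period.
t = np.linspace(0.0, 3600.0, 361)              # one hour at 10 s samples
plant_temp = 85.0 + 2.0 * np.sin(t / 600.0)    # historical plant data
model_temp = 85.1 + 1.9 * np.sin(t / 600.0)    # dynamic model output

rmse = validation_error(plant_temp, model_temp)
print(f"RMSE between plant and model: {rmse:.3f} degC")
```

The same score computed per instrument quickly shows which parts of the model scope follow the plant and which need more work.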
When we build dynamic models of existing plants, we follow five basic steps:
1. Draw a good process flow diagram of the model scope.
2. Collect and study historical data of all instruments in the model scope.
3. Talk extensively with control room operators and control engineers about plant issues.
4. Build the dynamic model using equipment data sheets and plant data.
5. Dynamically validate the model with historical plant data.
We can use such a model to study any plant issue, improve manual or automated procedures, or design APC and deep reinforcement learning (DRL) applications.
Greg: Control loop dead time is frequently too small in dynamic models. Models must include mixing and transportation delays as well as automation system dynamics. Besides the obvious large dead time from analyzers, which is the sample time plus 1.5 times the cycle time, there are many other sources of dead time originating in the automation system. The transportation time of the process fluid to the sensor and the equivalent dead time from sensor lag, transmitter damping, wireless update time, scan time, signal filters and final control element response time can be significant. The dead time from control valve lost motion, resolution and sensitivity limits is frequently overlooked. Since control loop tuning depends on dead time, and the minimum possible peak and integrated errors are proportional to the dead time and the dead time squared, respectively, getting the dynamic model dead time right is essential for improving process control.
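A minimal sketch of this dead-time accounting, with illustrative numbers only: it sums the analyzer dead time (sample time plus 1.5 times the cycle time) with the other contributions listed above, then shows how the minimum possible errors scale with the total.

```python
# Analyzer dead time: sample time plus 1.5 times the cycle time.
analyzer_sample_time = 30.0   # s, transport of the sample to the analyzer
analyzer_cycle_time = 60.0    # s
analyzer_dead_time = analyzer_sample_time + 1.5 * analyzer_cycle_time

# Other automation-system sources (all values illustrative).
other_sources = {
    "transport of process fluid to sensor": 5.0,
    "sensor lag (equivalent dead time)": 2.0,
    "transmitter damping": 1.0,
    "wireless update time": 4.0,
    "controller scan time": 0.5,
    "signal filter": 1.0,
    "valve response, lost motion and resolution": 3.0,
}
total_dead_time = analyzer_dead_time + sum(other_sources.values())
print(f"total equivalent dead time: {total_dead_time:.1f} s")

# Peak error is proportional to dead time; integrated error to its square,
# so doubling the dead time doubles one and quadruples the other.
for factor in (1.0, 2.0):
    print(f"dead time x{factor:.0f} -> peak error x{factor:.0f}, "
          f"integrated error x{factor ** 2:.0f}")
```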
What are the essential aspects of matching the model to the plant? Is it necessary, and if so, when? Is it done once and forgotten, or performed periodically, and how frequently? Should one employ automatic online model updating, or update only as needed? What must be matched in the process, and which specific measurements must be picked? When it comes to current (average) values and deltas, what do you do when you can’t get the fit you want or expect with the standard parameters available?
José: Current simulation tools can accommodate dead times and lag times. For relevant transportation times, you can include pipe segments or the equivalent dead time in the instrument of the model. You can also include measurement lag and transmitter damping, automation system scan time and analyzer cycle time, filters applied by the controllers, and actuator dynamics. This is important for fast dynamic systems such as compressors, anti-surge controllers (ASC), quick-closing valves (QCV) of turboexpanders or emergency shutdown valves (ESV) in safety scenarios. In systems such as large distillation columns with settling times on the order of a day, you can neglect most of these dynamics, which are on the order of seconds.
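As a rough illustration, here is a minimal Python sketch of adding measurement dynamics to a simulated signal: a pure transport delay followed by a first-order lag, implemented as a buffer and a discrete filter. All values are illustrative.

```python
import numpy as np

dt = 0.1      # s, simulation step
delay = 2.0   # s, transportation dead time
tau = 3.0     # s, sensor/transmitter lag time constant

n_delay = int(round(delay / dt))
alpha = dt / (tau + dt)   # discrete first-order filter coefficient

t = np.arange(0.0, 30.0, dt)
true_value = np.where(t >= 5.0, 1.0, 0.0)   # step in the real process variable

buffer = [0.0] * n_delay   # transport-delay buffer
measured = np.zeros_like(t)
y = 0.0
for k, u in enumerate(true_value):
    buffer.append(u)
    delayed = buffer.pop(0)      # value that has finished its transit
    y += alpha * (delayed - y)   # first-order measurement lag
    measured[k] = y

# The measured step starts 'delay' seconds late and then rises with time
# constant 'tau', the behavior the dynamic model should reproduce.
print(f"measurement just after the delayed step: {measured[int(round(7.5 / dt))]:.3f}")
```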
Matching plant data is a must. Models must be validated with historical data, whether they’re used for engineering analysis or as 24/365 real-time simulators (RTS or digital twins). We must be sure that all model-calculated variables dynamically match, second by second, all the pressures, temperatures, levels, flows and analyzer readings in the model scope.
Validation is an iterative process. After the first run, you’ll discover that the model doesn’t follow certain variables, and you must determine why. Sometimes you must improve the model, but often you discover hidden issues in the plant, either in the instruments or in the equipment. For example, in “The X-files of a depropanizer,” the model didn’t match plant data, and we discovered that a parallel condenser of a depropanizer gave only half of its design duty.
When building an online digital twin (or real-time simulation), key performance indicators (KPIs) are created to track model health and how well the model is tracking plant behavior. When any of those KPIs drifts from its normal value, we study it again to find the cause of the deviation, which could be a model fault, a wrong assumption or a hidden issue not seen previously in the plant. Some model parameters can be automatically updated in the model, such as heat-exchanger fouling factors, flow transmitters’ material imbalances, feed composition estimates, installed valve characteristics, PID controller parameters, heat losses and solar radiation.
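A minimal sketch of one such model-health KPI, assuming nothing about any particular twin platform: a rolling mean of the deviation between a plant measurement and the twin’s prediction, flagged when it drifts past a band. Names, window and threshold are illustrative.

```python
from collections import deque

class DriftKPI:
    """Rolling bias between plant and twin, with a simple alarm band."""
    def __init__(self, window=50, limit=0.5):
        self.errors = deque(maxlen=window)
        self.limit = limit

    def update(self, plant_value, twin_value):
        self.errors.append(plant_value - twin_value)
        bias = sum(self.errors) / len(self.errors)
        return bias, abs(bias) > self.limit

kpi = DriftKPI(window=50, limit=0.5)
for step in range(300):
    plant = 100.0 + max(0, step - 100) * 0.01   # plant slowly drifts away
    twin = 100.0                                # twin prediction stands still
    bias, alarm = kpi.update(plant, twin)
    if alarm:
        print(f"step {step}: plant/twin bias {bias:.2f} -> investigate")
        break
```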
Greg: The fidelity of a dynamic model is shown by how well the controller outputs of the plant match those in the digital twin, because control loops transfer variability in the controlled variables to the manipulated variables. Model-predictive control (MPC) can be used to adapt model parameters to improve digital-twin virtual plant fidelity. The MPC targets and controlled variables are the manipulated variables (e.g., flows) of the actual plant and the virtual plant, respectively, and key model parameters are chosen as the MPC manipulated variables. An automated test sequence is run for the MPC at the highest possible speed offline, and the identified MPC models are visually checked as reasonable in terms of the direction and relative magnitude of each effect. The digital-twin virtual plant is then connected to the actual plant in a read-only, nonintrusive setup, and the automated adaptation of key model parameters proceeds.
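As a rough illustration of the adaptation idea, here is a minimal Python sketch. In place of a full MPC, a simple integral update nudges one model parameter (a hypothetical heat-transfer coefficient) until the virtual plant’s controller output flow matches the flow read from the actual plant. The virtual_plant_flow() function is an illustrative stand-in for the digital twin, and all names and gains are assumptions.

```python
def virtual_plant_flow(heat_transfer_coeff):
    """Stand-in for the twin: more fouling (a lower coefficient) makes the
    twin's temperature controller ask for more steam flow."""
    return 120.0 / heat_transfer_coeff

plant_flow = 150.0   # kg/h, controller output flow read from the actual plant
coeff = 1.0          # model parameter being adapted
adapt_gain = 0.001   # slow integral adaptation, illustrative value

for _ in range(500):
    error = plant_flow - virtual_plant_flow(coeff)
    coeff -= adapt_gain * error   # lowering the coefficient raises twin flow

print(f"adapted coefficient: {coeff:.3f}, "
      f"twin flow: {virtual_plant_flow(coeff):.1f} kg/h")
```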
For more on digital twin virtual plant use and adaptation, see the ISA books “New Directions in Bioprocess Modeling and Control: Maximizing Process Analytical Technology Benefits, Second Edition” and “Advances in pH Measurement and Control: Digital Twin Synergy and Advances in Technology, Fourth Edition.”
José, what is a deep gain analysis?
José: Over the last 20 years, I’ve seen some smart APC engineers use steady-state process simulation models to calculate the open-loop gains between the controlled variables (CVs) and the manipulated variables (MVs) in a given MPC. The open-loop gain is the final change in the CV divided by the change applied to the MV.
The gain is calculated across the whole operating range of the MV, so for every CV-MV pair, a gain curve is obtained. If that curve is flat, you’re lucky, because most MPC controllers assume that the process gains are constant. This is a basic gain analysis using a steady-state simulation model.
A deep gain analysis goes one step deeper. You calculate that CV-MV gain curve for the whole operating envelope of the MPC, in other words, for all the potential states of the process. Therefore, for every CV-MV pair, you obtain a collection of curves depending on the values of all the other MVs. This is relatively easy to calculate automatically using certain functionalities of current steady-state simulation tools. An industrial application of such a deep gain analysis is described in “Dynamic simulation for APC projects: A case study on a reformate splitter with side draw.” I believe that doing this gain analysis before any new MPC implementation will help to better design the MPC from the very beginning.
Here is an Excel file of a typical deep gain analysis of a small MPC.
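In the same spirit, a minimal Python sketch of the basic and deep versions of the analysis, assuming a hypothetical steady-state model function cv_model(); a real study would call the simulation tool’s steady-state solver instead, and all numbers are illustrative.

```python
import numpy as np

def cv_model(mv1, mv2):
    """Stand-in steady-state model with a nonlinear CV response."""
    return 50.0 + 8.0 * np.sqrt(mv1) + 0.5 * mv1 * mv2

def gain_curve(mv1_range, mv2, step=0.01):
    """Open-loop gain dCV/dMV1 across the MV1 operating range."""
    return [(cv_model(mv1 + step, mv2) - cv_model(mv1, mv2)) / step
            for mv1 in mv1_range]

mv1_range = np.linspace(1.0, 10.0, 10)

# Basic gain analysis: one curve at the nominal state of the other MV.
basic = gain_curve(mv1_range, mv2=2.0)
print(f"basic curve: gain from {basic[0]:.2f} to {basic[-1]:.2f} (not flat)")

# Deep gain analysis: a collection of curves, one per state of the other MV.
for mv2 in (1.0, 2.0, 3.0):
    curve = gain_curve(mv1_range, mv2)
    print(f"mv2 = {mv2}: gain from {curve[0]:.2f} to {curve[-1]:.2f}")
```

A flat family of curves would justify the constant-gain assumption built into most MPC packages; curves that fan out like these show where gain scheduling or model updates are needed.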
Greg: In tuning controllers, there is a tradeoff between robustness and tightness of control. Models can help find the nonlinearities that can possibly be addressed by adaptive tuning, reducing the robustness margin needed and increasing the aggressiveness and consequent tightness of control.
What is optimum tuning and how can simulation help?
José: Having a plant with well-tuned controllers can save millions of dollars by avoiding onsite trips for equipment maintenance. I’m surprised by how many plants still run with loops that aren’t well tuned.
Most current software tuning tools look at a single loop and a single state of the plant. Those tools represent the process as a “black box”: an isolated first- or second-order system.
The plant operating envelope has many states, and loops interact with other loops, so the important loops need to be studied together to get a broader picture of the process, not just a single loop in a single state.
Dynamic simulation models are excellent tools for calculating optimum tuning for the full plant, and we’ve used them extensively for that purpose in new and existing plants. The tuning can be evaluated for all the different states of the plant and with the true perturbations given by the historical data fed to the dynamic simulation. Simulation models can typically run five to 100 times faster than real time, depending on the scope of the model to be tuned, so multiple combinations of tuning can be evaluated automatically with simulation scripts. However, the most important factor is to understand how the process behaves and the tradeoff of dumping the perturbation upstream or downstream of the loop.
Here is a simple Excel file I made 25 years ago to interactively tune a loop and visually understand the influence of each parameter.
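In the same spirit, here is a minimal Python sketch of scripted tuning evaluation: it simulates a PI loop on an illustrative first-order-plus-dead-time process hit by a load step and grid-searches a few tunings, scoring each by integrated absolute error (IAE). The process and tuning values are assumptions for illustration, not recommendations.

```python
def iae_for_tuning(kc, ti, kp=2.0, tau=10.0, theta=2.0,
                   dt=0.05, t_end=100.0, load=1.0):
    """IAE for a PI loop on a first-order-plus-dead-time process
    rejecting a sustained load disturbance."""
    u_hist = [0.0] * int(round(theta / dt))   # dead-time buffer on valve path
    pv, integral, iae = 0.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        error = 0.0 - pv                      # setpoint held at zero
        integral += error * dt
        u_hist.append(kc * (error + integral / ti))    # PI controller output
        delayed = u_hist.pop(0)
        pv += dt / tau * (kp * (delayed + load) - pv)  # process + load step
        iae += abs(error) * dt
    return iae

# Evaluate a small grid of tunings and keep the lowest IAE.
best = min(((kc, ti, iae_for_tuning(kc, ti))
            for kc in (0.5, 1.0, 2.0)
            for ti in (5.0, 10.0, 20.0)),
           key=lambda result: result[2])
print(f"best of the grid: Kc={best[0]}, Ti={best[1]}, IAE={best[2]:.2f}")
```

A plant-wide study replaces the stand-in process with the validated dynamic model and replays historical perturbations, but the scripted search is the same.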
Greg: I’m deeply grateful to the Control Talk participants of the past 22 years, who helped deal with the increasingly difficult challenge of sustaining and increasing awareness of the critical role of process control, a challenge possibly due to a lack of understanding by executives, publishers and technical organizations. Some process industry magazines no longer have articles on process control, and many symposiums no longer have segments with process control in their titles; some sessions on process control are buried in symposiums with process safety in their titles. I think the retirement of leading experts in process control is a contributing factor. Most of the leading participants who answered questions in the several hundred posts on my ISA Mentor Program Q&A website, now renamed “Ask the Automation Pros,” are retired. We are in a kind of runaway reaction from the loss of expertise. While I’m now officially retired from my part-time job in simulation and process control improvement as a senior principal engineer, I’ll continue my effort to increase awareness of the importance of modeling and control through this Control Talk column.