Greg: I’ve previously advocated using dynamic simulation to drive process control improvements and train operators. But there’s another, sometimes overlooked, use case for getting the most out of control system migration projects. Julie Smith, DuPont group manager and global automation and process control leader at the Engineering Technology Center, offers insights gained over many decades. Julie, has your team seen value in this use case?
Julie: Yes, we’ve been using simulation to assist migration projects for more than 30 years. It’s important to note that migration projects can be extremely challenging to execute well. There’s little return on investment and a lot of risk to operations. Project teams are under the gun to cut over from their old system to the new one during the plant’s normal maintenance turnaround, which is often only a few weeks, so there isn’t much margin for error.
Project teams typically try to mitigate this risk with a like-for-like approach to design, or just do a copy job. However, there’s no such thing as an exact copy. The new system will not only look and feel different, but the execution engine will also behave differently, particularly if the legacy system is more than 20 years old. Execution cycles will be faster, and more tasks will be done in parallel rather than sequentially. There may also be process safety implications because grandfathered legacy safety systems now need modifications to meet current standards. These risks can be addressed by proper simulation and testing before the cutover.
Greg: How can simulation help?
Julie: A simulation is a virtual replica of a control system connected to a virtual process. It’s a digital twin. The virtual plant must have the same dynamic response as the real plant, making it difficult for operators to discern the difference. You want people to question whether it’s real, because that realism creates buy-in for the tool. Once you have buy-in, people want to use it, and they often try scenarios they’d never attempt otherwise.
Greg: How do you get that level of fidelity?
Julie: Finding the right level of detail is always a challenge. I recommend using first-principles chemical engineering models of all unit operations and chemical components in scope. Incorporate the basic physical properties, reaction kinetics and thermodynamics. Start simple, build incrementally, and add fidelity only as you need it. For example, assume the reactor is perfectly mixed and that valve stiction is negligible. Compare the model response to actual plant data, and adjust as needed to match reality. Starting simple not only speeds up simulation development, but also makes it easier for others to modify the model in the future.
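For illustration only, here is a minimal sketch of what such a first-principles dynamic model can look like, assuming a single exothermic reaction A → B in a perfectly mixed, constant-volume reactor. The parameter names and values are illustrative, not from DuPont’s tool, and the crude Euler integration reflects the start-simple advice above.

# Minimal sketch of a first-principles dynamic model, assuming a single
# exothermic reaction A -> B in a perfectly mixed, constant-volume reactor.
# All parameter names and values are illustrative assumptions.
import math

q_over_V  = 1.0      # dilution rate (flow/volume), 1/min
Ca_in     = 1.0      # feed concentration of A, mol/L
T_in      = 350.0    # feed temperature, K
k0        = 7.2e10   # Arrhenius pre-exponential factor, 1/min
Ea_over_R = 8750.0   # activation energy / gas constant, K
dHr_rhoCp = 209.0    # (-heat of reaction)/(density*heat capacity), K*L/mol
UA_VrhoCp = 2.09     # jacket heat-transfer term, 1/min

def derivatives(Ca, T, Tc):
    """Component balance on A and energy balance for the mixed reactor."""
    k = k0 * math.exp(-Ea_over_R / T)        # reaction rate constant, 1/min
    dCa = q_over_V * (Ca_in - Ca) - k * Ca   # material balance
    dT  = (q_over_V * (T_in - T)             # sensible heat in the feed
           + dHr_rhoCp * k * Ca              # heat released by reaction
           + UA_VrhoCp * (Tc - T))           # heat removed by the jacket
    return dCa, dT

# Start simple: explicit Euler integration; add fidelity only as needed.
Ca, T, Tc, dt = 0.8, 330.0, 300.0, 0.005     # dt in minutes
for _ in range(12000):                       # simulate one hour
    dCa, dT = derivatives(Ca, T, Tc)
    Ca, T = Ca + dCa * dt, T + dT * dt
print(f"After 1 h: Ca = {Ca:.2f} mol/L, T = {T:.1f} K")

In practice, a model like this would be compared against plant step-test or batch data and refined until its dynamic response matches the real unit.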
Greg: In my experience, modeling a process using a first-principles model, with all the thermodynamic calculations and material balances and energy balances and reaction kinetics, requires significant process knowledge. The process experts at the site don’t necessarily have the time or skills to develop models. How do you address that issue?
Julie: We address it in two ways. First, the modeling tool must be easy to use. We maintain an internal tool for exactly this reason. Everything is object-oriented, so the user doesn’t need to worry about writing or solving differential equations. We also have a small team of internal experts who can translate plant requirements into model requirements, allowing plant experts to describe desired behaviors at a high level. Then, we create the model behind the scenes. It’s a joint effort, but our team does the heavy lifting.
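The internal tool itself is proprietary, but a hypothetical sketch can show the object-oriented idea: each unit object owns its own balance equations, so the person building the plant model connects objects and sets parameters rather than writing differential equations. The class and parameter names below are invented for illustration.

# Hypothetical sketch of the object-oriented modeling idea (not DuPont's tool):
# the Tank object owns its level balance, so the model builder only wires up
# flows and sets parameters.
class Tank:
    def __init__(self, area_m2, level_m=1.0):
        self.area, self.level = area_m2, level_m
        self.inlets, self.outlets = [], []       # lists of flow callables, m^3/s

    def step(self, dt_s):
        # Level balance: accumulation = (flow in - flow out) / cross-sectional area
        net = sum(f() for f in self.inlets) - sum(f() for f in self.outlets)
        self.level += net / self.area * dt_s

feed = Tank(area_m2=4.0)
feed.inlets.append(lambda: 0.010)                # constant 10 L/s feed, as m^3/s
feed.outlets.append(lambda: 0.004 * feed.level)  # gravity-drained outlet
for _ in range(3600):                            # one hour at 1 s steps
    feed.step(dt_s=1.0)
print(f"Level after 1 h: {feed.level:.2f} m")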
Greg: Collaboration with the plant to build the model is important. How do you use it to help the migration project?
Julie: At a recent site, we worked with the system integrator doing the coding to incorporate our models into their internal testing before factory acceptance testing (FAT). It was challenging at first because the integrator didn’t want to deviate from their typical project plan, which had time allotted to create a tieback simulation. We started small with an isolated area of the plant that had little interaction with other areas but high hazards. We showed that model-based testing gave a deeper and more thorough level of checkout, and identified instances where process safety gaps would have been left unmitigated.
Next, we moved to the main process areas. It was a batch process, with multiple reactors in parallel and highly exothermic reactions. We simulated and tested more than 100 batches offline using our models. The results were amazing. We documented $4.8 million in hard savings via avoided shutdown time and waste-disposal costs. The demands on the safety system that we avoided would have cost about another $3 million.
The following year, another process area was scheduled for cutover. We reused our models, and the integrator reused the corrected logic instead of going through their standard reverse engineering process. We not only saved another $3.8 million in yield losses and waste-disposal costs, but also returned the asset to production three days early. How many migration projects can say that?
This year, we had the final phase, which included the thermal oxidizer (tox) for the plant. It’s not only a critical environmental control device, but also affects every operating area on site. If the tox is down, nobody runs. By simulating the process and validation checks ahead of time, we saved another $6.5 million and started up four days early.
Greg: What makes the high-fidelity model so much more effective than the simple tieback version?
Julie: A tieback model is fine if you’re only concerned with discrete manufacturing or similar processes without significant dynamics, or with processes that have simple operations and low risks. Once you combine reaction chemistry and process safety, you must be concerned with handling abnormal situations, particularly for batch processes. A tieback model never veers off the happy path. Everything always goes to its commanded state. Real plants do deviate sometimes, and the consequences can be severe. How will the deviation be detected? Can the layers of protection catch it in time? How will the unit recover? These questions are best answered by a high-fidelity, dynamic model.
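A hypothetical sketch of the difference, with illustrative names and time constants: a tieback model simply echoes the commanded value, while even a simple dynamic response exposes a deviation, such as a stuck valve, that the logic must detect and handle.

# Minimal sketch contrasting a tieback response with a dynamic one.
# Names, time constants and the stuck-valve scenario are illustrative.
def tieback_pv(commanded_pct):
    """Tieback: the PV simply echoes the commanded output (always the happy path)."""
    return commanded_pct

def dynamic_pv(pv_pct, commanded_pct, tau_s=20.0, dt_s=1.0, stuck_at=None):
    """First-order response toward the commanded value; optionally the valve
    sticks, so the PV never reaches what the logic expects."""
    target = stuck_at if stuck_at is not None else commanded_pct
    return pv_pct + (dt_s / tau_s) * (target - pv_pct)

command, pv = 80.0, 10.0
for _ in range(120):
    pv = dynamic_pv(pv, command, stuck_at=35.0)   # valve sticks at 35%
print(f"tieback says {tieback_pv(command):.0f}%, dynamic model says {pv:.1f}%")
# Logic checked only against the tieback value would never see this deviation.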
Greg: All the processes I’ve worked with are complex and challenging, and they’re often pushed beyond their original design. High-fidelity, dynamic models are the key to finding and addressing many issues, and to taking advantage of the improvements in process control technology offered by migration to the latest control systems. These simulations are essential for developing the best batch profiles, and for feedforward, ratio, override and state-based control to deal with upsets. This makes recovery from abnormal operation faster and safer, avoiding the need for action by safety instrumented systems (SIS). Simulations are also critical for developing procedure automation, inferential measurements, model predictive control and real-time optimization. Recent advances in external-reset feedback have enabled much better override control and dead-time compensation, and an enhanced PID that can greatly improve control system performance using analyzers and wireless devices. This is noted in the ISA-TR5.9-2023, “Proportional-Integral-Derivative (PID) Algorithms and Performance” technical report, and in articles by me and Peter Morgan. Simulations can prevent benefits from deteriorating, and find new benefits as operating conditions change, fostering creativity from the learning experience.
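As a hedged illustration of the external-reset idea (not the TR5.9 text or any vendor’s PID), here is a minimal positive-feedback PI sketch in which the integral mode filters the actual downstream signal, so an unselected override controller does not wind up. The tuning values and the 45% override limit are assumptions.

# Minimal sketch of a PI controller with external-reset feedback: the integral
# mode is a first-order filter of the actual downstream (selected) signal, so
# the output cannot wind up when another controller or a limit is in charge.
# Tuning values are illustrative assumptions.
class PIExternalReset:
    def __init__(self, kc, reset_time_s, dt_s, initial_output=0.0):
        self.kc, self.ti, self.dt = kc, reset_time_s, dt_s
        self.reset_filter = initial_output   # filtered external-reset signal

    def update(self, sp, pv, external_reset):
        # If the downstream signal stops moving (override selected, valve at a
        # limit), the integral contribution stops moving too -- no windup.
        self.reset_filter += (self.dt / self.ti) * (external_reset - self.reset_filter)
        return self.kc * (sp - pv) + self.reset_filter

pid = PIExternalReset(kc=0.8, reset_time_s=30.0, dt_s=1.0, initial_output=40.0)
valve = 40.0
for _ in range(60):
    co = pid.update(sp=50.0, pv=47.0, external_reset=valve)
    valve = min(co, 45.0)   # low-signal select: an override controller wins at 45%
print(f"controller output {co:.1f}%, selected valve signal {valve:.1f}%")
# The output settles just above the selected limit instead of winding up to 100%.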
You can start by focusing on higher-fidelity simulations for the unit operations that have complex dynamics and pose the greatest risk to plant safety, environmental performance and process performance, such as bioreactors, chemical reactors, compressors, columns and neutralizers. I include dynamics from instrumentation, which are particularly important for composition, pH, pressure, surge and temperature control.
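A minimal sketch of adding instrumentation dynamics to a simulated measurement, assuming a transport deadtime followed by a first-order sensor lag; the pH values and time constants below are illustrative only.

# Minimal sketch of instrumentation dynamics: transport deadtime plus a
# first-order sensor lag applied to the "true" simulated value.
# Deadtime, lag and pH values are illustrative assumptions.
from collections import deque

class Measurement:
    def __init__(self, deadtime_s, lag_s, dt_s, initial_pv):
        self.buffer = deque([initial_pv] * max(1, int(deadtime_s / dt_s)))
        self.lag, self.dt = lag_s, dt_s
        self.pv = initial_pv

    def update(self, true_value):
        self.buffer.append(true_value)
        delayed = self.buffer.popleft()                         # transport deadtime
        self.pv += (self.dt / self.lag) * (delayed - self.pv)   # first-order sensor lag
        return self.pv

pH_sensor = Measurement(deadtime_s=15.0, lag_s=10.0, dt_s=1.0, initial_pv=7.0)
for _ in range(60):
    reading = pH_sensor.update(true_value=5.0)   # step change in true pH at t = 0
print(f"Measured pH after 60 s: {reading:.2f}")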
The following recent ISA books detail the use and value of simulations and the breakthroughs they enable. Use promo code ISAGM10 for a 10% discount on Greg’s ISA books: