Virtual plant virtuosity

Aug. 7, 2017
Maximizing operator and system synergy for best plant performance

It is well known that the operator can make or break a system. A recent research study found that only 2% of process losses and safety issues could be traced to causes other than human error. Operator training is recognized as essential. What is not realized is that many, if not most, operator errors could have been prevented by better operator interface and alarm management systems, state-based control, control systems that stay in the highest mode and prevent activation of safety instrumented systems (SIS), continual training and improvement from the knowledge gained, and better two-way communication between operators and everyone responsible for system integrity and performance.

Where are we?

We have an increasing disparity between what is potentially possible and what is actually done in terms of achieving the best plant performance. This disparity is due to the retirement of expertise without mentoring successors or even documenting, to any significant degree, the knowledge gained over decades of plant experience. This happened partly because the practitioners were busy doing the actual work and, lacking a marketing background, had little experience with or motivation for presenting and publishing. In “Getting the best APC team,” Vikash Sanghani explains how nearly all the expertise of a major supplier of model predictive control (MPC) software, including comprehensive engineering of applications, dwindled to near zero. Thinking that their customers were going to develop in-house expertise, MPC suppliers did not try to keep or replenish their experts, when the opposite occurred.

The other side of the incredible gap between what is done and possible is the order of magnitude improvement in the performance of instrumentation, control valves, controllers, MPC software, tuning software and analytics software, and the profound increase in knowledge we have documented in ISA standards and technical reports. ISA members can now view these standards and reports online for free. Transmitter and valve diagnostics, a vast spectrum of function blocks, and advanced control and modeling tools from more than a hundred engineering years of development can be configured in a matter of minutes.

The main limits are the imagination of the practitioner and the management of risk. Unfortunately, users are not given the training, mentoring, and—most importantly—the time to develop an imagination for what is possible and the automated risk management that is needed. In fact, the user has negative free time due to an overload of responsibilities in executing and managing projects.

Migration projects frequently end up being largely copy jobs based on a process and instrument diagram (P&ID) and process flow diagram (PFD) that are often more than a decade old. It may be expected that the Hazard and Operability Study (HAZOP) will catch changes needed to manage risk, and that improvements will evolve over time. But the benefits of more precise and timely measurements and valves, as well as better control strategies, often are not considered because there is no understanding of the benefits. Not fully appreciated is the consequence today of the lack of experimentation with the actual plant to develop innovations to eliminate potential operator errors. Concerns about loss of production, validation and regulatory compliance, lack of the prerequisite knowledge, and the unfamiliarity with the potential benefits typically prevent deviations from standard operation. All of these issues can be addressed by the use of a virtual plant before, during, and after the project. The deep knowledge gained by fast and free exploration with the virtual plant can result in a focused and trusted Design of Experiments (DOE) and dynamic response tests for confirmation of virtual plant results and implementation of more intelligent alarm, operator interface, and control systems.

Today’s dynamic simulation software has in many cases achieved the same fidelity of steady-state simulators used for process design by incorporating physical property packages, equipment details, and the first principles from the ordinary differential equations (ODE) for material, energy and component balances, supplemented by charge balances for pH. Not as appreciated but highly significant is that modeling blocks have recently been added that can simulate automation system dynamics such as sensor, transmitter, analyzer, control valve and variable-speed drive dynamics that significantly affect the performance and tuning of most loops. Furthermore, most of the full potential of the opportunities offered by dynamic simulation can be achieved by connecting the actual configuration and displays of the control system and operator interface to the dynamic model in a virtual mode to create a virtual plant, eliminating the need for emulation and translation of complex, proprietary capabilities and the associated uncertainty. Figure 1 shows the virtualization of an actual control system, displays, alarms, historian and tools.
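
To make the idea concrete, here is a minimal sketch, in Python, of the kind of first-principles dynamic model with automation dynamics described above: a vessel energy balance ODE plus valve lag, sensor lag and measurement deadtime around a PI loop. All parameters, tuning values and the anti-windup scheme are illustrative assumptions, not any vendor's simulator.

```python
dt = 0.5                      # integration step, s
n = int(7200.0 / dt)          # two hours of simulated time

# Illustrative process parameters (assumptions, not from the article)
V, F = 10.0, 0.01             # vessel volume (m3) and feed flow (m3/s)
T_in, T_util = 20.0, 150.0    # feed and utility temperatures, degC
UA_max = 50.0                 # max heat transfer conductance, kW/K
rho_cp = 4200.0               # volumetric heat capacity, kJ/(m3*K)

tau_valve, tau_sensor, theta = 6.0, 15.0, 10.0   # valve lag, sensor lag, deadtime (s)

Kc, Ti, SP = 0.1, 500.0, 80.0   # illustrative PI tuning and setpoint

T, valve, pv_sensor = 20.0, 0.0, 20.0
delay = [20.0] * int(theta / dt)   # deadtime modeled as a sample buffer
integ = 0.0

for k in range(n):
    # Deadtime block: push the lagged sensor value in, pop the delayed PV out
    delay.append(pv_sensor)
    PV = delay.pop(0)

    # PI controller with simple anti-reset windup (output clamped to 0-1)
    err = SP - PV
    integ += err * dt / Ti
    u = Kc * (err + integ)
    out = min(max(u, 0.0), 1.0)
    if u != out:
        integ = out / Kc - err        # back-calculate to stop windup

    # First-order control valve dynamics toward the controller output
    valve += (out - valve) * dt / tau_valve

    # First-principles energy balance ODE for the vessel contents
    Q = UA_max * valve * (T_util - T)                  # heat input, kW
    T += ((F / V) * (T_in - T) + Q / (rho_cp * V)) * dt

    # First-order sensor/thermowell lag
    pv_sensor += (T - pv_sensor) * dt / tau_sensor

print(f"after 2 h of simulated time: PV = {PV:.1f} degC (setpoint {SP})")
```

Deleting the valve lag, sensor lag or deadtime lines and re-running shows immediately how much of the loop's apparent "process" dynamics actually comes from the automation system, which is exactly the point of modeling those elements.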

Most of the phenomenal capability of today’s systems for modeling and control remains untapped. Unfortunately, the loss of expertise extends to management, who in previous decades had advanced from doing applications to managing groups. Today, in many process companies, there may not even be a process modeling and control group, let alone anyone left who understands what is lost and its value.

Where we were

In an Engineering Technology department that evolved into a Process Control Improvement (PCI) group of a leading chemical company, the culture was to use modeling not only for the traditional process design and operator training, but also for improving the control system. Modeling and control were synonymous. The best control involved taking the best process, automation system and operator knowledge and implementing the essential aspects into a control system. Nearly all of the accomplishments were the result of deep knowledge gained from dynamic simulation.

[sidebar id =1]

Until the advent of the distributed control system (DCS) for anything more than simple PID control, we had to buy, install, wire and manually adjust separate analog control modules. This did not stop us. In many ways, the creativity in process control was greater, as seen in Shinskey’s books. The dynamic models we used for developing process and control system knowledge were the result of programming the ODE in Continuous System Modeling Program (CSMP) and then Advanced Control Simulation Language (ACSL), where they would be integrated and the results printed out and plotted. The control system functionality, including the PID controller with all of its many features, had to be emulated in code. Eventually, these models, generalized as FORTRAN subroutines, were interfaced to the inputs and outputs of the actual hardware of the DCS.

We used steady-state simulations to get us the right operating conditions and flows. Employing these as the starting points, we worked on getting the right transient response. Due to the extraordinary effort required in setting up the ODE, dynamic models tended to be small and focused on a unit operation.

We found that the new PFD simulation software, advertised as being able to switch from a steady-state mode to a dynamic mode, did not work as intended, and we had to rebuild the models from scratch in the dynamic mode. The matrix for the pressure-flow solver often developed fatal problems that shut down the program with no diagnostics pointing to a fix, particularly when we tried to go more plant-wide.

Despite the limitations in simulation software, advanced PID control (APC), model predictive control (MPC), and real-time optimization (RTO) thrived. A large part of the credit for the success of APC is due to Greg Shinskey, and the success of MPC and RTO is due to Charlie Cutler, who invented Dynamic Matrix Control. Since I was more into APC, I particularly appreciated the incredible spectrum of process control opportunities based on fundamental understanding of the process and the PID that Greg Shinskey developed and published. Look for a special tribute to Shinskey in the October issue of Control, written with Sigifredo Nino, Shinskey’s protégé, and with all of Shinskey’s books and many of his articles highlighted.

Where could we be?

Not well recognized is that the dynamic models often used for training operators as part of an automation project have a much wider utility that today is more important than ever.

So how do we address this increasing concern and missed opportunity?

The solution is to foster process modeling and control to maximize the synergy between operators, process control engineers, and the control systems. To start on this path, process control engineers need to be given the time to learn and use a virtual plant, and to set up online metrics for process capacity and efficiency. The virtual plant offers flexible and fast exploring => discovering => prototyping => testing => justifying => deploying => testing => training => commissioning => maintaining => troubleshooting => auditing => continuous improvement, showing the “before” and “after” benefits of solutions from online metrics. Examples for major unit operations are given in the accompanying appendix, “Virtual plant in practice.”

First, we need to break the paradigm that you must run a steady-state model for your cases, and that dynamic models are built just to show dynamic response rather than process relationships. With dynamic models today having nearly the same fidelity as steady-state models, this is no longer true, which is fortunate for many reasons. With steady-state models, we were only able to find the process gain for self-regulating processes, and even here, these models were clueless as to the open-loop gain, which includes the valve gain and measurement gain. The resulting relative gain matrix from process gains is helpful for pairing controlled variables and manipulated variables, especially for two-point concentration control in columns. However, the degree of interaction and the best decoupling are also determined by dynamics. Interaction between two loops can be minimized by making the faster loop faster and/or the slower loop slower. This is the basis of cascade control, where we want the secondary loop to be at least five times faster than the primary loop to prevent interaction between these loops. Also, the dynamic decoupling achieved inherently by MPC, or by dynamic compensation of feedforward signals acting as decouplers for APC, is based on the identified dynamics. Thus, a dynamic model can tell you more about how to deal with interactions, what performance metrics are achievable, and what the control system needs to do as the process moves to a new steady state.
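
As a small illustration of what steady-state gains can and cannot tell you, here is a minimal relative gain array calculation in Python. The 2x2 gain matrix is made up for illustration, and, as noted above, the RGA is blind to the dynamics that also govern interaction and the best decoupling.

```python
# Relative gain array (RGA) from a steady-state process gain matrix.
# The 2x2 gains below are illustrative numbers, not from any real column.
import numpy as np

K = np.array([[0.8, -0.6],     # e.g., overhead and bottoms composition
              [0.5, -0.9]])    # vs. reflux and reboil (made-up gains)

# RGA = K elementwise-multiplied by the transpose of its inverse
RGA = K * np.linalg.inv(K).T
print(RGA)
# Pair controlled and manipulated variables on relative gains closest to 1,
# but remember the RGA says nothing about the deadtimes, lags and oscillation
# periods that also determine interaction, resonance and decoupling.
```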

Furthermore, steady states do not exist in integrating and runaway processes, nor in batch operations, transitions, startups and abnormal conditions. Most of the problems with plant operation can be traced back to these types of processes because of the abrupt changes, ramps and oscillations that result from the tuning of loops, sequences, manual actions and the lack of procedural automation and state-based control, as detailed in the presentation, “Most disturbing disturbances are self-inflicted.”

Dynamic models can find steady-state conditions and process gains, but you are relying upon the control system to achieve the new steady state, kind of like the actual plant. For very slow processes such as distillation columns and plantwide operation, steady-state models may be more useful to find the process gains and more optimum operating points, unless the dynamic models can be sped up.

Over this same period, the capability of dynamic models to improve system performance has greatly increased, even though their use has focused mostly on training operators as an automation project nears completion. The virtual plant should detail the tasks needed for difficult situations from the best operator practices and process knowledge, and eliminate the need for special operator actions through state-based control. APC and MPC can deal with disturbances and address constraints intelligently, continually, automatically and with repeatability. Compare this with what an operator can do in terms of constant attention, deep knowledge and timely predictive corrections considering deadtime and multivariable situations. Some operators may do well, but this is not carried over to all operators. Then, of course, any operator can have a bad day.

Automation enables continuous improvement and recognition of abnormal conditions by a much more consistent operation. A better understanding by the operator of control system functionality and process performance from online metrics makes far less likely the disruptions caused by an operator taking a control system out of its highest mode and/or making changes in flows. Furthermore, procedural automation can eliminate manual operations during startup, when risk is the greatest compared to steady-state operation.

It is important to note that while we have singled out operators and process control engineers in this article, the need for knowledge to attain the best performance extends to maintenance technicians, process engineers, mechanical engineers and information technology specialists. Just think what can be realized if we were all on the same page, understanding the process and operational opportunities, and the value of the best measurements, valves, controllers and software.

Let’s highlight some of the many potential uses of a virtual plant.

Cause-and-effect relationships: Data analytics can point to many possibilities, but the relationships identified are correlations and not necessarily cause and effect. Also, the data sets used in data analytics are limited to the range of plant operation, which, by design, may not show the changes possible. Process control is all about changes. We saw this from the very beginning in our first course on control theory, where all the variables for Laplace transforms and frequency response were deviation variables. We need to see the effect of changes in process inputs on changes in process outputs. A virtual plant can make all the changes that you cannot make in a real plant. Your imagination is the only limit. The resulting data can verify correlations, provide richer datasets for data analytics, identify the dynamic compensation needed in data analytics for predictions by Projection to Latent Structures, also known as Partial Least Squares (PLS), and provide a much leaner, faster, safer and more focused Design of Experiments (DOE) for the actual plant. Root cause analysis can be greatly improved.
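
A minimal sketch of why dynamic compensation matters for PLS: below, a synthetic "virtual plant" input affects the output only after a deadtime, and a PLS fit on the raw input sees almost nothing while a fit on the time-shifted input recovers the relationship. The data, the 60-sample deadtime and the use of scikit-learn's PLSRegression are all illustrative assumptions.

```python
# Illustrative only: synthetic data showing why an input often needs dynamic
# compensation (here a simple time shift) before a PLS inferential fit.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n, delay = 2000, 60
e = rng.normal(size=n)
u = np.empty(n)
u[0] = e[0]
for k in range(1, n):
    u[k] = 0.95 * u[k - 1] + e[k]       # autocorrelated "virtual plant" input

y = np.empty(n)
y[:delay] = 2.0 * u[0]
y[delay:] = 2.0 * u[:-delay]            # output responds only after a deadtime
y += rng.normal(scale=0.5, size=n)      # measurement noise

Y = y[delay:].reshape(-1, 1)
X_raw = u[delay:].reshape(-1, 1)        # uncompensated input (same timestamps)
X_shifted = u[:-delay].reshape(-1, 1)   # input shifted by the process deadtime

for name, X in (("raw input", X_raw), ("deadtime-compensated", X_shifted)):
    pls = PLSRegression(n_components=1).fit(X, Y)
    print(f"{name:>22s}: R^2 = {pls.score(X, Y):.2f}")
# The compensated input scores far higher, mirroring the dynamic compensation
# needed for PLS predictions mentioned above.
```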

Interactions and resonance: The degree of interaction and resonance depends upon dynamics, tuning and oscillation periods, as detailed in my “Disturbing” presentation linked above. If the process and automation system deadtimes are modeled, the virtual plant can show all of this and much more, using power spectrum analyzers to track down the culprit.
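
A minimal software stand-in for such a power spectrum analyzer is sketched below: it picks out the dominant oscillation period in a PV trend. The synthetic trend, the hidden 480 s period and the use of scipy.signal.welch are illustrative assumptions.

```python
# Illustrative sketch: use a power spectrum to find the dominant oscillation
# period in a PV trend, then match it against loop periods to find the culprit.
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(1)
dt = 2.0                                   # sample time, s
t = np.arange(0, 4 * 3600, dt)             # 4 hours of data
period = 480.0                             # hidden oscillation period, s (made up)
pv = 50 + 1.5 * np.sin(2 * np.pi * t / period) + rng.normal(0, 0.5, t.size)

f, Pxx = welch(pv - pv.mean(), fs=1.0 / dt, nperseg=2048)
f_peak = f[np.argmax(Pxx)]
print(f"dominant period ~ {1.0 / f_peak:.0f} s")
# Comparing this period with PID reset times, valve limit-cycle periods and
# upstream loop periods helps track down which loop or final element is resonating.
```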

Valve and sensor response: In most loops, the largest source of deadtime and oscillations can be traced back to the valve and sensor response. In particular, most of the valves designed for high capacity and tight shutoff not only have a poor response time, but will also cause continuous limit cycles from stiction and backlash, as detailed in “How to specify valves and positioners that do not compromise control.” Showing the effect of slow or noisy sensors and of on-off valves posing as throttling valves can provide the justification for getting the best measurements and valves. You can show the effect of sensor and valve failures, and the value of redundancy and a smooth recovery.
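
The sketch below shows how little it takes to reproduce such a limit cycle in a virtual loop: the valve position is simply quantized to 1% steps (a crude stick-slip model), so a PI controller that needs a 27.5% position can never settle. The process, tuning and resolution numbers are all made up for illustration.

```python
# Illustrative limit cycle from valve resolution/stiction. All numbers are made up.
import numpy as np

dt, n = 1.0, 6000
Kp, tau = 2.0, 60.0            # self-regulating process gain (%/%) and lag (s)
resolution = 1.0               # valve resolution / stiction band, % of travel
Kc, Ti = 0.5, 60.0             # illustrative PI tuning
SP = 55.0                      # setpoint (%): needs a 27.5% valve position

pv, integ = 50.0, 50.0
pv_hist = []
for _ in range(n):
    err = SP - pv
    integ += err * dt / Ti
    out = Kc * (err + integ)                       # PI controller output, %
    valve = resolution * round(out / resolution)   # crude stick-slip quantization
    pv += (Kp * valve - pv) * dt / tau             # first-order process response
    pv_hist.append(pv)

tail = np.array(pv_hist[-2000:])
print(f"sustained limit cycle amplitude ~ +/-{(tail.max() - tail.min()) / 2:.1f}%")
# Peak-to-peak amplitude is roughly the resolution times the open-loop gain,
# which is why high-friction, tight-shutoff valves show up as never-ending cycles.
```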

Process safety stewardship: The virtual plant should be used for extensive and continual testing of all the layers of protection in addition to the detailed design and implementation of the safety instrumented system (SIS). The interplay and performance of layers should be thoroughly evaluated and documented for all of the important scenarios. The speed of measurements and valves is often not thoroughly understood, and the consequences of failures should be analyzed in these scenarios.

Control system and SIS knowledge: One of the most difficult aspects of control system and SIS design is recognizing the implications of measurement and final control element repeatability, rangeability, reliability, resolution and response time, as discussed in the 9/9/2015 Control Talk blog, “What is truly important for measurements and valves.” The virtual plant enables you to explore and find the value of the best instrumentation, and best PID and MPC control including the solutions and parameters needed. You need to see how well the systems play together and help the operator during abnormal conditions. You can try all kinds of “what if” scenarios to see how the system and operator perform.

Validation and regulatory support: The validation and regulatory compliance of automation systems in pharmaceutical production requires a large expenditure of time and expertise in automation projects. The virtual plant can offer the verification of performance needed for confirmation and documentation. The practice of using the virtual plant for control system validation support has an incredible return on investment in terms of the savings in the time, cost and effectiveness of the resulting systems.

Process and equipment knowledge: Most of our deep understanding of the implications of process and mechanical designs has come from running dynamic models. This was particularly true for the specialties of pH and compressor control. For pH control, the extraordinary sensitivity and rangeability, and the consequent incredible nonlinearity of a system of strong acids and bases, lead to strict design requirements in terms of minimizing mixing and injection deadtime, and to seeking better setpoints as well as the possibility of using weak acids or weak bases and conjugate salts to help reduce the steepness and nonlinearity. Dynamic simulation is essential to understand and develop all aspects of the pH and compressor surge solutions.

Today, we have the ability to handle conjugate salts as well as a wide spectrum of weak acids and weak bases by means of an expanded charge balance equation, as detailed in “Improve pH control.” For compressor control, dynamic models of surge can detail the incredibly fast speed of response requirements for instruments, valves and controllers. Also, the design can be developed for the open-loop backup to prevent and recover from surge. The open-loop backup can predict the crossing of the surge curve by computing a future process variable value, and detect the start of surge by the precipitous drop in compressor suction flow.
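
The heart of such a pH model is a charge balance solved for the hydrogen ion concentration. Below is a minimal sketch for a strong acid neutralized by a strong base with one weak acid included; the concentrations and pKa are made-up values, and a real model would carry the full spectrum of weak acids, weak bases and conjugate salts.

```python
# Illustrative charge-balance pH calculation (the basis of a simulated
# titration curve). Concentrations and the weak-acid pKa are made-up values.
import numpy as np
from scipy.optimize import brentq

Kw = 1e-14
Ca_strong = 0.01      # strong acid (e.g., HCl), mol/L
Cw_weak = 0.005       # weak monoprotic acid, mol/L
pKa = 6.3             # illustrative weak-acid pKa
Ka = 10.0 ** (-pKa)

def charge_balance(pH, Cb):
    """Net charge for strong acid + weak acid titrated with strong base Cb."""
    H = 10.0 ** (-pH)
    OH = Kw / H
    A = Cw_weak * Ka / (H + Ka)          # dissociated fraction of the weak acid
    return H + Cb - OH - Ca_strong - A   # cations minus anions

# Titration curve: pH vs. strong base added (reagent demand)
for Cb in np.linspace(0.0, 0.02, 9):
    pH = brentq(charge_balance, 0.0, 14.0, args=(Cb,))
    print(f"base = {Cb:.4f} mol/L  ->  pH = {pH:.2f}")
# The slope of this curve sets the process gain; with only strong acid and
# base it can change by roughly a decade per pH unit near neutrality, which
# is why the weak-acid buffering above visibly flattens the curve.
```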

Process equipment degradation: The consequences of fouling of heat transfer surfaces, thermowells and electrodes, loss of catalyst activity, and plugging of filters can be studied, and more optimum times for maintenance and cleaning scheduled. Better installation practices and redundancy of measurements can be justified. For pH electrodes, a velocity of greater than 5 fps but less than 10 fps provides the best reduction of fouling, and using three electrodes of different vintages with middle signal selection provides the best protection against fouling and the associated dramatic increase in response time.
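
The middle signal selection mentioned above is simply the median of the three readings; a tiny illustrative sketch:

```python
# Middle (median) signal selection of three pH electrodes: a single coated,
# slow or failed electrode cannot pull the selected value away.
def middle_select(ph_a: float, ph_b: float, ph_c: float) -> float:
    return sorted((ph_a, ph_b, ph_c))[1]

# Example: a coated/failed electrode reading 2.0 is ignored by the selection
print(middle_select(7.2, 7.3, 2.0))   # -> 7.2
```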

Startups, transitions, shutdowns and batch operation: The design and value of procedural automation and state-based control for continuous processes is best achieved by trying out all conceivable scenarios in a virtual plant. The virtual plant is also the key for optimizing batch profiles, endpoints and cycle times. If an operator claims his actions cannot be automated, there is even greater motivation for a virtual plant to find and test the best actions. The automated, repeatable best actions reveal the other sources of changes by eliminating the variability from operator response.

Optimum operating points: Finally, you can find and document the benefits of better setpoints, and achieve these setpoints more reliably and extensively by feedforward control, override control, batch profile control, full throttle batch control and model predictive control.

[sidebar id =2]

Figure 2 shows the functional value of a virtual plant, highlighting the bidirectional flows of control system and process/equipment knowledge, including online performance metrics for greater analysis and justification of improvements. The two-way knowledge flow is the key to improving the process/equipment and the control system, as well as the dynamic model and data analytics. Also shown is the bidirectional flow of information between the embedded modeling and advanced control tools. As the fidelity of the dynamic model increases, opportunities open up for these tools to get results from the virtual plant that can be used in the actual plant. The dynamic model can be run faster than real time with the tuning corrected by applying the speedup factor to the process time constants and integrating process gains. New control functionality can be developed and included in the dynamic model for evaluation. If online metrics show significant improvements in control and process performance, the functionality prototyped can be added as new blocks or as improvements to existing blocks in the DCS.
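
A sketch of the tuning bookkeeping implied by running faster than real time: with a speedup factor S, the lags and deadtimes the controller sees shrink by S and integrating process gains grow by S, so an IMC-style (lambda) tuning keeps the same gain but divides the time settings by S. The process numbers and the factor of 10 below are illustrative assumptions.

```python
# Hedged sketch: lambda tuning for a self-regulating first-order-plus-deadtime
# process, evaluated in real time and in a virtual plant sped up by a factor S.
def lambda_tuning(Kp, tau, theta, lam):
    """Classic lambda tuning: gain and reset time for a FOPDT process."""
    Kc = tau / (Kp * (lam + theta))
    Ti = tau
    return Kc, Ti

# Real-time process (illustrative numbers)
Kp, tau, theta = 1.5, 300.0, 30.0          # gain, time constant (s), deadtime (s)
lam = 3 * theta                            # a common lambda choice
print("real time:", lambda_tuning(Kp, tau, theta, lam))

# Same process in a virtual plant sped up by S: every time term scales by 1/S
S = 10.0
print("sped up  :", lambda_tuning(Kp, tau / S, theta / S, lam / S))
# The controller gain is unchanged; the reset time (and any rate time) is
# divided by S, matching the speedup-factor correction described above.
```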

Better understanding of dynamic model fidelity is needed to meet your objectives. The classification of dynamic models as being just low- or high-fidelity is counterproductive. The Knowledge Base white paper by Martin Berutti of MYNAH Technologies, "Understanding and Applying Simulation Fidelity," describes four levels of fidelity, and Tip #98, "Achieve the Required Simulation Fidelity," in the ISA book by Greg McMillan and Hunter Vegas, "101 Tips for a Successful Automation Career," offers five levels of fidelity.

Note that when the virtual plant has the same control system and setpoints as the actual system, the steady-state fidelity, in terms of how well the virtual plant matches the actual plant, moves from the process variables that are controlled variables to the manipulated variables (typically process and utility flows). The dynamic fidelity translates to how well the transient responses for load disturbances and setpoint changes match up when the actual and virtual plant PIDs have the same tuning settings and the MPCs have the same models. The dynamic process model can be adapted online by an adaptive MPC that has the manipulated flows from the actual plant as the MPC targets, the manipulated flows from the virtual plant as the MPC controlled variables, and dynamic model parameters as the MPC manipulated variables. Identification of the models for the effect of manipulated variables on controlled variables for the adaptive MPC is achieved by running automated tests of the virtual plant independent of the actual plant. The feature article, “Virtual Control of Real pH,” shows how this was done for a critical plant waste treatment system to move from fuzzy logic control to MPC and PID control.

There is not going to be another Shinskey, but we will be doing our best to continue to discover and convey how process and control system knowledge can maximize the synergy between the operator and the control system. To do this, we will show how to exploit the virtual plant in the 4th edition of the ISA book, “Advanced pH Measurement and Control,” the 2nd edition of my ISA book, “Advances in Bioprocess Modeling and Control,” and the 6th edition of my McGraw-Hill handbook, “Process/Industrial Instruments and Control Systems Handbook - Guide to Best Return on Investment of Automation in Process Industries.” I am seeking to provide flexible online distance courses based on these books and demos.

Summary

We have a wealth of information in ISA books, standards and technical reports, and in supplier white papers and books online. We have incredible untapped capability in today’s measurements, control valves, controllers and software. What we are missing is the recognition of what this knowledge and these systems can do to increase the performance of the operator and plant. The virtual plant can reveal the opportunities, detail the solutions, and identify and achieve the benefits that can lead to a rejuvenation of our profession by providing the missing motivation and education. The powerful capability of new blocks and modeling objects enables the average process control engineer to fully exploit the opportunities, and to give the knowledge gained to all of the personnel running and supporting the plant.

[sidebar id =3]

Appendix - Virtual Plant in Practice

Here are four examples of how the Virtual Plant has shown its technical fluency and skill in addressing key process control improvement opportunities in pH, chemical reactor, bioreactor and compressor control. The key roles are emphasized in exploring => discovering => prototyping => testing => justifying => deploying => testing => training => commissioning => maintaining => troubleshooting => auditing => continuous improvement, showing the “before” and “after” benefits of solutions from online metrics.

Exploring and discovering

In the Virtual Plant, we try out all scenarios that can be imagined by operators, maintenance technicians and the engineers, statisticians, and research scientists who support and improve the process. We discover what is important and needs further attention.

In pH control, the acid, base and salt concentrations determine the slope of the titration curve and thus the process gain and the rangeability of the reagent flow. For strong acids and strong bases, the process gain can range from 10 to 100,000, and can theoretically change by an order of magnitude for each pH unit deviation from neutrality. If we identify the titration curve, we can see how it changes with operating conditions, which determines control system design. Weak acids and weak bases, conjugate salts, and carbon dioxide absorption have a profound effect. The extraordinary sensitivity and nonlinearity of pH creates special challenges in total system design. The value and key aspects of equipment, piping, measurement and reagent delivery design can be detailed. The best setpoint for control and reagent minimization also can be found.

In chemical reactor control, there must be a slight excess of each reactant so that no reactant is depleted and stops the reaction. Changes in the production rate must be achieved by a coordinated, synchronized addition of reactants to prevent an unbalance of reactant concentrations from accumulating. If there is a recycle of recovered reactants, there can be a “snowballing effect” that must be addressed in the control of the recycle. The use of reactant concentration analyzers can greatly improve control, but introduces a large deadtime via the analyzer cycle time. The feed rate can be cautiously maximized.

In bioreactor control, the batch cycle time can be days to weeks. A bad batch of new biologics can result in a loss of $10 million to $100 million or more in revenue. The profile of cell growth and product formation is not controlled, and often is not even measured. Most of the new biologics use mammalian cells that are extremely sensitive to pH and temperature, and significantly affected by inhibitors and the ratio of the food (glucose) to amino acid (glutamine) concentration. At-line analyzers can offer accurate measurements of cell concentrations, glucose, glutamine and inhibitors. We can find the opportunities to optimize pH, temperature, glucose and glutamine setpoints, and the tightness of control. We are also able to predict and diagnose the root causes of bad batches, enabling the problem to be addressed and the batches to be aborted or saved.

In compressor control, going into surge is like climbing up a mountain, suddenly going over the edge and falling off a cliff. Surge is characterized by a precipitous drop in flow that can occur in less than 0.03 second. The subsequent oscillations are extremely fast (1-2 seconds). Successive surge cycles cause a loss in compressor efficiency. Conventional controller design is too slow and confused by the nonlinear, repetitive, fast oscillations. The surge curve can be identified as a function of operating conditions and the surge predicted and detected online. Control systems using standard function blocks in a DCS can be used if properly designed. The compressor discharge pressure can be optimized to reduce energy use.

Prototyping and testing

We use the Virtual Plant to creatively employ all the advances in automation and process control technology to implement the best possible innovative solutions for evaluation and improvement. The entire functionality and operability of the actual plant and automation system come into play. Extensive testing of concepts is completed before moving on to the next steps.

In difficult pH control systems, potential solutions involve everything in the plant associated with pH control. More than for any other loop, total knowledge of the plant is needed. Signal characterization, as detailed in the October 2015 Control Talk blog, “Unexpected benefits of signal characterizers,” may be used to linearize the controlled variable by going from pH to linear reagent demand control. Control valves can be specified with extraordinary precision and rangeability, going as necessary to smart simultaneous throttling of coarse and trim valves. The minimization of deadtime in equipment, piping, injection and automation system design is detailed. The effects of electrode coating and failure, and of the noise and spikes often seen in pH systems, are addressed by middle signal selection. The concern about the window of allowable gains is eliminated by going from single-loop vessel pH control manipulating reagent flow to a cascade control system where a primary vessel pH controller manipulates a secondary inline pH controller setpoint, as detailed in the August 2016 Control Talk blog, “Secrets to Good Vessel Temperature and pH Control.”
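
A minimal sketch of the signal characterizer idea: the measured pH is converted to a roughly linear reagent demand by interpolating along the titration curve, and the setpoint is translated through the same curve. The curve points below are made up; in practice they come from lab titrations or the virtual plant model.

```python
# Illustrative pH signal characterizer: the PID works on a reagent demand
# signal instead of the wildly nonlinear pH. The curve points are made up.
import numpy as np

# Reagent-to-feed ratio vs. resulting pH (must be monotonic for interpolation)
reagent_ratio = np.array([0.000, 0.002, 0.004, 0.006, 0.008, 0.010, 0.012])
curve_pH      = np.array([2.0,   2.3,   2.8,   4.0,   7.0,   10.0,  11.5 ])

def ph_to_reagent_demand(pH_measured: float) -> float:
    """Characterized controlled variable: position along the titration curve, 0-100%."""
    ratio = np.interp(pH_measured, curve_pH, reagent_ratio)
    return 100.0 * ratio / reagent_ratio[-1]

# A swing from pH 4 to pH 7 near neutrality is a modest move in reagent
# demand, which keeps the loop gain seen by the PID much more constant.
print(ph_to_reagent_demand(4.0), ph_to_reagent_demand(7.0))
```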

For chemical reactor control, the same concerns about a window of allowable gains are addressed, again, by cascade control. In this case, the secondary loop is often a jacket inlet or outlet temperature controller. The concern is greatly heightened for highly exothermic reactors because of the potential for a runaway response and the possibility that the window of allowable gains may close due to too large a heat transfer surface or thermowell lag. We look at how the measurement location and type (including analyzers) and the control strategy improve the ability to minimize wasted reactants and undesirable side reactions. For the control strategy, we look at ratio control and the use of equal signal filter times on all the reactant feed setpoints to eliminate unbalances. We see how to prevent snowballing from recycle streams by going to discharge flow control with level control manipulating total reactant feed flow, residence time control by an intelligent level setpoint, makeup reactant flow computed from recycle flow, the use of a PID designed for wireless to deal with at-line analyzer cycle time, and, finally, the maximization of feed rate by a valve position controller looking at constraints.
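
A minimal sketch of the equal-filter-time idea: both reactant feed setpoints follow a production rate change through identical first-order filters, so their ratio stays balanced during the transition. The 2:1 ratio, filter time and flows are made-up values.

```python
# Coordinated reactant addition: identical setpoint filters keep the ratio
# balanced while the production rate changes. All numbers are illustrative.
def first_order_filter(prev: float, target: float, dt: float, tau: float) -> float:
    return prev + (target - prev) * dt / tau

dt, tau = 1.0, 120.0                 # equal signal filter time on both setpoints, s
ratio = 2.0                          # reactant B per unit of reactant A (assumed)
sp_a, sp_b = 10.0, 20.0              # current feed setpoints, kg/min
target_a = 15.0                      # new production-rate request for reactant A

for k in range(600):
    sp_a = first_order_filter(sp_a, target_a, dt, tau)
    sp_b = first_order_filter(sp_b, ratio * target_a, dt, tau)
    # Because the filters are identical, sp_b / sp_a holds at the ratio throughout
print(f"sp_a = {sp_a:.2f}, sp_b = {sp_b:.2f}, ratio = {sp_b / sp_a:.2f}")
```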

For bioreactor control, the PID structure and tuning is focused on eliminating overshoot of temperature and pH setpoints. The accuracy and reliability of the temperature and pH measurements is foremost because of the extreme sensitivity of mammalian cells. The optimum setpoints can be evaluated by looking at growth rates and product formation rates in the Virtual Plant. Trials can be made using a deadtime block to compute rate of change of key process variables for profile slope control and endpoint prediction. Optimization of profile slopes by manipulation of the glucose-to-glutamine ratio can be exemplified. Prevention of unnecessary crossings of the split-range point using an enhanced PID with directional move suppression can be shown as a way of reducing cell osmotic pressure from sodium bicarbonate accumulation. Prevention of the accumulation of possible inhibitors can be studied.
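
A minimal sketch of the deadtime-block rate-of-change trick used above for profile slope control: delay the variable, subtract, divide by the delay time to get a noise-tolerant slope, then project the time to an endpoint. The class name, deadtime, sample time and batch numbers are all illustrative assumptions.

```python
# Illustrative deadtime-block rate of change for batch profile slope control
# and endpoint prediction. Numbers are made up.
from collections import deque

class DeadtimeRateOfChange:
    def __init__(self, deadtime_s: float, sample_s: float):
        steps = max(1, int(deadtime_s / sample_s))
        self.buf = deque(maxlen=steps + 1)
        self.deadtime = steps * sample_s

    def update(self, x: float) -> float:
        """Return the rate of change over the last deadtime window, in x units per second."""
        self.buf.append(x)
        return (self.buf[-1] - self.buf[0]) / self.deadtime if len(self.buf) > 1 else 0.0

roc = DeadtimeRateOfChange(deadtime_s=600.0, sample_s=60.0)
batch_pv, endpoint = 12.0, 30.0        # e.g., a product concentration profile
for k in range(30):                    # each iteration represents one 60 s sample
    batch_pv += 0.05                   # pretend growth of 0.05 units per sample
    slope = roc.update(batch_pv)
if slope > 0:
    print(f"slope = {slope * 60:.2f} units/min, "
          f"projected time to endpoint ~ {(endpoint - batch_pv) / slope / 3600:.1f} h")
```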

For compressor control, the aspects of impending surge prediction and actual surge detection using a deadtime block to create a suction flow rate of change can be detailed. The benefit and implementation details of an open-loop backup can be trialed. The control of stages in series and compressors in parallel can be detailed. The required speed of measurements, valves and controllers can be determined. Finally, minimizing the compressor discharge pressure setpoint and energy by use of a valve position controller via an enhanced PID with directional move suppression can be demonstrated.
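
Using a suction flow rate of change from a block like the one sketched after the bioreactor paragraph, the surge detection and open-loop backup logic can be prototyped in a few lines. The trip thresholds, hold time and function name below are made-up assumptions for illustration, not a vendor design.

```python
# Hedged sketch of surge detection and open-loop backup logic driven by the
# suction flow rate of change. Thresholds, timing and names are illustrative.
def surge_backup(flow_roc_pct_per_s: float,
                 margin_to_surge_curve_pct: float,
                 backup_active: bool,
                 time_in_backup_s: float):
    """Return (open-loop output in % or None, backup_active)."""
    DROP_TRIP = -20.0      # %/s: a precipitous suction-flow drop signals surge start
    PREDICT_TRIP = 2.0     # %: predicted crossing of the surge curve
    HOLD_S = 10.0          # hold the surge valve open before handing back to the PID

    if flow_roc_pct_per_s < DROP_TRIP or margin_to_surge_curve_pct < PREDICT_TRIP:
        return 100.0, True               # step the surge (recycle) valve wide open
    if backup_active and time_in_backup_s < HOLD_S:
        return 100.0, True               # keep holding until the hold time expires
    return None, False                   # None = release to the normal surge PID

# Example: a -35 %/s drop in suction flow trips the open-loop backup
print(surge_backup(-35.0, 8.0, False, 0.0))
```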

Justifying and deploying

Online metrics can be used to document the “before” and “after” improvements in process performance. To better understand how to effectively configure and use metrics, see the July 2017 Control Talk blog, “Insights to Process and Loop Performance.” The details of the implementation are captured by the download of Virtual Plant configurations and displays into the testing and training system.

Testing and training

Extensive testing of the actual implementation is used to further evaluate all possibilities. Training of operators and all personnel responsible for performance is done, with possible improvements sought and incorporated in the final solution. The orders-of-magnitude better understanding and communication gained by seeing the Virtual Plant in operation for various scenarios is fully exploited. The additional intelligence identified to minimize disruption during commissioning is employed. The “before” and “after” metrics are carefully monitored and improvements made accordingly.

Maintaining and troubleshooting

How the plant behaves when instruments and valves are not the best selection, or are not installed using the best practices, is identified by the simulation of abnormal conditions. The knowledge gained during testing and training is effectively used and increased. The results of data analytics software can be studied for cause and effect, extraneous inputs eliminated, and the dynamic compensation of inputs for continuous processes identified.

Auditing and continuous improvement

After the innovations have been in service for a representative period of time, increases in process capacity or efficiency are documented by intelligent evaluation. Continuous improvement is facilitated and verified by running test cases in the Virtual Plant as experience is gained and the knowledge base builds.

About the Author

Greg McMillan | Columnist

Greg K. McMillan captures the wisdom of talented leaders in process control and adds his perspective based on more than 50 years of experience, cartoons by Ted Williams and Top 10 lists.
