Heath Stephens, PE, is the digitalization leader at Hargrove Controls & Automation in Mobile, Ala. He began working with process controls 30 years ago in the specialty chemicals industry, including installing historians at multiple sites with distributed control systems (DCS) and control rooms where users could pull up predefined trends. Hargrove is a division of Hargrove Engineers & Constructors and a certified member of the Control System Integrators Association (CSIA).
“Over that time, they transitioned from clipboards and logbooks to desktop PCs in the 1990s and OSI PI software in the 2000s. Since then, data has become cheaper to collect, so users expect to pull it from thousands of tags and easily get it to engineering stations,” says Stephens. “However, there’s far more information collected now and fewer staff to handle it. As a result, many users are drowning in a sea of data, and only identify trends when they have time.”
Stephens reports that Hargrove previously used AutoCAD and P&ID tools for process engineering, and gave its clients binders of documents and PDFs of specification sheets and other reports. “In the past, we had access to information that we could download from E&I departments, but now we’re building 3D models of projects that clients can view on goggles,” says Stephens. “Data presently comes over as design databases and engineering digital twins, and these are used to seed control systems. Instead of using Excel spreadsheets and content that must wait to be organized and published, clients and their controls groups get data that’s more accurate and up-to-date, and has a single point of truth. This is going to grow even more in the next 5-10 years, and will be how design and engineering companies work. Downstream operations will deploy from digital twins that can propagate throughout their systems, and show users how changes will affect their processes.”
AI gains ground and acceptance
Stephens reports that a primary solution to the more-information, fewer-people problem is artificial intelligence (AI), which is becoming increasingly prevalent in analytics software, where it automatically correlates many types of inputs and makes recommendations for optimizing processes.
“Historically, most graphing and trending tools, and especially Excel table formatting, were very much do-it-yourself (DIY). They basically said, ‘Here’s all the data,’ but we don’t have time for that anymore,” says Stephens. “Adding AI to data analytics is kind of like having a summer intern, who comes in with little experience, but has the bandwidth for tasks that regular users don’t have time for. It can spot patterns in energy use, equipment health, product quality and other situations, and ask users if they want to address them.”
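To make the intern analogy concrete, the following Python sketch shows one simple way such a tool might surface patterns in energy use: flag readings that drift well outside a rolling baseline, so an engineer only reviews the flagged hours. The window size, threshold and sample data are illustrative assumptions, not a description of any specific analytics product.

```python
# A minimal sketch of the pattern-spotting "summer intern": scan a series of
# hourly energy readings and flag values far outside a trailing baseline.
import statistics
from collections import deque

def flag_energy_anomalies(readings, window=24, sigma=3.0):
    """Return (index, value) pairs more than `sigma` standard deviations
    away from the mean of the previous `window` samples."""
    history = deque(maxlen=window)
    flags = []
    for i, value in enumerate(readings):
        if len(history) == window:
            mean = statistics.fmean(history)
            stdev = statistics.pstdev(history)
            if stdev > 0 and abs(value - mean) > sigma * stdev:
                flags.append((i, value))
        history.append(value)
    return flags

# Example: a steady ~100-kW load with one excursion the tool would surface.
hourly_kw = [100 + (i % 3) for i in range(48)]
hourly_kw[40] = 140          # simulated upset
print(flag_energy_anomalies(hourly_kw))   # -> [(40, 140)]
```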
Beyond integrating AI, Stephens reports that data analytics software is also gaining web-browser access that makes graphs and trends more portable, and putting data into better context to make it more meaningful and useful. “Previously, trends were organized by tag names, for example, which may or may not give clues to their role in the process. Now, they can be put into context according to the type of equipment, and we can seek values for each,” says Stephens. “Likewise, software can organize devices and their data into containers and frameworks, and assign tags when needed. This is the same remedy we sought back on the frontier, but it’s less DIY now, and lets us develop digital twins, design data requirements, tally those designs, and create 3D models—as well as put some data back into the controls, and pull results from the historian later.”
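A minimal sketch of this kind of context-building might look like the following, where historian tags are bound to roles defined by an equipment-type template instead of being left as bare tag names. The class names, template fields and tag names are assumptions for illustration; commercial asset frameworks supply their own schemas.

```python
# Group raw historian tags under equipment "containers" built from a template
# per equipment type, so data carries context about its role in the process.
from dataclasses import dataclass, field

@dataclass
class EquipmentTemplate:
    equipment_type: str        # e.g., "CentrifugalPump"
    attributes: list           # roles every instance of this type should expose

@dataclass
class EquipmentInstance:
    name: str
    template: EquipmentTemplate
    tag_map: dict = field(default_factory=dict)   # role -> historian tag

    def bind(self, role, historian_tag):
        if role not in self.template.attributes:
            raise ValueError(f"{role} is not defined for {self.template.equipment_type}")
        self.tag_map[role] = historian_tag

pump_template = EquipmentTemplate(
    "CentrifugalPump",
    ["suction_pressure", "discharge_pressure", "motor_current"],
)

p101 = EquipmentInstance("P-101", pump_template)
p101.bind("suction_pressure", "PI1013.PV")   # a cryptic tag now carries context
p101.bind("motor_current", "II1015.PV")
print(p101.tag_map)
```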
Telling twins apart
Even though digital twins can assist analytics, Stephens cautions that this approach is still in its early stages, and there are many types of models to evaluate and coordinate because there isn’t yet a unified strategy. “I like to tell people there isn’t one digital twin with common software and consistent timing,” says Stephens. “There are engineering digital twins with design data. There are digital twins to simulate operations, which are mostly individually customized for specific purposes. We’re not at one, big holistic thing yet, which can handle massively complex views of processes, and let users do experiments to show what would really happen in the field. There are still many dynamics and physics that aren’t understood well enough, and they need more accurate digital twins to emulate the physical world.”
Despite these limitations, system integrators and users can still use digital twins or other models in conjunction with data analytics to optimize operations and protect revenue. “Advanced process control (APC) has been available for 20 years, and its strategies can be used to develop digital twins,” adds Stephens. “This means talking to clients, and learning which pain points are keeping them from meeting their performance goals. The difference today is that many APC tools are less costly and complex. Plus, they’re not just for refineries, and can be applied to any continuous chemical process. We also have more predictive reliability, maintenance and quality solutions that can provide good insights. These include reliability, availability and maintainability (RAM) modeling tools that run Monte Carlo analyses of random variables to calculate overall equipment effectiveness (OEE), pinpoint weak spots in production, and indicate where to invest to remove bottlenecks. More users are also adopting cloud computing services and other IT-based solutions to aid their operations. For example, we’re seeing a big push toward low-code/no-code software that gives users standard templates for devices like valves and motors, and lets them fill in values for their individual processes.”
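As a rough illustration of the RAM-style Monte Carlo analysis Stephens mentions, the sketch below samples random failures and repairs for two units in series, estimates line availability, folds in assumed performance and quality factors to produce an OEE figure, and reports which unit contributes the most downtime. The failure and repair rates, run length and factors are invented for the example, not drawn from any plant or specific RAM package.

```python
# Toy RAM Monte Carlo: estimate availability and OEE for a two-unit series line,
# and identify the unit that contributes the most downtime (the bottleneck).
import random

MTBF = {"Pump": 1000.0, "Dryer": 2500.0}   # mean hours between failures (assumed)
MTTR = {"Pump": 8.0, "Dryer": 24.0}        # mean hours to repair (assumed)
HOURS = 8760                               # one year of operation
TRIALS = 1000

def simulate_line(seed=1):
    random.seed(seed)
    downtime_by_unit = {u: 0.0 for u in MTBF}
    line_up_hours = 0.0
    for _ in range(TRIALS):
        down_hours = set()                 # hours when the series line is down
        for unit in MTBF:
            t = 0.0
            while t < HOURS:
                t += random.expovariate(1.0 / MTBF[unit])     # time to next failure
                repair = random.expovariate(1.0 / MTTR[unit]) # repair duration
                if t < HOURS:
                    end = min(t + repair, HOURS)
                    downtime_by_unit[unit] += end - t
                    down_hours.update(range(int(t), min(int(end) + 1, HOURS)))
                t += repair
        line_up_hours += HOURS - len(down_hours)
    availability = line_up_hours / (TRIALS * HOURS)
    oee = availability * 0.95 * 0.98       # assumed performance and quality factors
    bottleneck = max(downtime_by_unit, key=downtime_by_unit.get)
    return availability, oee, bottleneck

avail, oee, unit = simulate_line()
print(f"availability={avail:.3f}  OEE={oee:.3f}  largest downtime contributor: {unit}")
```

Even at this level of simplification, the output shows the kind of decision support Stephens describes: it points to the unit whose reliability improvements would remove the most lost hours.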