Beyond simply moving faster, data analytics simplifies and shortens the path from raw process data to better decisions.
“Data analytics isn’t difficult; it’s pretty easy and straightforward. It’s just that today we have a lot more data from a lot more sources, but what we do with it and the metrics we use are still the same,” says John Clemons, MES solutions consultant at Maverick Technologies, a system integrator and Rockwell Automation company. “Users still want to do everything better, faster and cheaper. They want to increase first-pass yields and productivity, reduce costs, and improve quality, output and margins. However, with more data and sources, they can begin to see interactions they couldn’t see before—even though it’s difficult to separate the wheat from the chaff.”
Brian Bolton, consultant specializing in Aveva PI applications at Maverick, reports that data and useful analytics gain added value as they’re made available across all departments in an organization. “For example, if the quality department can see production and finance data at the same time, they can determine what quality costs more precisely,” explains Bolton. “Or, if process data and quality results from laboratory information management system (LIMS) software, which weren’t combined electronically before, are pulled into a historian, they enable cross-platform reporting. This lets users slice and dice data by cost, and find facts like 20% of their products contributing 80% of their profits, or learn which products aren’t profitable and adjust prices or stop selling them.”
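To picture the kind of slicing and dicing Bolton describes, here is a minimal sketch, assuming hypothetical product codes, margins and reject rates, that joins production, finance and quality tables and ranks products by profit contribution:

```python
# Minimal sketch: join production, finance and quality data on a product
# code, then rank products by profit contribution (a Pareto/80-20 view).
# All names and figures below are invented for illustration.
import pandas as pd

production = pd.DataFrame({
    "product": ["A", "B", "C", "D"],
    "units":   [12000, 8000, 3000, 1500],
})
finance = pd.DataFrame({
    "product":         ["A", "B", "C", "D"],
    "margin_per_unit": [1.80, 0.95, 0.40, -0.10],  # negative = sold below cost
})
quality = pd.DataFrame({
    "product":     ["A", "B", "C", "D"],
    "reject_rate": [0.02, 0.05, 0.12, 0.20],
})

df = production.merge(finance, on="product").merge(quality, on="product")
df["profit"] = df["units"] * (1 - df["reject_rate"]) * df["margin_per_unit"]

# Sort by profit and accumulate the percentage each product contributes.
df = df.sort_values("profit", ascending=False)
df["cum_profit_pct"] = 100 * df["profit"].cumsum() / df["profit"].sum()
print(df[["product", "profit", "cum_profit_pct"]])
```

Sorting and accumulating the contribution makes the 20%-of-products, 80%-of-profits pattern, and the unprofitable stragglers, easy to spot regardless of which reporting tool ultimately does the join.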
Add value with analytics
Because users can generate data about how to make their products closer to perfect, Bolton states they can also sell related information and services to add more value to existing products. This can include identifying break-even points or determining if an item should be sold as a top-line or midrange product. “We work in the world of process historians, but now we can visualize everything,” adds Bolton. “The glory of HTML5 applications and software like Aveva’s PI Vision is that we can see information on any device, set up links to pull production and LIMS data, and perform analytics on them. This also lets us look at historian and live data together, so we can compare batches during every cycle to achieve greater consistency.”
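As a rough illustration of that batch-to-batch comparison, the sketch below stands in for historian data with two synthetic trends and flags where a live batch drifts outside a tolerance band around a “golden” profile; the values and limits are invented, not anything pulled from PI Vision:

```python
# Compare a live batch trend against a stored "golden" batch profile and
# flag samples that drift outside a tolerance band. Both series are
# synthetic stand-ins for historian data.
import numpy as np
import pandas as pd

minutes = pd.timedelta_range("0min", "120min", freq="5min")
golden = pd.Series(1150 + 50 * np.sin(np.linspace(0, 3, len(minutes))), index=minutes)
live = golden + np.random.normal(0, 15, len(minutes))  # live batch with noise

tolerance = 40  # allowable deviation from the golden profile
deviation = (live - golden).abs()
out_of_band = deviation[deviation > tolerance]

if out_of_band.empty:
    print("Batch is tracking the golden profile within tolerance.")
else:
    print(f"{len(out_of_band)} sample(s) outside the golden-batch band:")
    print(out_of_band)
```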
Bolton acknowledges that many of these analytical functions were performed in the past, but users wielding clipboards, tracing paper and Excel spreadsheets typically didn’t have the time to mix and compare pieces of information coming from production, LIMS and other systems. “Fifteen years ago, we’d see a 200-300 centipoise range for a base resin precursor in plywood manufacturing, and if this enabled a 1,100-1,300 viscosity range, it would be no problem to make the product,” explains Bolton. “However, if viscosity could be maintained at 1,100, it would be easier to set up the equipment, and save material, time and adjustments. Now, we have historians and DCS software with data analytics that can let users know more about the attributes of their resin, and fully automate and monitor their batch process, so all they have to do is check a sample at the end.
“Veteran users worry that no one will know how to perform these tasks manually in the future, but data analytics can supplement what they know. By linking inline viscometers and pH meters to data analytics, we can see curves and trends immediately, know what’s happening as a batch is produced, maybe make adjustments on the fly, and prevent off-spec and bad batches before they occur.”
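To make “adjustments on the fly” concrete, here is a minimal sketch under assumed conditions: hypothetical inline viscometer readings, a rolling slope, and a simple projection that warns while the batch can still be corrected. The numbers echo the 1,100-1,300 range quoted above but are otherwise invented:

```python
# Watch a rolling viscosity trend during the batch and warn if the
# projection would breach the spec ceiling, instead of waiting for the
# end-of-batch sample. Readings and limits are hypothetical.
import pandas as pd

readings = pd.Series([1105, 1110, 1118, 1130, 1152, 1181, 1215, 1248])  # one sample per minute

upper_limit = 1300                                   # spec ceiling
rolling_slope = readings.diff().rolling(window=3).mean()

for i, slope in rolling_slope.items():
    if pd.notna(slope) and slope > 0:
        projected = readings.iloc[i] + slope * 5     # project five samples ahead
        if projected > upper_limit:
            print(f"Sample {i}: trending toward {projected:.0f}, adjust now")
```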
Skid to code to cloud
For instance, Clemons reports that one of Maverick’s clients is a skid builder that uses its own controls, including a historian built into its onboard, edge-computing system. The historian connects to the user’s enterprise network and cloud-computing service twice a week to transfer all the data it’s collected. “This is a very simplified, plug-and-play system that was gradually developed, but it’s made each of their jobs a little easier,” says Clemons. “This builder previously used DOS and C++ software to gather and analyze data, but they don’t need it anymore. Now, instead of writing code from scratch, they can just grab the code they need from GitHub, import it and add their own production numbers, which drastically reduces their programming time.”
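A bare-bones sketch of that twice-a-week transfer might look like the following; the local SQLite store, table layout and cloud endpoint are placeholders rather than the builder’s actual system:

```python
# Push historian records collected since the last sync from an edge store
# to an enterprise/cloud endpoint. Database, table and URL are placeholders.
import json
import sqlite3
import urllib.request

EDGE_DB = "skid_historian.db"
CLOUD_ENDPOINT = "https://example.com/api/skid-data"  # placeholder URL

def push_new_records(last_synced_ts: str) -> None:
    con = sqlite3.connect(EDGE_DB)
    rows = con.execute(
        "SELECT tag, ts, value FROM history WHERE ts > ?", (last_synced_ts,)
    ).fetchall()
    con.close()

    payload = json.dumps(
        [{"tag": tag, "ts": ts, "value": val} for tag, ts, val in rows]
    ).encode()
    request = urllib.request.Request(
        CLOUD_ENDPOINT, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request) as resp:
        print("Cloud accepted", resp.status, "-", len(rows), "records sent")

# Typically triggered by a scheduler (e.g., cron) twice a week:
# push_new_records("2024-01-01T00:00:00")
```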
Clemons adds that the key to today’s data analytics is enabling LIMS and other systems to talk to whatever and whoever needs them. This includes allowing PI software to communicate via OPC UA and other protocols, and letting text files from machines be easily converted and added to applicable databases.
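The text-file step is usually just parsing and loading; the sketch below uses an invented machine export and an in-memory SQLite table to show the idea:

```python
# Parse a flat machine export (CSV-style text) and load it into a table
# that reporting tools can query. The export and column names are invented.
import csv
import io
import sqlite3

raw = """machine,ts,temp,speed
MX-1,2024-05-01T08:00:00,181.4,1200
MX-1,2024-05-01T08:01:00,182.0,1198
"""

con = sqlite3.connect(":memory:")  # a real system would target a shared database
con.execute("CREATE TABLE machine_log (machine TEXT, ts TEXT, temp REAL, speed REAL)")
con.executemany(
    "INSERT INTO machine_log VALUES (:machine, :ts, :temp, :speed)",
    csv.DictReader(io.StringIO(raw)),
)
print(con.execute("SELECT COUNT(*), AVG(temp) FROM machine_log").fetchone())
```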
“If we can acquire and coordinate formerly disparate data, we can learn enough about what’s going on with processes to answer deeper ‘why’ questions about production, computerized maintenance management systems (CMMS) and materials from their suppliers. For example, if a final product is off spec, users can more easily identify problems with particular raw materials or cleaning and maintenance functions,” adds Clemons. “This is classic multivariate analysis, but it shows the value of bringing together data sources that weren’t together, and learning what we couldn’t know before. There isn’t just one dataset. There are different datasets for production, quality, suppliers, materials, maintenance and people. The intersection of these datasets is where we get the real insights that we couldn’t get until now.”
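As a closing illustration of that intersection, here is a minimal sketch, with invented batch, raw-material and maintenance records, that joins three datasets and checks which factors move with off-spec product:

```python
# Join batch results with raw-material lots and maintenance history, then
# check which candidate factors correlate with off-spec outcomes.
# All records are invented for illustration.
import pandas as pd

batches = pd.DataFrame({
    "batch":    [1, 2, 3, 4, 5, 6],
    "lot":      ["L1", "L1", "L2", "L2", "L3", "L3"],
    "off_spec": [0, 0, 1, 0, 1, 1],
})
materials = pd.DataFrame({
    "lot":      ["L1", "L2", "L3"],
    "moisture": [0.8, 1.4, 1.9],  # supplier certificate-of-analysis value, percent
})
maintenance = pd.DataFrame({
    "batch":           [1, 2, 3, 4, 5, 6],
    "hrs_since_clean": [2, 6, 30, 10, 41, 55],
})

df = batches.merge(materials, on="lot").merge(maintenance, on="batch")

# Correlation of each candidate factor with the off-spec outcome.
print(df[["off_spec", "moisture", "hrs_since_clean"]].corr()["off_spec"])
```

A real analysis would span historian, LIMS, CMMS and supplier data and use richer models than a simple correlation, but the insight comes from the same place: datasets that were never lined up before.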