After accessing and relaying data via traditional methods that rely mostly on hardware, some users are beginning to realize they’ll soon need far more bandwidth to analyze all the information that’s going to come in, and that added capacity is likely to rely mostly on software and the Internet.
“Many users want to adopt more analytics, but they don’t have the underlying infrastructure. They have historians, databases, alarm tables, manufacturing execution systems (MES) and enterprise resource planning (ERP) systems, but the way all these separate elements are stacked can’t scale up,” says Jeff Knepper, president of Flow Software. “Forty years of point-to-point networking and trying to connect and move data from historians to ERPs means users spend 80% of their time on a typical analytics project cleaning raw data. This is because the raw information and its communications aren’t standardized, so they can’t be understood by an enterprise system that’s trying to analyze them. This is a lot of work for just a few diamonds of insight in the rough, especially when these analytics projects are only going to get more massive.”
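To illustrate the kind of cleanup Knepper describes, the sketch below normalizes two hypothetical source records, one from a historian and one from an MES, into a single shared schema. The field names, units and conversion are assumptions for illustration only, not Flow Software’s implementation.

```python
# Illustrative sketch: mapping raw records from two hypothetical sources
# onto one common definition, the cleanup step that consumes most of a
# typical analytics project when data isn't standardized at the source.
from datetime import datetime, timezone

# Hypothetical source records: same measurement, different names, units and formats
historian_row = {"tag": "FT-101.PV", "val": 12.7, "ts": "2024-05-01T13:00:00Z", "unit": "m3/h"}
mes_row = {"point": "Line1_Flow", "reading": "211.6", "time": 1714568400, "uom": "L/min"}

def to_common(record):
    """Normalize a source-specific record to a shared schema (name, value in m3/h, UTC timestamp)."""
    if "tag" in record:  # historian format
        return {
            "measurement": "line1.flow",
            "value_m3h": float(record["val"]),
            "timestamp": record["ts"],
        }
    # MES format: convert L/min to m3/h and epoch seconds to ISO 8601
    return {
        "measurement": "line1.flow",
        "value_m3h": float(record["reading"]) * 60 / 1000,
        "timestamp": datetime.fromtimestamp(record["time"], tz=timezone.utc).isoformat(),
    }

print(to_common(historian_row))
print(to_common(mes_row))
```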
Just as scalability and unstandardized data are a problem between plant floors and enterprise levels, they also make it difficult, if not impossible, to gather and analyze information from multiple facilities. These sites each have their own data silos that typically haven’t established common definitions or formats, and frequently can’t share detailed enough information to be useful for advanced analysis.
“How can you govern definitions for analytics from sources spread across 20-30 locations? We have to step back and determine what will work for all of these sites and applications,” explains Knepper. “This is content that only the frontline guys at each facility have, but it has to be filtered and made consistent, so the data scientists can analyze it. This means adding some metadata and context about the state of each process when the data was collected.”
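The snippet below sketches what that contextualization might look like: a raw reading is wrapped with site, asset and process-state metadata before it leaves the facility. The field names and values are hypothetical, not a Flow Software schema.

```python
# Illustrative sketch: attaching the metadata and context Knepper describes
# to a raw value, so a central data science team can interpret it.
import json
from datetime import datetime, timezone

def contextualize(site, line, tag, value, unit, process_state):
    """Attach site, asset and process-state context to a raw reading."""
    return {
        "site": site,                    # which of the 20-30 locations produced it
        "line": line,
        "tag": tag,
        "value": value,
        "unit": unit,                    # engineering unit, so values are comparable
        "process_state": process_state,  # e.g. "startup", "steady-state", "cleaning"
        "collected_at": datetime.now(timezone.utc).isoformat(),
    }

record = contextualize("plant-07", "packaging-2", "filler.speed", 412, "bpm", "steady-state")
print(json.dumps(record, indent=2))
```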
Central library for multiple locations
Flow Software accomplishes this by providing a central library of definitions, so users can bridge their operations and enterprises with a common work environment, and run Flow Software’s engine for analytics, data storage and distribution. To break down former silos and barriers between data sources, Flow Software advises each location to establish a unified namespace (UNS) architecture, which groups all of the site’s data producers and consumers around an MQTT broker. This publish-subscribe communications protocol lets each producer and consumer share their data with the others without the cumbersome, dedicated protocols and networking of the past. The UNS at each site then connects to Flow Software’s Unified Analytics Framework (UAF), which delivers large amounts of historical data to the enterprise across the same architecture. This bidirectional link also takes data back to field devices, such as PLCnext embedded computing/control modules from Phoenix Contact.
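As a rough sketch of how a UNS publish works, the example below uses the open-source paho-mqtt client to push a contextualized value onto a hierarchical topic. The broker address, topic layout and payload fields are assumptions for illustration, not Flow Software’s UAF schema.

```python
# Illustrative sketch of a UNS-style publish, assuming the paho-mqtt client
# and a hypothetical broker at "broker.plant07.local".
import json
import paho.mqtt.client as mqtt

# UNS convention: a hierarchical topic path such as enterprise/site/area/line/tag
TOPIC = "acme/plant-07/packaging/line-2/filler/speed"

payload = json.dumps({"value": 412, "unit": "bpm", "quality": "good",
                      "timestamp": "2024-05-01T13:00:00Z"})

client = mqtt.Client()                    # paho-mqtt 1.x style constructor
client.connect("broker.plant07.local", 1883)
client.publish(TOPIC, payload, qos=1, retain=True)  # retained so late subscribers get the last value
client.disconnect()
```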
“UNS connects to UAF and provides data streams that are easy to use and feed back to local hubs, historians and PLCs,” explains Knepper. “This lets users know immediate outcomes at the enterprise level, and allows production sites and control systems to benefit from real-time analytics and long-sought predictive analytics. This has been a holy grail for many users because it can identify causes and correlations, such as achieving 90% accuracy in identifying which components will fail soon. If it brings together the right databases and receives sufficient data, predictive analytics can also indicate what replacement parts are available, which technician is on vacation, which product is facing a deadline, and what actions could reduce production run rates but still meet that deadline.”
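The return path Knepper mentions could look something like the subscriber below, which listens for enterprise analytics results over the same UNS/MQTT architecture and hands them to local systems. The topic name, payload fields and handler are hypothetical, not a documented Flow Software interface.

```python
# Illustrative sketch of the feedback direction: a site-level service
# subscribing to enterprise analytics results and acting on them locally.
import json
import paho.mqtt.client as mqtt

RESULTS_TOPIC = "acme/plant-07/analytics/predictions/#"

def on_message(client, userdata, msg):
    prediction = json.loads(msg.payload)
    # Hypothetical payload: {"asset": "filler-2", "failure_probability": 0.91, "horizon_hours": 72}
    if prediction.get("failure_probability", 0) > 0.9:
        print(f"Schedule maintenance on {prediction['asset']} "
              f"within {prediction['horizon_hours']} h")
        # A real deployment might write a flag or setpoint back to a local historian or PLC here.

client = mqtt.Client()          # paho-mqtt 1.x style constructor
client.on_message = on_message
client.connect("broker.plant07.local", 1883)
client.subscribe(RESULTS_TOPIC, qos=1)
client.loop_forever()
```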