Industrial Computers, Part 2. Data Processing Escapes the Enclosure
By Jim Montague, Executive Editor
Computers have gone from being impersonal to very personal to basically everywhere. In a few short decades, they evolved from huge calculating devices in laboratories to individual data processing units on everyone's desk or lap. And now, it seems they're heading back into centralized server farms to manage the virtualized computing and cloud-based services we'll all be using soon.
Of course, this technological ebb and flow is driven by computing hardware that's grown relentlessly more powerful, faster, smaller and less expensive—and the plant floor is no exception. Industrial computers have followed the same path, and process control engineers, technicians and operators have gone from bulky desktop boxes in costly enclosures to sealed touchscreen HMIs and onward to tablet PCs and smartphones. And, because of their faster, smaller, cheaper microprocessors, computers can take almost any form, be embedded in almost any front-line device from sensors to motors, and perform data processing in almost any location or process control application.
For example, Marquis Management Services Inc. (www.marquisenergy.com) in Hennepin, Ill., operates several ethanol refineries in the Midwestern United States, where it's pursuing long-term sustainability by striving to be the low-cost producer in its industry. "We have a lot of data to collect and analyze to better predict operating parameters and reduce variability," explained Jason Marquis, president of Marquis. "Even small, 1% improvements in production can mean millions of dollars in savings, so we're creating multivariable process control models with help from Rockwell Automation's (www.rockwellautomation.com) engineers that can help us produce the highest-quality product at the lowest cost."
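Marquis hasn't published the details of its models, but as a minimal sketch of the idea, a multivariable model can be fit to historical operating data and then used to predict output for proposed operating conditions. Every number, parameter name and data point below is a hypothetical illustration, not Marquis' actual process data.

```python
import numpy as np

# Hypothetical historical data: each row is one batch's operating
# parameters (temperature, enzyme dose, fermentation hours); y is the
# measured yield. All values are illustrative only.
X = np.array([
    [32.1, 0.85, 52.0],
    [31.4, 0.90, 54.5],
    [33.0, 0.80, 50.0],
    [32.6, 0.88, 53.0],
])
y = np.array([14.2, 14.6, 13.9, 14.4])  # yield, e.g. % by volume

# Append a column of ones so the model includes an intercept term.
A = np.column_stack([X, np.ones(len(X))])

# Ordinary least squares fit: coeffs maps operating parameters to yield.
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)

# Predict the yield of a proposed set of operating parameters.
proposed = np.array([32.3, 0.87, 53.5, 1.0])
print("predicted yield:", proposed @ coeffs)
```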
Because many of its bio-refineries are located in small, remote towns, Marquis is even connecting key off-site engineers with these facilities by issuing iPads to some of its local operators, which gives everyone access to the data they need—both for routine operations and to run its new multivariable models. "This is also empowering people who have been using mostly wrenches for much of their careers," added Marquis. "Now, instead of the maintenance guy having to radio in from the field and then wait for actuations to come out from a central control room, he can take the iPad into the field and make direct adjustments as needed."
On the Web, in the Cloud
No matter how computing formats evolve or where they're located, the ultimate goal of process control is still production optimization and efficiency. Speedy networking is allowing all kinds of data processing via the Internet, so users can perform basic calculations on websites, contract to have much or all of their computing done by cloud-based services, or even set up internal servers that do the computing for many employees and applications.
For example, Bronco Wine Co. (www.broncowine.com) in Ceres, Calif., not only produces and ships more than 45 million liters per year of its own Charles Shaw brand, but also hosts other vintners that use its facilities to produce their own wines (Figure 1). These processes require lots of rigorous testing and process validation, and Bronco's expanding operations need this data to be distributed to a growing user group. Unfortunately, Bronco's former SCADA solution for environmental controls and HMI interfacing suffered sporadic, sometimes unresponsive communications, and the vendor reportedly was interested mainly in being paid to re-license Bronco's existing users. In addition, Bronco's enterprise plant management and SCADA solution needed to integrate with databases from other enterprise software applications, such as ProPak and IFS Maintenance Management Software. Also, the company needed to provide scalability for more than 150 clients in its four California plants, plus remote access for troubleshooting.
Figure 1: Bronco Wine Co.'s winemaking plants in California use Inductive Automation's web-based FactoryPMI software for process monitoring, control and troubleshooting, helping the company hit daily productivity targets and improve overall efficiency by 30%.
To better coordinate its production and enterprise systems, Bronco recently implemented Inductive Automation's (www.inductiveautomation.com) FactoryPMI plant management and SCADA software. Using an SQL database as its engine, FactoryPMI is built on Java and OPC software platforms and employs a web browser to launch its client interface, so any computer that can connect to the network and run a browser is a FactoryPMI client. This lets users with login identifications access the system at whatever level their login group specifies and allows administrators to add or delete users in real time from anywhere. FactoryPMI's security model also enables administrators to fine-tune projects, set user policies and track activities at every client.
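Inductive Automation hasn't published FactoryPMI's internal schema, so the following is only a minimal sketch of the login-group idea described above, using hypothetical table and column names and a throwaway SQLite database in Python.

```python
import sqlite3

# Hypothetical users/groups schema; FactoryPMI's actual database layout
# is not public, and these names are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE login_groups (name TEXT PRIMARY KEY, access_level INTEGER);
    CREATE TABLE users (username TEXT PRIMARY KEY,
                        group_name TEXT REFERENCES login_groups(name));
    INSERT INTO login_groups VALUES ('operators', 1), ('administrators', 3);
    INSERT INTO users VALUES ('pfranzia', 'administrators');
""")

def access_level(username: str) -> int:
    """Return the access level the user's login group specifies, or 0."""
    row = conn.execute(
        "SELECT g.access_level FROM users u "
        "JOIN login_groups g ON u.group_name = g.name "
        "WHERE u.username = ?", (username,)).fetchone()
    return row[0] if row else 0

print(access_level("pfranzia"))  # -> 3: access at the group's level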
"I can be in Napa and see what's going on in Ceres right now," says Paul Franzia, Bronco's engineering manager. "Sometimes I tell the supervisor that there's a problem before he even knows. FactoryPMI has given us insight into our business." As a result, Franzia can log on wherever he is, view whatever screen his supervisors are looking at,and provide instant feedback. Using the software's graph trending, data logging tables and click-to-graph function, Franzia adds that he can easily track and pin-point operational issues to solve problems immediately.
Similarly, Ken Cullum, maintenance manager at the Ceres winery, adds, "Our refrigeration guys used to record the same data in four different places. Now they enter it via the web at any FactoryPMI station."
In addition, though its four main facilities are many miles apart, Bronco needs them to appear on-screen as if they were under the same roof. Fortunately, FactoryPMI's project redirection feature allows that to happen. There are presently six servers running FactoryPMI, including four in Ceres, one in Escalon and one in Napa. Each server has certain projects running on it. However, when a user needs to view a different part of the operation, the software redirects the client to the required project, even if it's on a different server. This "server clustering" method also allows FactoryPMI's servers to be redundant and run the same projects. In the future, Bronco plans to configure the servers in a clustered environment to provide added redundancy.
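The redirection mechanism itself is proprietary, but the concept can be sketched in a few lines: a shared registry maps each project to the server hosting it, and a client requesting any project is handed the right server's address. The server and project names below are hypothetical, not Bronco's actual layout.

```python
# Hypothetical registry mapping projects to the servers hosting them.
PROJECT_REGISTRY = {
    "ceres_fermentation": "http://ceres-scada-1:8080",
    "escalon_crush":      "http://escalon-scada-1:8080",
    "napa_cellars":       "http://napa-scada-1:8080",
}

def client_url(project: str) -> str:
    """Return the URL a client should open for the given project,
    regardless of which server the client first contacted."""
    host = PROJECT_REGISTRY.get(project)
    if host is None:
        raise ValueError(f"unknown project: {project}")
    return f"{host}/client/{project}"

# A client in Ceres asking for a Napa project is redirected transparently,
# so the separate facilities appear to be under one roof.
print(client_url("napa_cellars"))
```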
Bronco's staff also uses FactoryPMI to serve up data to help them manage their tanks and raw materials more efficiently. For example, if a tank's temperature is too high, they're notified and can adjust their procedures. Bronco is also moving to adopt wireless tank gauging, which will be monitored by FactoryPMI.
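As a rough sketch of that notification logic (the limit, tank names and readings below are hypothetical, not Bronco's actual alarm configuration):

```python
# Hypothetical high limit for tank temperature, in degrees Celsius.
TANK_HIGH_LIMIT_C = 15.0

def check_tanks(readings):
    """Return alert messages for any tank reading above the high limit."""
    alerts = []
    for tank, temp_c in readings.items():
        if temp_c > TANK_HIGH_LIMIT_C:
            alerts.append(f"{tank}: {temp_c:.1f} C exceeds {TANK_HIGH_LIMIT_C} C")
    return alerts

# One tank in range, one over limit, using invented readings.
print(check_tanks({"T-101": 14.2, "T-102": 16.8}))
```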
Likewise, during the 24/7 grape harvesting season, known as "crush," FactoryPMI works with the company's ProPak software and its "grower relations" database. ProPak analyzes each load, and FactoryPMI interrogates the ProPak database and compares that information with its own. Because it's crucial for the right truck to dump into the right pit, drivers are only allowed to dump if their documentation matches.
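The match-before-dump check amounts to a cross-database lookup, which can be sketched as below. The schema, truck IDs and pit names are hypothetical; ProPak's real database layout isn't public.

```python
import sqlite3

# Stand-in for the grower-relations (ProPak) database, with an invented
# loads table: one row per analyzed truckload and its assigned pit.
propak = sqlite3.connect(":memory:")
propak.execute("CREATE TABLE loads (truck_id TEXT, grower TEXT, pit TEXT)")
propak.execute("INSERT INTO loads VALUES ('TRK-42', 'Grower A', 'PIT-3')")

def may_dump(truck_id: str, pit_at_gate: str) -> bool:
    """Permit a dump only if the truck's documented pit matches the gate."""
    row = propak.execute(
        "SELECT pit FROM loads WHERE truck_id = ?", (truck_id,)).fetchone()
    return row is not None and row[0] == pit_at_gate

print(may_dump("TRK-42", "PIT-3"))  # True: documentation matches
print(may_dump("TRK-42", "PIT-1"))  # False: wrong pit, dump refused
```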
"It makes me a better manager," says Franzia. "Efficiencies have improved upwards of 30%, productivity targets are hit everyday, and I can be more responsive to the business and to my managers."
Guts of Virtualization
One of the most amazing aspects of the data processing revolution is that computing power has grown so fast that many applications haven't kept pace. As a result, many PCs use only a small fraction of their capacity, and the rest goes largely unused. This is where virtualized computing comes in.
Honeywell Process Solutions (HPS, https://hpsweb.honeywell.com) reports that virtualization can slash the number of PCs needed to perform the same amount of data processing by 75% or more and produce equally huge savings in maintenance and power consumption. This is achieved by breaking the formerly unbreakable bond between the operating system (OS) software and hardware running traditional one-box PCs, and instead enabling one computer to run multiple OSs for multiple users at the same time.
"Users want to reduce the number of PCs in their facilities and their total cost of ownership (TCO), but they can only do it if they don't compromise existing safety, reliability or production," says Paul Hodge, Honeywell's product manager for Experion Infrastructure and HMI. "However, as PCs evolved, they became increasingly inflexible due to the tight coupling between their OSs and underlying hardware, so the challenge for virtualization is to break this coupling between these layers."
Hodge added that virtualization consists of three main families of computing technology that can enable much greater levels of computing flexibility and agility. These include platform virtualization, which extracts the OS from the hardware; application virtualization, which separates the application from the OS; and client virtualization, which extracts the user interface from the OS. "Without platform virtualization, users must run multiple applications on separate OSs in separate boxes, so they end up with very low utilization of their data processing workload," said Hodge. "However, computers have gotten much faster lately, so most applications only use 5% to 10% of their individual PC's resources, and this leaves a lot of those resources and money on the table."
Hodge added that, consequently, virtualization is achieved by placing a thin software layer, called a hypervisor, between the OS and underlying hardware, which enables multiple OSs to run and be supported on one PC box. This hypervisor also includes a "virtual hardware layer" that emulates x86 computing and gives each virtual machine the same operating parts and functions as a regular PC.
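The one-box, many-OSs arrangement Hodge describes can be inspected on a Linux host running a qemu/KVM hypervisor via the libvirt Python bindings. This is only a minimal sketch; it assumes libvirt-python is installed and that the host exposes the standard qemu:///system connection.

```python
import libvirt  # assumes libvirt-python is installed on a libvirt host

# Connect to the local hypervisor and list every guest OS instance
# sharing this single physical machine.
conn = libvirt.open("qemu:///system")
for dom in conn.listAllDomains():
    # info() returns [state, max memory (KiB), memory, vCPUs, CPU time].
    state, max_mem_kib, _, vcpus, _ = dom.info()
    print(f"{dom.name()}: {vcpus} vCPUs, {max_mem_kib // 1024} MiB, "
          f"running={state == libvirt.VIR_DOMAIN_RUNNING}")
conn.close()
```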
"Virtualization also improves site protection because users can 'snapshot' computers back to before problems occurred. It's also much easier to restore virtual machine files," said Hodge. "In fact, if an entire site somehow becomes unavailable, the whole site's virtualized computing workload can be moved from one location to another. Without virtualization, you have a large number of servers that can be hard to manage, interoperability problems, and hardware that's time consuming to procure. Platform virtualization reduces the number of servers, allows better server and client manageability, improves interoperability, but preserves needed isolation in the virtual machines, and increases server and user agility."
Ron Kambach, platform and supervisory applications product manager at Invensys Operations Management (www.iom.invensys.com), explains, "The basic benefits of virtualization include server consolidation with smaller OS footprint and virtualized hardware, and reduced costs by using less space, facilities, hardware, maintenance and power. Virtualization also provides application compatibility by using OS isolation to help run legacy and incompatible systems and applications, and allows centralized management, faster installation and deployment, and greater use of software templates. For example, users can snapshot multiple versions of a virtual machine, so if one goes down, they can just go back to the version from 10 seconds earlier. In fact, users can have a library of different devices and easily set up a virtual network or put together a sandbox of tools to meet the needs of particular applications. To accomplish these functions safely, host servers should always have spare resources about 25% above what the virtual, guest machines require."
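Kambach's 25% headroom guideline turns into a quick capacity check. Using the worst case of the 5% to 10% utilization figure Hodge cited earlier, a back-of-the-envelope calculation (all numbers normalized and illustrative) looks like this:

```python
# How many guests fit on one box if each guest needs up to 10% of a PC's
# resources and the host keeps roughly 25% spare above total guest demand?
host_capacity = 1.00      # one physical server, normalized
guest_demand = 0.10       # worst case of the 5-10% utilization figure
headroom_factor = 1.25    # 25% spare above what the guests require

# Total demand of n guests, with headroom, must fit within the host:
#   n * guest_demand * headroom_factor <= host_capacity
max_guests = int(host_capacity / (guest_demand * headroom_factor))
print(f"{max_guests} guests per host")  # -> 8, consistent with the 75%+
                                        # reduction in physical PCs cited
```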
However, Kambach adds that "Virtualization 2.0" enables more than consolidation. It also permits simpler installation and movement of software apps, lockdown of corporate PC images, better software distribution, backup images of virtual machines for quicker recovery, restacking workloads for much easier, on-the-fly work movement, isolation of hardware differences, and division of functions into smaller virtual servers. In addition, Kambach says that some market predictions for virtualization include the likelihood that the software "hypervisors" that enable it will become commodity items; management solutions will be available for sale from vendors; users will be able to set up either private or public cloud servers that include virtual machines; and their resources will be organized and managed as a "fabric" that includes optimization and lifecycle control.
Friendly Faces on New PCs
One of the perks of high-capacity data processing is that users can make initially alien-looking computing tools look just like familiar instruments and displays. For instance, National Fuel Gas in Williamsville, N.Y. (www.natfuel.com), recently partnered with engineering integrator EN Engineering in Woodridge, Ill. (www.enengineering.com), to upgrade a few of the 40 compressor stations that move natural gas over its 2,877 miles of pipeline, bringing gas to its 728,000 customers in western New York and northwestern Pennsylvania. The upgrade was also needed to help National take advantage of increased development and gas recovery in the local Marcellus Shale region.
The initial project upgraded 12 compressor units at two compressor stations, one in Roystone, Pa., and the other in Independence, N.Y. The Roystone station has eight Ajax compressor units, five headers, six operating configurations, and a storage field of 2.5 billion cubic feet (BCF). The Independence station has four Ingersoll-Rand compressor units, four headers, 10 operating configurations, a gas dehydration unit and a 4.0-BCF storage field. The upgrade's main challenges were to understand and replicate the functionality of the existing controls; integrate new control systems with existing systems; interface new control panels to existing equipment and instrumentation; and prevent disruption of operations during installation (Figure 2).
Figure 2: National Fuel Gas upgraded 12 compressor units at two stations in New York and Pennsylvania with new panels, PLCs, I/O devices, fiber-optic networking and PC-based HMIs that replicate the familiar look and feel of its legacy instruments.
"We used a unitized design concept, and then employed Rockwell Automation's ControlLogix PLCs with Flex I/O, as well as redundant PC-based HMIs with Factory Talk View SE at the station level, and PanelView operator interfaces with Factory Talk View ME at the unit level," reported Jennifer Shaller, National's lead electrical engineer. "We also used a plant-wide, fiber-optic control network with Stratix managed Ethernet switches, put all control functionality in a PLC, hardwired our shutdown circuits, and made sure we followed a Class 1, Division 2 design."
Shaller added that the upgrade has given National's two stations more consistent and reliable control, fully automated compressor operation, more efficient station operations, enhanced data collection, improved diagnostic and troubleshooting capabilities, improved reliability of the control systems, improved mechanical protection of integral compressor units, and opportunities for additional control functions.
"The new compressor controls have all the legacy look and feel that our operators needed, but they no longer have to deal with the stress of continually babysitting them," explained Shaller.