In beer brewing, temperature control is critical because even slight deviations can affect quality. After the ingredients are mashed, the resulting sweet liquid, called wort, is boiled, chilled and sent to fermentation tanks. There its temperature must be precisely controlled, so the wort doesn't get too warm and ruin the batch.
"There can be many reasons for differences between set points and actual performance," says David Lewis, technical services manager at Sierra Nevada Brewing Co. in Chico, Calif. "They can be due to a sticky solenoid valve, one or more failing resistance temperature detectors (RTDs), brew house scheduling putting excess glycol flow into one cellar, issues with chillers or other reasons. We discover these in the process of finding data to optimize production."
Recently, Sierra Nevada set out to achieve proactive identification of failing RTDs and improve its data visualization tools. Lewis reports that Sierra designed and built a laboratory information management system (LIMS) and data acquisition (DAQ) system that included Ignition software. Lewis presented "First steps to predictive analytics at Sierra Nevada Brewing" during Inductive Automation's Ignition Community Conference 2016 on Sept. 19-21 in Folsom, Calif.
Taking tank temperatures
At 35 years old, Sierra Nevada is one of the oldest craft breweries in the U.S. It has about 100 tanks at its plant in Chico, Calif., including two main brewing cellars, each with 10 to 14 beer fermentation tanks. Each of these 15-foot-wide tanks has a 1-foot-deep well in its side near the bottom that holds an RTD, a ceramic sensor with a metal coil. The tanks also have layered temperature zones, so two or three RTDs are located high and low on each tank, which can generate separate data streams for the same three- or four-week batch. Temperature is controlled by the solenoids and glycol flowing to jackets on the tanks, as well as an overall chiller plant. All of these devices can have issues that could influence fermentation temperature and beer quality.
"The tanks in one cellar are tightly grouped, while those in the other cellar are more spread out," says Lewis. "So we wanted to be able to show, at a glance, that either there was no problem or there was an item we needed to look at more closely. We also wanted bar graphs showing the number of batches per year per tank, as well as the number of out-of-spec incidents. We also wanted to drill into the data about the last several batches, so we could check the details of several out-of-spec incidents."
To work toward its predictive analytics goal, Lewis reports that Sierra brought in Inductive Automation's Ignition SCADA software, which it first used for data acquisition; employed a transaction device; and moved data into tables for 200-300 tanks, brew kettles and supporting devices. "We capture data every five minutes, and we already had about 10 years' worth of information in our database," says Lewis. "Once we began using Ignition, we realized we could flesh out our data, and link it with our MES."
Organizing information
This doesn't mean there weren't some growing pains, however. "In early alert attempts, we had to determine which RTD on a tank was indicating that it was beginning to fail, such as when the chiller was letting it run on," explains Lewis. "We also had to figure out which data tails to exclude, though this might mean missing some stuck solenoids. We also had to avoid email overload, because the operators' inboxes were filling up, which could numb them or eventually lead them to ignore messages.
"We had to step back, prioritize our data, and try to make it more proactive. We needed to differentiate between a tip that something was about to happen and something that wasn't going to happen right now. We also needed an appropriate context in which we could see everything together, so we'd know when operations that were supposed to be happening were happening, instead of looking tank by tank and batch by batch. This meant bringing in much more data, but then separating useful signals from noise."
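One common way to separate a real tip-off from momentary noise, and to keep operator inboxes from flooding, is to require a deviation to persist for several consecutive readings before an alert fires. The article doesn't describe Sierra Nevada's actual logic; this is a minimal illustrative sketch, with invented names and data:

```python
# Hypothetical sketch: suppress nuisance alerts by requiring an out-of-spec
# condition to persist for several consecutive five-minute readings before
# notifying operators. Names and data are illustrative only.

ALERT_AFTER = 3  # consecutive out-of-spec readings before an alert fires

def filter_alerts(readings, setpoint, tolerance):
    """Yield the index of each reading that should trigger an alert."""
    consecutive = 0
    for i, temp in enumerate(readings):
        if abs(temp - setpoint) > tolerance:
            consecutive += 1
            if consecutive == ALERT_AFTER:  # fire once per sustained excursion
                yield i
        else:
            consecutive = 0

# A brief spike (index 2) is ignored; the sustained excursion alerts once.
temps = [66.0, 66.1, 68.5, 66.0, 68.2, 68.4, 68.3, 68.5, 66.1]
print(list(filter_alerts(temps, setpoint=66.0, tolerance=1.0)))  # → [6]
```

Tuning the persistence window trades alert latency against nuisance volume, which is the balance Lewis describes.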
Because data is captured at five-minute intervals for the brewery's more than 100 tanks, Lewis reports this generates roughly 1,000 database rows per tank per batch. At about 2,000 batches annually, Sierra's brewing operations produce a total of 2 million database rows per year. To access this data and begin to improve decisions, he adds that he and his colleagues are using Dynamic SQL to build queries for their database. Dynamic SQL is a programming method that lets users build SQL statements as their operations and applications are running, which allows them to create more flexible applications because the full text of a SQL statement may still be unknown at compile time. Lewis adds that Sierra also uses Tableau data visualization software to view database results, which are displayed in conjunction with Ignition software.
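The Dynamic SQL idea can be sketched briefly: the WHERE clause for each batch is assembled at run time from that batch's start and stop timestamps, which aren't known when the application is written. The table, columns and data below are invented for illustration, not Sierra Nevada's schema:

```python
import sqlite3

# Hypothetical sketch of Dynamic SQL: the per-batch query is assembled at
# run time from each batch's start/stop timestamps. Schema is invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tank_temps (tank TEXT, ts INTEGER, temp REAL)")
conn.executemany(
    "INSERT INTO tank_temps VALUES (?, ?, ?)",
    [("FV-01", t, 66.0 + 0.1 * t) for t in range(10)],
)

def batch_query(tank, start_ts, stop_ts):
    # The SQL text is chosen while the application runs; placeholders keep
    # the dynamically supplied values safely parameterized.
    sql = "SELECT ts, temp FROM tank_temps WHERE tank = ? AND ts BETWEEN ? AND ?"
    return conn.execute(sql, (tank, start_ts, stop_ts)).fetchall()

rows = batch_query("FV-01", 2, 5)
print(len(rows))  # → 4 readings inside this batch's window
```

Using bound parameters rather than string concatenation keeps the run-time flexibility of Dynamic SQL without opening the query to injection.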
"Our MES is located on one server and batch data is on the DAQ server. This means critical data was on different servers, but our initial cross-server queries weren't working, and replicating data from one server to another was too cumbersome," says Lewis. "We needed data from 2,000 batches, each with its own start and stop times, so we ran a query string using Dynamic SQL, joined one with another, did it 2,000 times, and it crashed. So, we threw a Hail Mary, and cut and pasted the 2,000 queries into Tableau, and five minutes later, we got the 2 million rows we needed."
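The pattern of stitching many per-batch queries into one statement, rather than executing and joining them one at a time, can be shown with UNION ALL. This is a hedged sketch with an invented schema and batch windows, not Sierra Nevada's production code:

```python
import sqlite3

# Hypothetical illustration: combine per-batch SELECTs into one UNION ALL
# statement instead of running 2,000 queries serially. Data is invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tank_temps (batch TEXT, ts INTEGER, temp REAL)")
conn.executemany(
    "INSERT INTO tank_temps VALUES (?, ?, ?)",
    [(f"B{n}", t, 66.0) for n in range(3) for t in range(5)],
)

batches = {"B0": (0, 2), "B1": (1, 3), "B2": (2, 4)}  # batch -> (start, stop)

# Build one combined statement from the per-batch time windows.
parts, params = [], []
for batch, (start, stop) in batches.items():
    parts.append("SELECT batch, ts, temp FROM tank_temps "
                 "WHERE batch = ? AND ts BETWEEN ? AND ?")
    params.extend([batch, start, stop])
sql = " UNION ALL ".join(parts)

rows = conn.execute(sql, params).fetchall()
print(len(rows))  # → 3 batches x 3 readings each = 9 rows
```

A single compound statement lets the database engine plan and fetch the whole result set at once, which is roughly why pasting the combined queries into Tableau returned the 2 million rows so quickly.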
Lewis adds that all the displays for the newly enabled database were built with Ignition, which lets users click on each batch and see its profile. "Then we can use known and previous failure patterns to better determine when the next RTD is going to fail," adds Lewis. "We can see drift and behavioral changes in the graph for a batch, and fix problems before they become failures."
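One simple way to turn "drift in the graph" into an automatic flag is to compare a sensor's deviation from a reference profile early in the batch against its deviation late in the batch. The article doesn't specify Sierra Nevada's method; this is an illustrative sketch with invented thresholds and data:

```python
# Hypothetical sketch of flagging RTD drift: a sensor whose deviation from a
# reference profile grows steadily over a batch, rather than fluctuating, is
# a candidate for replacement. Thresholds and data are invented.

def mean(xs):
    return sum(xs) / len(xs)

def is_drifting(readings, reference, window=3, threshold=0.5):
    """Flag drift when the mean deviation over the latest `window` readings
    exceeds the mean deviation over the earliest `window` by `threshold`."""
    devs = [r - ref for r, ref in zip(readings, reference)]
    return mean(devs[-window:]) - mean(devs[:window]) > threshold

reference = [66.0] * 8                                            # setpoint profile
healthy   = [66.1, 65.9, 66.0, 66.1, 65.9, 66.0, 66.1, 66.0]      # noise only
drifting  = [66.0, 66.1, 66.2, 66.4, 66.6, 66.9, 67.2, 67.5]      # steady climb
print(is_drifting(healthy, reference), is_drifting(drifting, reference))  # → False True
```

Run per batch against the known temperature profile, a check like this surfaces the "drift and behavioral changes" Lewis describes before the sensor fails outright.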