Data loggers require thoughtful engineering to minimize confusion and loss of information
The boiler suddenly shut down at 3:45 a.m. on Saturday, and the crew on shift was befuddled. All had been quiet and routine up to the unexpected shutdown, or “trip” (a term adopted from the vernacular of electrical circuit breakers). When the day supervisor arrived to assist in the restart, she wasn’t satisfied with “something got us.” Before relighting the boiler, she wanted to be certain the root cause was identified and addressed.
While the burner management system (BMS) had an annunciator capable of indicating the “first out” alarm among the many possible interlocks, it had been reset in the ensuing chaos, and no one on the night shift was trained to note, or could recall, which alarm, if any, had been flashing to signal the culprit. Nor was her site equipped with a high-resolution sequence-of-events (SOE) recorder, which is typically capable of sub-second resolution of alarms and events. So they began staring at trends leading up to the trip. Flame detectors, fuel pressure, steam drum level and air flow all participated in the boiler interlocks per National Fire Protection Association (NFPA) guidelines. With all recorded measurements going to their tripped state at nearly the same instant, it was a challenge to ascertain which one, if any, caused the shutdown. Was it a shutdown for cause, or a spurious trip caused by an errant measurement?
Process engineers and management can glean much from trends; prior to modern computerized historians, we were pulling reams of paper out of strip chart recorders. If you’ve ever filled the ink reservoirs of an old recorder, you remember how difficult it was to compare events across numerous variables.
Digital recorders have been around for a few decades, as have microprocessor-based “paper” recorders. Today, paperless recorders are available with hundreds of I/O, upwards of 20 virtual pens, gigabytes of memory, PID control and sampling rates as fast as one microsecond. Those sampling rates may be less impressive or useful if the data is coming via a digital link like Modbus, Ethernet or wireless: most links are subject to variable latencies, so the true timestamp of an event or deviation might not correlate with I/O hardwired to the recorder.
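To see how much a polled link can skew apparent event times, consider a minimal sketch, assuming a fixed 500 ms scan cycle and 200 ms transport delay (both figures invented for illustration): a recorder that timestamps polled values on arrival will place an event at the first scan after it actually occurred.

```python
import math

# Sketch of timestamp skew on a polled link; poll period and link delay
# are assumed values for illustration, not any particular product's specs.
POLL_PERIOD = 0.5   # s: scan cycle of the digital link (assumed)
LINK_DELAY = 0.2    # s: transport latency, treated as fixed for simplicity

def apparent_time(event_time, poll_phase=0.0):
    """A polled value is first seen at the next scan after the event,
    then timestamped on arrival after the link delay."""
    next_poll = (math.ceil((event_time - poll_phase) / POLL_PERIOD)
                 * POLL_PERIOD + poll_phase)
    return next_poll + LINK_DELAY

# A contact opens at t = 100.00 s. A hardwired channel stamps it at
# essentially 100.00; the polled copy of the same event is stamped
# half a second later:
print(f"{apparent_time(100.00, poll_phase=0.3):.2f}")  # -> 100.50
```

Half a second is ample time for genuinely downstream effects on hardwired channels to be stamped ahead of their polled cause.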
A cautionary corollary
Woven into their impressive capabilities is a cautionary corollary: the more points you monitor and/or the faster the sampling/logging rate, the quicker onboard memory is consumed. A 500 MB SD card might not last a long weekend if hundreds of points are monitored at sub-second intervals. As modern recorders include options for Ethernet, Profibus or OPC interconnection with host systems, the cleverer among us might be able to script routine backups that prevent data loss. However, this creates a storage/archiving challenge in the host, doesn’t it?
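Back-of-the-envelope arithmetic bears this out. Here's a minimal sketch, assuming roughly 8 bytes per stored sample; the actual figure will vary by product and storage format:

```python
# Rough storage-budget check for a paperless recorder. The 8 bytes per
# sample (timestamp + value) is an assumption; real formats vary.
BYTES_PER_SAMPLE = 8

def days_until_full(card_mb: float, points: int, samples_per_sec: float) -> float:
    """Days before a memory card of `card_mb` megabytes fills up."""
    bytes_per_day = points * samples_per_sec * BYTES_PER_SAMPLE * 86_400
    return card_mb * 1_000_000 / bytes_per_day

# 200 points logged 10 times per second on a 500 MB card:
print(f"{days_until_full(500, 200, 10):.1f} days")  # -> 0.4 days (~9 hours)
```

At those rates, the card is full before the first shift change, never mind a long weekend.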
Host-based historians have been around for decades as well, and their decades-old heritage has lingered into present-day defaults for sampling and data compression, as was explored in Control's September 2019 article, “When real-time data isn't that real.” Like the paperless recorder, compromises are advised to prevent overloading historians. During detailed engineering, it’s common for the configuration team to seek a default setting: who has the time or brainpower at that stage of a project to individually set historian sampling rates and compression for thousands of points? What usually happens is that we start up with global defaults and don’t discover the sampling rates are too slow until we need the history they were supposed to capture. When it comes to discerning cause-and-effect between highly correlated variables, one can be led astray: one variable might appear to precede another only because no samples were collected from either in the 10 seconds between archived data points.
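The effect is easy to reproduce. In the minimal sketch below (synthetic data, with scan offsets chosen for illustration), one variable really does lead the other by 2 seconds, yet the archive says the opposite, because the historian stores each point only once every 10 seconds on schedules that aren't phase-aligned:

```python
import numpy as np

# Two correlated signals: A actually leads, stepping 2 s before B.
t = np.arange(0, 120, 0.1)          # "true" process at 100 ms resolution
a = (t >= 60).astype(float)         # A steps at t = 60 s
b = (t >= 62).astype(float)         # B steps at t = 62 s

def archived(sig, offset, period=100):
    """Keep one stored value per 10 s scan period (100 samples at 100 ms),
    starting at `offset`; scan schedules are rarely aligned across points."""
    return t[offset::period], sig[offset::period]

ta, aa = archived(a, 90)            # A happens to be scanned at 9, 19, ... s
tb, bb = archived(b, 30)            # B happens to be scanned at 3, 13, ... s

print(f"A first archived high at t = {ta[aa > 0][0]:.0f} s")  # -> 69 s
print(f"B first archived high at t = {tb[bb > 0][0]:.0f} s")  # -> 63 s
```

The archive shows B changing 6 seconds before A, even though A led in reality, and deadband compression that discards further samples only compounds the ambiguity.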
Whether we’re operating a small package boiler or a large chemical complex, data loggers from paper recorders to DCS historians deserve thoughtful engineering to ensure minimal confusion and loss of information. Let’s spend some time evaluating and optimizing our sampling rates and storage consumption before our operations clients have a need for the history.