Self-contained pressure regulators have been around for a couple of centuries, and became mechanical engineering masterpieces long before Kevin was wearing big-boy pants. They’re an anomaly in the age of digital control, but they still have many applications.
Clever Kevin was lackadaisical about understanding how these regulators work or why they might be the optimum choice for a given application. He likes controlling things from a distributed control system (DCS), so he was delighted to obtain funding to replace one with more contemporary instruments—a “smart” pressure transmitter and digital positioner controlling a sliding-stem control valve. But after installation, the system tripped every time the downstream burner management system’s (BMS) valves were opened to light the main burner.
Sometime in the ’70s or ’80s, the stewards of continuous process control systems accepted a new paradigm—their systems could be designed to consume measurements and produce outputs at discrete intervals. The pervasive algorithm for continuous control—proportional, integral and derivative (PID)—was adapted to mimic the continuous mechanisms of pneumatic and analog electronics. As our industries deployed DCSs, we accepted the compromises brought by sampled data because (usually) it was adequate for our relatively slow processes. The vast new opportunities to use digital computational tools became essential to keep pace with our peers.
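That sampled-data adaptation of PID can be sketched in a few lines. The positional form below, with an assumed 500 ms scan and illustrative tuning, is a generic example rather than any particular DCS vendor’s algorithm:

```python
# A minimal sketch of a sampled-data PID loop of the kind a DCS executes
# at a fixed scan interval. Tuning values and the 500 ms scan are assumed.

def make_pid(kp, ki, kd, dt):
    """Return a positional-form PID called once per scan of dt seconds."""
    state = {"integral": 0.0, "prev_error": None}

    def step(setpoint, measurement):
        error = setpoint - measurement
        state["integral"] += error * dt                      # rectangular integration
        if state["prev_error"] is None:
            derivative = 0.0                                 # no history on first scan
        else:
            derivative = (error - state["prev_error"]) / dt  # backward difference
        state["prev_error"] = error
        return kp * error + ki * state["integral"] + kd * derivative

    return step

pid = make_pid(kp=2.0, ki=0.5, kd=0.1, dt=0.5)   # 500 ms scan (assumed)
output = pid(setpoint=50.0, measurement=48.0)    # first scan: P and I terms only
```

Between scans, the output simply holds its last value—the compromise the paragraph above describes.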
In those days, students of control studied the mathematics of Laplace, Fourier, Nyquist and Bode. We abhorred “zeros” in the right-half plane of root-locus diagrams. Apologies if these terms cause some distress in the lower tract.
Hendrik Wade Bode and others at Bell Labs were pioneers in feedback control of linear systems. If you were around at a certain time in the past, you may have spent time in the lab with an analog computer. Students would analyze problems by connecting jumpers (big ones with banana plugs), adjusting potentiometers, and observing outputs on an oscilloscope. Behind the patch-panel, op-amps and various other filters and devices processed the signals to simulate a problem. While our vernacular often contrasts analog with digital, our forerunners used their computational machines as analogous to the physical system—an “analog twin,” if you’ll pardon the redundancy. But it was an arduous, painstaking process.
The original meaning of analog is preserved, perhaps, when we speak of digital-to-analog (D-to-A) or analog-to-digital (A-to-D). Arguably, 4-20 mA is an analog for another signal or measurement. All sensors produce an analog of the physical property they’re measuring—the Bourdon-tube pressure gauge produces motion, driving an indicator intended to be analogous to the actual process pressure. The dial thermometer, similarly, produces mechanical displacement analogous to temperature, while the thermocouple produces a small differential voltage that’s also analogous.
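That linear analogy is exactly what a 4-20 mA scaling computes. A minimal sketch—the 0-100 psig engineering range here is an assumption for illustration:

```python
# Map a 4-20 mA loop current to engineering units by linear interpolation.
# The default 0-100 psig range is an illustrative assumption.

def scale_current_to_units(ma, lo=4.0, hi=20.0, eu_lo=0.0, eu_hi=100.0):
    """Convert a loop current (mA) to engineering units (e.g., psig)."""
    return eu_lo + (ma - lo) / (hi - lo) * (eu_hi - eu_lo)

mid_scale = scale_current_to_units(12.0)  # 12 mA is mid-scale
```

The live-zero at 4 mA is part of the analogy’s appeal: a dead wire reads 0 mA and is immediately distinguishable from a true zero reading.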
Consider any measurement today: by the time it reaches us, we’ve converted and scaled the analog signal to a numerical representation. Before the microprocessor, it was commonplace for our analog controls to interact directly with the process. A displacer in a chamber was buoyed by the liquid in a vessel, and changes were translated to a torque that exerted force on a flapper-nozzle, ultimately modulating a pneumatic output to a valve. The physical property of interest acted uninterrupted on other physical components, with no intervening translation to numerical values. The proliferation of bar graphs, trends and graphical gauges in our human-machine interface (HMI) is testimony to the idea that we drew deeper meaning from the more connected analogs than from digitized numbers.
After decades of digitization, does it behoove us to reconnect with the fleeting realities of the pervasively continuous world? In the Wired article, “The unbelievable zombie comeback of analog computing”, contributing editor Charles Platt quotes MIT researcher Sara Achour as saying, “Analog computation makes a lot of sense when you’re interfacing with something that’s inherently analog.” It’s possible that an analog computer-on-a-chip may solve some of generative AI’s power challenges.
Kevin had to scrap his DCS-based control of the gas supply to the burners. When the regulator was reinstalled, the main burners lit every time without tripping the interlocks. The DCS simply couldn’t sample fast enough to react to the sudden demand for gas when the trailing BMS block valve opened. Digital control, so far, can’t match the intimate, reality-connected regulator first envisioned by William Fisher in the 1880s. As Platt mused, digitizing reality may not always be the most sensible choice.
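The timing problem can be put in rough numbers. The scan interval and transient duration below are assumptions for illustration—the story gives neither—but the arithmetic holds for any sampled loop slower than the event it must catch:

```python
# Back-of-the-envelope: why a sampled loop can miss a fast transient.
# Both numbers are assumptions for illustration, not values from the story.

scan_period = 0.25   # s: assumed DCS control-loop execution interval
transient   = 0.10   # s: assumed duration of the pressure dip when the
                     #    BMS block valve snaps open

# Worst case, the dip begins just after a scan completes, so the loop
# doesn't even observe it until the next scan -- after the dip has
# already tripped the low-pressure interlock. A self-contained regulator
# has no scan; the falling pressure acts on its diaphragm continuously.
sees_it_in_time = transient > scan_period
```

With these assumed numbers, `sees_it_in_time` is false: the whole event fits inside one scan interval, which is the failure mode Kevin kept hitting.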