Sensors, and by extension their associated input/output (I/O) cards, are often overlooked but critical parts of any control loop. Like all parts of control systems, the capabilities of I/O systems continue to evolve, driven by enhanced functionality and by the Ethernet/packet-based communications now used to connect a control system's nodes.
Several DCS manufacturers offer distributed I/O, claiming it increases design flexibility and saves cabling costs by supporting installation of I/O "in the field," or at least closer to field devices. PLC suppliers have the same idea, offering remote I/O that can be installed in a cabinet close to field devices and communicate back to the controller using Internet protocol (IP)-based communications. Other control systems reduce their hardware dependence by making I/O software-configurable.
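To make the IP-based remote I/O idea concrete, here is a minimal Python sketch that polls a remote rack over Modbus/TCP, one common IP-based protocol for this job. The IP address, unit ID, and register map are hypothetical stand-ins; a real installation would use the values from the I/O vendor's documentation.

```python
import socket
import struct

# Hypothetical remote I/O drop on the plant network
RACK_IP = "192.168.1.50"
MODBUS_PORT = 502

def read_holding_registers(ip, start_addr, count, unit_id=1):
    """Issue Modbus function code 0x03 and return the register values."""
    # PDU: function code, starting address, quantity of registers
    pdu = struct.pack(">BHH", 0x03, start_addr, count)
    # MBAP header: transaction ID, protocol ID (0), length, unit ID
    mbap = struct.pack(">HHHB", 1, 0, len(pdu) + 1, unit_id)
    with socket.create_connection((ip, MODBUS_PORT), timeout=2.0) as sock:
        sock.sendall(mbap + pdu)
        resp = sock.recv(260)
    # Response: 7-byte MBAP, function code, byte count, then register data
    byte_count = resp[8]
    return struct.unpack(f">{byte_count // 2}H", resp[9:9 + byte_count])

# Example: poll the first four analog input registers
values = read_holding_registers(RACK_IP, start_addr=0, count=4)
print(values)
```

The point is that once the last leg is IP, the "controller-to-I/O" link is just another network conversation, which is what makes in-the-field installation practical.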
Intelligent terminal manufacturers also offer slice I/O, in which each terminal block sits on a backplane connected to either a PLC or a communications card. This is similar to the PLC model, but with the advantage of supporting almost any protocol, the flexibility of buying only the I/O the user requires, and a smaller footprint than DCS or PLC options. This option appears closest to the Open Process Automation Forum's (OPAF) distributed control node (DCN) concept.
Are you detecting a trend here? I know I won't miss the old approach: more than one project I started was quickly cancelled once we discovered there were no spare homerun cables. Of course, those projects were mostly in remote locations, such as tank farms, where running several hundred meters of cable was a deal breaker. There are quite a few alternatives now, including wireless systems, which is another nail in the coffin of the multi-conductor homerun cable.
All of the above still requires local, normally UPS-backed, power. Fortunately, power-supply manufacturers offer a range of rail-mounted redundant power supplies with local battery backup suitable for installation in Class 1 Division 2 (Zone 2) environments. All users really need is field power from two separate buses (redundant power supplies fed from the same local field circuit still share a single point of failure) and an optical-fiber connection back to home base.
Ethernet-Advanced Physical Layer (Ethernet-APL) can be another step change in how we connect to field devices. APL can supply power and Ethernet signals over one twisted-pair cable. One cable goes to each cabinet, or, if redundancy is needed, two cables are used, or maybe a ring, which is really just two cables taking different paths.
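Because an APL field switch is just another Ethernet node, even checking that both redundant paths are alive becomes ordinary network code. The sketch below assumes two field switches, one per path, with hypothetical addresses and an assumed HTTPS management port; a real plant would fold this into its existing monitoring system.

```python
import socket

# Hypothetical addresses for two redundant APL field switches,
# one on each cable path back to home base.
SWITCHES = {"path_A": "10.0.10.11", "path_B": "10.0.20.11"}
MGMT_PORT = 443  # assumed HTTPS management port

def switch_reachable(ip, port=MGMT_PORT, timeout=1.0):
    """Return True if a TCP connection to the switch succeeds."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

status = {name: switch_reachable(ip) for name, ip in SWITCHES.items()}
print(status)
if not all(status.values()):
    # One leg down: traffic still flows on the other, but redundancy is lost.
    print("WARNING: redundant APL path degraded:",
          [name for name, up in status.items() if not up])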
One fortuitous aspect of Relcom's legacy fieldbus Megablock was that it was the same length as the equivalent number of terminal blocks (positive, negative, ground) it replaced: four fieldbus devices fit in the same space on a terminal strip as 12 terminal blocks. In my project, I also had to change my field devices to Foundation Fieldbus, which wasn't trivial. However, it would have been easier if I could have used an APL-connected gateway that converted my protocol to its packet-based equivalent (i.e., HART to HART-IP), as sketched below. Even so, it wouldn't have been much harder for the gateway to also convert to a different protocol such as PROFINET or EtherNet/IP.
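To show what that protocol conversion amounts to, here is a minimal Python sketch of a gateway wrapping a legacy HART command for HART-IP transport (TCP/UDP port 5094). The header layout follows the published HART-IP framing, but the sequence number and poll address are illustrative, and a real gateway would also handle session initiation and retries.

```python
import struct

HARTIP_PORT = 5094

def hartip_frame(hart_pdu: bytes, seq: int) -> bytes:
    """Wrap a token-passing HART PDU in a HART-IP request header."""
    version, msg_type, msg_id, status = 1, 0, 3, 0  # request, token-passing PDU
    byte_count = 8 + len(hart_pdu)  # total length, header plus payload
    header = struct.pack(">BBBBHH", version, msg_type, msg_id, status,
                         seq, byte_count)
    return header + hart_pdu

def hart_cmd0_short_frame(poll_addr: int = 0) -> bytes:
    """Build HART command 0 (Read Unique Identifier), short-frame address."""
    frame = bytes([0x02,              # delimiter: STX, short frame
                   0x80 | poll_addr,  # primary master, poll address
                   0x00,              # command 0
                   0x00])             # no data bytes
    checksum = 0
    for b in frame:
        checksum ^= b                 # longitudinal parity over the frame
    return frame + bytes([checksum])

packet = hartip_frame(hart_cmd0_short_frame(), seq=1)
print(packet.hex())
```

The legacy PDU rides inside the packet unchanged, which is why the same gateway hardware can just as easily re-wrap the data for another packet-based protocol.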
I/O systems have come a long way from being terminal blocks or dedicated cards in central control rooms. They continue to evolve, driven by market demands for higher data concentrations, enhanced capabilities, and more flexibility at lower cost.