Intelligent Automation Series Part 3: Information Systems

Welcome to the third post in the six-part Intelligent Automation Educational Series. Please let us know if there is anything we can do to help. Read part one of the Intelligent Automation series here; you may want to start with that post to grasp the basics of the series.

Information systems build on the foundation of process control systems by providing the data and tools to better understand the health of a process over its lifetime. This begins with collecting and storing real-time process data that can be analyzed with trending tools, reports, and dashboards. Information systems are the first step toward a fully connected enterprise.

Tracking Process Conditions

The most common approach to implementing information systems begins with a process historian. A historian automatically collects data from process controllers and stores it in a database for future retrieval. Most process historians are packaged with a trending tool used to monitor both real-time and historical process conditions. This helps process engineers and maintenance staff track down the causes of process upsets after they have occurred, and trained operators can use it to identify problems before they occur.
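As a conceptual sketch only, not any specific historian product's implementation, the collection side boils down to polling controller tags on a scan interval and appending timestamped values to a database. The `read_tag()` function below is a hypothetical stand-in for a real driver call (OPC UA, Modbus, or a vendor API):

```python
# Minimal sketch of a historian's collection loop (illustrative only).
import random
import sqlite3
import time
from datetime import datetime, timezone

def read_tag(tag_name: str) -> float:
    """Placeholder for a real driver call (OPC UA, Modbus, vendor API)."""
    return random.uniform(0.0, 100.0)  # simulated value for illustration

def collect(tags: list[str], scan_seconds: float, db_path: str = "historian.db") -> None:
    """Poll each tag on a fixed scan interval and append timestamped samples."""
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS samples (tag TEXT, ts TEXT, value REAL)")
    while True:
        now = datetime.now(timezone.utc).isoformat()
        for tag in tags:
            conn.execute(
                "INSERT INTO samples (tag, ts, value) VALUES (?, ?, ?)",
                (tag, now, read_tag(tag)),
            )
        conn.commit()
        time.sleep(scan_seconds)
```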

The main considerations when implementing a process historian are what data must be collected, meaning the actual data points, and how often each point is stored in the database. Licensing costs for most process historians are based on the number of data points that will be stored in the historian, while computer hardware costs dictate how much data can be stored and for how long. The number of data points depends on the process and what data is required to best understand process conditions; the duration and amount of data stored can be managed through the historian's built-in data compression algorithms. This approach limits the amount of data stored in the database to only what will show changes on a trend, rather than storing every data point on a cyclical basis.

For example, a raw material tank level that changes slowly over time doesn't usually need to be stored as often as the pressure in a pipe, which can spike and potentially shut down equipment. Configuring the historian to read the tank level every 30 seconds, rather than every three seconds, greatly reduces the amount of data collected without losing much information about the condition in the tank at any given time.
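As a rough illustration of what that scan-rate choice means for storage, assuming one stored sample per scan and no compression, the row counts for a single tag work out as follows:

```python
# Back-of-the-envelope storage comparison for one tag (rows per day),
# assuming one stored sample per scan and no compression.
SECONDS_PER_DAY = 24 * 60 * 60

rows_at_3s = SECONDS_PER_DAY // 3    # 28,800 rows/day
rows_at_30s = SECONDS_PER_DAY // 30  # 2,880 rows/day

print(f"3 s scan:  {rows_at_3s:,} rows/day")
print(f"30 s scan: {rows_at_30s:,} rows/day")
print(f"Reduction: {rows_at_3s // rows_at_30s}x fewer rows at 30 s")
```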

Using data compression to require, for example, a 1% change in the value before it is stored will further reduce storage requirements. The pressure reading is the opposite case: collecting it every 30 seconds reduces its value for troubleshooting, because a pressure spike might come and go within that window. If a spike causes a process upset between readings, it may not appear on the trend at all, making it difficult to troubleshoot the cause of the issue. Instead, reading this data every second captures the spikes, while a compression deadband of 0.5% ignores negligible changes under normal operating conditions and keeps storage requirements in check.

Using data compression is not required, and the historian can be configured to store every data point every second. However, this leads to a large amount of data in the database and requires additional hard drive space. It can also degrade performance when analysis tools have to retrieve and display the uncompressed data.
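The following is a minimal sketch of the idea behind deadband (exception) compression, not any particular historian's algorithm; commercial historians typically use more sophisticated schemes such as swinging-door compression:

```python
# Sketch of simple deadband ("exception") compression: keep a new sample only
# when it differs from the last *stored* value by more than a percentage band.
def deadband_filter(samples, deadband_pct: float):
    """Yield (timestamp, value) pairs that exceed the deadband from the last kept value."""
    last_kept = None
    for ts, value in samples:
        if last_kept is None:
            last_kept = value
            yield ts, value
            continue
        # Band is a percentage of the last stored value (one common convention).
        band = abs(last_kept) * deadband_pct / 100.0
        if abs(value - last_kept) > band:
            last_kept = value
            yield ts, value

# Example: a pressure tag scanned every second with a 0.5% deadband.
# Values are simulated: mostly flat, with one spike at t=4.
raw = [(t, 100.0 + (5.0 if t == 4 else 0.01 * t)) for t in range(10)]
kept = list(deadband_filter(raw, deadband_pct=0.5))
print(f"Stored {len(kept)} of {len(raw)} samples")  # the spike is preserved
```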

Turning Data Into Information

Historians are great at automatically collecting data from a process and making it available for future use. They are more limited when working with data generated outside of the control system, such as quality control test data, but they can be coupled with other databases and systems to correlate process conditions with manually collected data. Combining multiple data sources provides additional context, making it possible to understand complex interactions, for example between process conditions and product quality.

This can be accomplished using a lab information management system (LIMS) to collect data from a quality control lab and store it in a database, usually through a web-based interface. Displaying data from the LIMS database alongside historical process data can show, on one trend or report, how process conditions affect product quality at any given time. This can be used to determine ideal process setpoints, leading to more consistent product quality.
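As a rough sketch of how that correlation might be done outside the historian, assuming hypothetical table and column names (`reactor_temp_degF`, `viscosity_cSt`) and illustrative placeholder values, lab results can be joined to the nearest preceding process reading by timestamp:

```python
# Sketch of correlating LIMS quality results with historian process data by time.
# Column names and values are hypothetical; substitute real queries for your site.
import pandas as pd

# Historian trend data: timestamped process values (e.g. reactor temperature).
process = pd.DataFrame({
    "ts": pd.to_datetime(["2022-06-01 08:00", "2022-06-01 09:00", "2022-06-01 10:00"]),
    "reactor_temp_degF": [652.0, 655.5, 649.8],
})

# LIMS results: lab tests run on samples pulled at known times.
lims = pd.DataFrame({
    "sample_ts": pd.to_datetime(["2022-06-01 08:10", "2022-06-01 10:05"]),
    "viscosity_cSt": [4.2, 4.9],
})

# Join each lab result to the most recent process reading at or before the sample time.
combined = pd.merge_asof(
    lims.sort_values("sample_ts"),
    process.sort_values("ts"),
    left_on="sample_ts",
    right_on="ts",
    direction="backward",
)
print(combined[["sample_ts", "reactor_temp_degF", "viscosity_cSt"]])
```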

Another example of integrating production data with other data sources, in use at the Jaxon Energy Hydrotreating Facility in Jackson, Mississippi, is using tank volume data from the raw material and finished product tanks to populate fields in a web-based shipping and receiving ticketing system. This eliminates a source of operator error by automatically populating the forms with starting and ending tank volumes from the historian when material is pumped into or out of the facility. With this data, shipping and receiving manifests can be generated at any time of day without operators manually collecting and recording tank levels during their shift. In addition, quality control data and scanned documents are stored with each shipping and receiving manifest, enabling operations staff to track the source and quality of all material going in or out of the facility without manually correlating data between multiple sources.

Information systems alone can provide useful insight into the health of a process. Data collected by the historian can be used by operators, maintenance, and engineering staff to make better decisions and back up any hypotheses they may have about operating parameters.

Read Acronyms Are Hard: LIMS

Alex Marcy originally wrote this for Oil & Gas Engineering. Updated 6/9/2022.
