Ignition Cloud: AWS, Kinesis, and MQTT
In manufacturing, technology is often a double-edged sword.
While many new and amazing tools are available, manufacturing companies need systems that can run for at least 5-10 years. Manufacturing is often a few years behind tech companies in rolling out new technology, because 80% of what exists today won't be relevant or supported in 3-5 years.
When manufacturing companies decide to implement new systems, they tend to choose ones with multiple interfaces for getting the most value out of the technology. While eventually you can integrate everything into one interface, we prefer to map out design standards ahead of time to save cost, validation and testing time, and overall stress once you are using your tools.
As the world moves towards Cloud-Based SCADA solutions like Ignition Cloud Edition, and technology hosted on AWS, Azure, and other cloud providers, it’s more important than ever to start architecting systems up front for openness and connectedness.
MQTT, Kinesis, and Getting Ignition Data Into AWS
Typically, industrial data flows from Ignition into AWS through tools like the AWS IoT Greengrass edge service (converting to MQTT where needed), then into AWS over MQTT through AWS IoT Core. Tools such as Cirrus Link’s AWS Injector Module can also send data from Ignition directly into Kinesis Data Streams or Firehose.
Once your data is in AWS, it will usually pump through the various services using Kinesis. Despite how it is sometimes described, Kinesis isn’t a communication protocol in the industrial sense—at its core, it’s a managed streaming service that buffers and formats data so it can efficiently pass large amounts of information to the services that need it.
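To make that concrete, a record written to a Kinesis Data Stream is just a binary payload plus a partition key. The sketch below shows what an injector (or your own script) effectively does, using the boto3 SDK; the stream name, tag path, and JSON payload shape are assumptions for illustration:

```python
import json
import time

def build_record(tag_path, value, timestamp=None):
    """Package a tag sample as a Kinesis record: JSON bytes plus a
    partition key (using the tag path keeps one tag's samples in order)."""
    payload = {
        "tagPath": tag_path,
        "value": value,
        "timestamp": timestamp or int(time.time() * 1000),
    }
    return {"Data": json.dumps(payload).encode("utf-8"), "PartitionKey": tag_path}

def send_record(stream_name, record):
    """Write one record to the stream (requires AWS credentials)."""
    import boto3  # deferred so the pure helper above works offline
    client = boto3.client("kinesis")
    return client.put_record(StreamName=stream_name, **record)

# Example (would call AWS if credentials are configured):
# send_record("ignition-tag-stream", build_record("Line1/Temperature", 72.4))
```

The partition key choice matters: records with the same key land on the same shard, preserving per-tag ordering.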
This process can be as simple as storing historical data in an S3 Bucket, or as involved as feeding it into machine learning tools to perform predictive maintenance. Kinesis also has an option for streaming video data, enabling complex robotic operations, quality detection systems, and real-time equipment monitoring.
From an automation perspective, the major difference is that Kinesis is used mainly in the internals of AWS, not for process control, where you would see protocols like Modbus, OPC UA, EtherNet/IP, and MQTT.
Working with Kinesis Data in AWS
Kinesis provides a mechanism to ingest data into the system with Kinesis Data Streams. At a high level, Kinesis Data Streams are roughly similar to MQTT brokers—you can easily read data from them once the data is sent to the Data Stream.
Kinesis Data Streams are designed to provide short-term data storage within the stream itself. This means you can have up to a week of historical data immediately accessible at any time.
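Reading those retained records back is straightforward. The sketch below separates the pure decoding step from the boto3 shard-iterator loop, which requires AWS credentials; the stream name and single-shard assumption are for illustration only:

```python
import json

def decode_records(records):
    """Turn raw Kinesis records (each with a binary 'Data' field)
    back into Python dicts."""
    return [json.loads(r["Data"].decode("utf-8")) for r in records]

def read_stream(stream_name, shard_id="shardId-000000000000"):
    """Poll one shard from the oldest retained record (requires AWS)."""
    import boto3
    client = boto3.client("kinesis")
    iterator = client.get_shard_iterator(
        StreamName=stream_name,
        ShardId=shard_id,
        ShardIteratorType="TRIM_HORIZON",  # start at the oldest retained record
    )["ShardIterator"]
    response = client.get_records(ShardIterator=iterator, Limit=100)
    return decode_records(response["Records"])
```

A production consumer would loop on the `NextShardIterator` the response returns, and handle multiple shards; this is just the shape of the API.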
Kinesis also has a data transfer mechanism called Kinesis Firehose, which can move large amounts of data into Amazon S3 buckets, data warehouses like Redshift, and data analysis tools like Splunk and Elasticsearch.
Kinesis Firehoses are not designed to be read directly by other tools; instead, they deliver data to specific destinations for further processing. Depending on which tools you want to use to extract value from your data, both Kinesis Data Streams and Firehoses have applications in the manufacturing industry, although they require a different mindset to be as valuable as possible.
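Writing to a Firehose delivery stream is fire-and-forget: you push batches in, and the destination (S3, Redshift, and so on) receives them. A sketch follows, with the delivery stream name as an assumption; since Firehose's put_record_batch accepts at most 500 records per call, a small chunking helper is useful:

```python
import json

def chunk(records, size=500):
    """Split records into batches no larger than the Firehose
    put_record_batch limit of 500 records per call."""
    return [records[i:i + size] for i in range(0, len(records), size)]

def send_to_firehose(delivery_stream, samples):
    """Deliver a list of dicts to a Firehose stream (requires AWS).
    Newline-delimited JSON is a common format for S3 destinations."""
    import boto3
    client = boto3.client("firehose")
    records = [{"Data": (json.dumps(s) + "\n").encode("utf-8")} for s in samples]
    for batch in chunk(records):
        client.put_record_batch(DeliveryStreamName=delivery_stream, Records=batch)
```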
While Kinesis is a powerful tool, and MQTT has done amazing things for moving manufacturing data around, they are designed to accomplish different things. That said, they must be integrated at various levels of the technology stack to get the most value from the tools and technology available right now.
Integrating Data From AWS with Ignition
At Corso Systems, we’re often asked to consolidate multiple systems or tools into a common interface.
In our world, this common interface is Ignition. Ignition can talk to nearly every operations technology: databases, ERP, quality control, warehouse management, training systems, HR systems, and all manner of process control equipment on the plant floor.
On AWS, where Kinesis is king and using tools inside AWS is the preferred operating methodology, we find ourselves directly at odds with the idea of using a single pane of glass to run a business with Ignition.
To remedy the situation, we need to get data out of AWS and back into Ignition. We can attach Lambda Functions to the Kinesis Data Stream to reshape the data, then publish it to Ignition via MQTT through AWS IoT Core.
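A Lambda function attached to the Data Stream receives batches of base64-encoded records and can republish them through AWS IoT Core. The handler below is a minimal sketch; the topic prefix and payload shape are assumptions, and the boto3 `iot-data` client is how IoT Core's MQTT publish is reached from inside Lambda:

```python
import base64
import json

TOPIC_PREFIX = "ignition/from-aws/"  # assumption: topic namespace Ignition subscribes to

def records_to_messages(event):
    """Decode the Kinesis records in a Lambda event into
    (topic, payload) pairs, keyed by the tag path in each record."""
    messages = []
    for record in event["Records"]:
        data = json.loads(base64.b64decode(record["kinesis"]["data"]))
        messages.append((TOPIC_PREFIX + data["tagPath"], json.dumps(data)))
    return messages

def handler(event, context):
    """Lambda entry point: republish each record over MQTT via IoT Core."""
    import boto3
    iot = boto3.client("iot-data")
    for topic, payload in records_to_messages(event):
        iot.publish(topic=topic, qos=1, payload=payload)
```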
Once the Kinesis Data Stream data is pushed to Ignition via MQTT, we need to break it down into a structure that Ignition can understand and publish to the relevant tags.
When data is published to tags, it becomes available for anything in the Ignition architecture to access and make decisions with. This could be as simple as updating a dashboard with a calculated value from AWS, or as complex as machine learning algorithms in AWS performing predictive process control, adjusting a setpoint in real time based on the AWS calculation.
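On the Ignition side, an MQTT message handler only needs to map each payload onto a tag path and write it. Here is a sketch in the Jython style of Ignition gateway scripting; the tag provider and payload shape are assumptions, while system.tag.writeBlocking is Ignition's standard batch tag write and exists only inside a running gateway:

```python
import json

TAG_PROVIDER = "[default]"  # assumption: writing to the default tag provider

def payload_to_write(payload_bytes):
    """Parse an incoming MQTT payload into a (tag path, value) pair."""
    data = json.loads(payload_bytes)
    return (TAG_PROVIDER + data["tagPath"], data["value"])

def on_message(topic, payload):
    """MQTT message handler: write the decoded value to its tag.
    system.tag.writeBlocking is an Ignition built-in, unavailable
    outside the gateway scripting environment."""
    path, value = payload_to_write(payload)
    system.tag.writeBlocking([path], [value])  # noqa: F821
```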
Wrapping Up
Getting data into AWS is nearly a built-in feature at this point in Ignition’s lifecycle. Cirrus Link modules or tools like HiveMQ make it easy to write to AWS IoT Core or AWS Greengrass.
With the Cirrus Link AWS Injector Module, we can directly write data from Ignition into a Kinesis Data Stream. Then, using the tools already available in AWS, we can retrieve data from AWS back into Ignition with minimal coding.
At Corso Systems, we have done extensive work integrating AWS and Ignition. If you need help solving your most complex problems or have questions about how Corso Systems can help, please reach out and let us know!
Ready to Integrate AWS and Ignition?
We can help solve your most complex problems!
Skip the line by scheduling a short intro call with Cody Johnson in sales.
Or contact us with the details of your project.