ChatGPT is the New Ignition Intern!
In our first post exploring ChatGPT and Ignition for creating Perspective Views, we supplied ChatGPT with all of the information it needed to create a basic data entry form. We were using the GPT-3.5 engine. Since that post, we upgraded to using the GPT-4 engine with all of its additional power. Our initial idea was to see what the differences between the GPT-3.5 and GPT-4 engine were, and to update the previous post. We quickly realized that we needed to do more than just an update.
Click here to watch a follow-along video version of this post
Can ChatGPT-4 Learn to Create a Better Perspective View?
For starters, we asked ChatGPT to give us detailed instructions on how to create a Perspective view, add components to the view, and publish our project.
ChatGPT even was kind enough to create a script for us to write data to the database:
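The exact script from the session isn't reproduced here, but the pattern is simple: take the form values and run a parameterized insert. In an actual Perspective script you would use Ignition's `system.db.runPrepUpdate`; the sketch below uses Python's built-in sqlite3 as a stand-in so it runs anywhere, and the table and column names are just our first/last name example:

```python
import sqlite3

def save_user(conn, first_name, last_name):
    # In Ignition this would be something like:
    #   system.db.runPrepUpdate(
    #       "INSERT INTO users (first_name, last_name) VALUES (?, ?)",
    #       [first_name, last_name])
    conn.execute(
        "INSERT INTO users (first_name, last_name) VALUES (?, ?)",
        (first_name, last_name),
    )
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (first_name TEXT, last_name TEXT)")
save_user(conn, "Ada", "Lovelace")
print(conn.execute("SELECT first_name, last_name FROM users").fetchall())
# → [('Ada', 'Lovelace')]
```

Parameterized queries (the `?` placeholders) matter in both environments: they keep user-entered form text from being interpreted as SQL.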
This level of detail is on par with what you will find on the Corso Systems blog, or maybe even in a training class. Compared to industries that use more mainstream technology platforms like .NET, any major web development framework, or Python, our industry usually lacks this kind of information on sites like Stack Overflow.
Given that ChatGPT gave us detailed instructions, we can surmise it knows something about Ignition. Let’s see if it can tell us what information we need to provide so that it can generate a view for us.
That’s a pretty good start! Of course to generate all this information from scratch, you would need to know a lot about Perspective views to begin with. Since we are experts at Ignition Perspective, we can go as far down this rabbit hole as we want!
But wait, there’s more! ChatGPT decided to give us a basic JSON example:
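For readers who haven't exported a view before, a bare-bones Perspective view has this general shape. The fragment below is our own trimmed-down illustration, not ChatGPT's actual output; component types and property names should be checked against an export from your own Designer:

```json
{
  "custom": {},
  "params": {},
  "props": {},
  "root": {
    "type": "ia.container.coord",
    "meta": { "name": "root" },
    "children": [
      {
        "type": "ia.input.text-field",
        "meta": { "name": "FirstNameField" },
        "position": { "x": 20, "y": 20, "width": 200, "height": 40 },
        "props": { "placeholder": "First Name" }
      }
    ]
  }
}
```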
You might be tempted to import this directly into a Designer to see what happens. Of course, it won’t work as is, but we decided to play ball with ChatGPT and see where we ended up. By working with the examples from our previous post on this topic, we were able to compare the output from this session and a working view—and then give ChatGPT more instructions on how to fix the JSON:
Importing this JSON object will still result in an error; however, looking at what we missed the first time around, it is easily corrected by giving ChatGPT more information.
Honestly at this point in the process it felt like teaching someone who came to the table with some programming experience—but who was brand new to Ignition. I think we found our newest summer intern, ChatGPT!
After the last correction, this updated JSON imported without any errors! Of course at this point, we had spent 4-5x longer training ChatGPT than it would have taken to simply build the view ourselves. But, not all innovation and progress is linear. Now, we’re going to set a goal and see if we can meet it using ChatGPT.
The Next Goal: Perspective Symbols, Pipes, Tag Bindings
Our next goal was to see if ChatGPT could create a view with fields, Perspective symbols, and pipes. Then, if we can get it to add tag bindings to the graphics, and a working button on the view, we can use it to write data from the fields to the database.
Adding Perspective’s Pipe Tool to the Mix
OpenAI has stated that their training dataset was focused on data prior to 2021. The Ignition Perspective pipe tool was introduced in mid-2022, so we would not be surprised if ChatGPT was stumped on this topic. However, when asked about pipes, it answered with a relatively decent idea for using the path tool:
While using the path tool instead of the pipe tool was a creative idea, let’s give ChatGPT some hints and create a view with pipes—then we can pass it in with a description:
ChatGPT understands this new prompt pretty well, and generates a decent JSON response we can use. Everything is correct except that it has placed the pipes incorrectly. But getting pipe placement correct would have been very impressive, since pipes on a Perspective view are not components like buttons or text fields; they exist on the root container, essentially as a graphic in the background. Telling ChatGPT how to do it correctly gave us the right result.
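We won't claim the fragment below is the exact schema (the pipe format has evolved across Ignition versions, and the field names here are illustrative only; export a view with pipes from your own Designer to see the real structure). The key point it shows is that pipes live in the root container's props, not in its children array:

```json
{
  "root": {
    "type": "ia.container.coord",
    "meta": { "name": "root" },
    "props": {
      "pipes": [
        {
          "name": "pipe0",
          "origin": { "x": 100, "y": 200 },
          "connections": [ { "x": 300, "y": 200 } ]
        }
      ]
    },
    "children": []
  }
}
```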
Importing this view gave us the same pipes we sent over to ChatGPT, which means we are in business to add pipes!
Now, let’s put these “Lego bricks” together and see if we can get the first view with a text field, add another text field—and then get the pipes working. Because we used first name and last name in the previous post—and we already have the database set up—we will use those same fields here.
For reference, here is the JSON object we have at this point of the project:
While it is pretty impressive for ChatGPT to have generated this view—admittedly with no small amount of guidance—it isn’t the most interesting SCADA screen ever. So, let’s give ChatGPT some more information and see if we can come up with something a little more interesting.
Many SCADA screens use graphics to represent equipment like pumps, tanks, and valves, and Perspective is no stranger to this approach. We can use Perspective symbols from the Perspective Component Browser to build a more interesting view. Like Pipes, these symbol components were released after 2021, so we will need to give ChatGPT an example to work with.
As expected, ChatGPT returns JSON with the pump and tank added to the view, however the results still aren’t terribly interesting. Now, we will add a new pipe to this view, connecting the pump and the tank. We’ll also tell ChatGPT to remove the old pipes and generate a new view:
ChatGPT pulls this all into a new view for us:
Since the views are getting longer, we’ll need to start using our favorite prompt: “Continue”:
Finally, here is the new view imported into Perspective:
This view gives us what we expected, but we want to start working on the design a bit more. Since the pump and tank are lower than everything else in the view, we want to move them up to be just below the “Last Name” field. We will tell ChatGPT to move everything up and to the left.
ChatGPT has the right idea; however, we didn’t think to anchor everything based on the tank height, so in the new view it overlaps the text fields (see image below). This portion of the process is frustrating if we want to do relative movements with ChatGPT. There are two better options. One: exactly specify the location of components like graphics if we need them in a particular spot. Or two: get the design close to what we want using ChatGPT, then manually move the components on the final view in the Designer.
Another option would be to group the elements and move them all as a block, although we didn’t get into that level of complexity in this post.
Here is the view that ChatGPT generated, and which we imported in Perspective:
We tell ChatGPT to lower the graphics portion, and we’ll see how well that works in the new response:
It’s better, but still not perfect:
While this updated view is much closer to what we want, it’s still about 15-20 pixels too high, and the tank is overlapping the bottom text field. Let’s tell ChatGPT to lower it some more and see if that helps:
At this point, it is clear to us that relative layouts will need more work to be useful with ChatGPT. Based on the time involved in getting it perfect, this would be a good time to decide whether you want to position things absolutely or adjust them manually after ChatGPT adds the components to the view. But, since this post is a proof of concept, we will give ChatGPT one more hint and then leave things as-is for now. Also, notice we only told it to move the tank down; we didn’t tell it to move the pipe and the pump, so now the pipe connection point is off on the tank side. This would be solvable with more explicit instructions, and perhaps by grouping the graphics together instead of using three separate objects.
To avoid having to wait for ChatGPT to type out the ENTIRE JSON object, we will ask it to lower the tank a few more pixels the next time it generates the JSON, but not to do it right now. ChatGPT says it hears the instructions loud and clear.
Since we are skipping more work on the graphics, let’s tell ChatGPT to add a button to the screen. As in our previous post, we will use this button to write data back to the database:
Importing the new JSON into the view in the Ignition Designer adds the button, and based on the instructions prior to this prompt, it also lowers the tank down by 5 pixels. Unfortunately, it also moved the pipe. Oh well, we will just leave that alone for now, and adjust the positions manually as needed.
Now, we will work through a series of prompts to correctly configure the button, as the original prompt has the correct information, but in the wrong format for the script to work properly.
At this point, we also realized ChatGPT didn’t know about the “meta” node in the JSON tree, so we fixed that issue then adjusted the button’s onClick event to work properly. All of this was figured out by comparing the ChatGPT output to a working button with an onClick event from the previous post:
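For reference, a Perspective button with a script action looks roughly like this in exported JSON. The meta node, the event type/scope, and the config.script entry are the pieces that needed fixing in our session; the component name and SQL below are placeholders matching our first/last name example, so verify the exact structure against an export from your own Designer:

```json
{
  "type": "ia.input.button",
  "meta": { "name": "SubmitButton" },
  "position": { "x": 20, "y": 220, "width": 120, "height": 40 },
  "props": { "text": "Submit" },
  "events": {
    "dom": {
      "onClick": {
        "type": "script",
        "scope": "G",
        "config": {
          "script": "\tsystem.db.runPrepUpdate(\"INSERT INTO users (first_name, last_name) VALUES (?, ?)\", [self.view.custom.firstName, self.view.custom.lastName])"
        }
      }
    }
  }
}
```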
At this point, we were able to get the button click to write data to the database—except it was only saving empty text strings. Unfortunately this version of the exercise didn’t automatically bind the text fields to the custom properties.
Let’s train “the intern” to bidirectionally bind properties on a component to custom properties on the view. We’ll also test how smart ChatGPT is in the process by not giving it as much information as we did in the previous post:
ChatGPT does not understand how the bidirectional bindings work at this point, so we need to show it an example:
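The relevant piece of such an example is the component’s propConfig entry. A bidirectional property binding in exported view JSON looks roughly like this (the custom property name matches the firstName example from the previous post; check an export from your own Designer for the authoritative format):

```json
{
  "type": "ia.input.text-field",
  "meta": { "name": "FirstNameField" },
  "propConfig": {
    "props.text": {
      "binding": {
        "type": "property",
        "config": {
          "path": "view.custom.firstName",
          "bidirectional": true
        }
      }
    }
  }
}
```

With `bidirectional` set, typing in the text field writes back to the view’s custom property, which is what lets the button script read the entered values.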
Using this information, ChatGPT was able to correctly bind the text fields to the view’s custom properties, and we were able to write data to the database.
Since we were reaching the end of the intern’s first day, we decided to feed it an example of how to do a tag binding on the tank so that we could set the level on the graphic with a tag value. We kept the information pretty generic, so that ChatGPT had to do at least some thinking:
Our example JSON included the information we provided. Now let’s give it the information it needs to bind to an actual tag on the tank, and on the pump as well, since the property it needs to bind to is different than on the tank:
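A direct tag binding follows the same propConfig pattern as the property binding. The component type, bound property, and tag path below are placeholders rather than the exact values from our session; the tank and pump bind different properties, but the binding structure is the same:

```json
{
  "type": "ia.symbol.vessel",
  "meta": { "name": "Tank" },
  "propConfig": {
    "props.value": {
      "binding": {
        "type": "tag",
        "config": {
          "mode": "direct",
          "tagPath": "[default]Simulation/TankLevel"
        }
      }
    }
  }
}
```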
Now that we have met all of our goals, let’s have ChatGPT generate a final view for us:
Here is the JSON that ChatGPT generated for the final Perspective view. Immediately following the code is how it all looks in the Ignition Designer! We did manually move the pump down and to the right a bit so everything looked good for the screenshot.
Wrapping Up
Like we said in the previous post, ChatGPT can be a powerful tool. It isn’t going to replace software developers anytime soon, and—as we saw here with the graphic layout—you still need to put in some significant effort to get it to work exactly how you want it to.
ChatGPT would be a very powerful tool for a complex dashboard or tabular data screen where you needed to update a lot of tag bindings and for whatever reason didn’t use templates, or if you needed to build forms based on a database schema. It would also be a powerful tool for managing tag creation and validating scripts.
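As a quick illustration of that database-schema idea, a short script can stamp out the component JSON for a data-entry form from a list of column names—whether you have ChatGPT write it or write it yourself. This is our own sketch, not something from the ChatGPT session, and the component type and props are illustrative:

```python
import json

def form_fields(columns, x=20, y=20, width=200, height=40, gap=10):
    """Generate Perspective text-field JSON for each database column.

    Components are stacked vertically, one per column; adjust the
    component type and props to match your Ignition version.
    """
    children = []
    for i, col in enumerate(columns):
        children.append({
            "type": "ia.input.text-field",
            "meta": {"name": col + "Field"},
            "position": {"x": x, "y": y + i * (height + gap),
                         "width": width, "height": height},
            "props": {"placeholder": col.replace("_", " ").title()},
        })
    return children

print(json.dumps(form_fields(["first_name", "last_name"]), indent=2))
```

From here you would paste the generated children into a coordinate container, or extend the script to emit the propConfig bindings as well.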
Going deeper than we did last time, and looking at ChatGPT through the lens of “we’re training someone new to Ignition how to build a Perspective View” it is clear ChatGPT can learn, can write decent code, and can take on tasks suited for an intern or very newly minted Associate Engineer.
We’re excited to see where ChatGPT goes over the next few months. It is an exciting time to be in tech!