- Speaker #0
Hello, good morning, good afternoon everyone, and welcome to this new session of UAO Goes Live. I'm happy to welcome you today to part two of our AI services series. We started last month with Professor Vyatkin explaining the new advancements in AI together with IEC 61499; they were using the startup Freqsbridge to show us that. And this week we are not going so far: to LTU, Luleå University of Technology, with Midun, to go a bit further into AI services. So, Midun, please join me on the stage.
- Speaker #1
Hi, Greg.
- Speaker #0
Yeah,
- Speaker #1
I'm good. How are you?
- Speaker #0
Yeah, very good. I'm happy to welcome you on the stage of UAO Goes Live. So, can you please tell us a bit more about yourself?
- Speaker #1
Yeah, thank you so much. Yeah, my name is Midun Sevier. I'm a PhD student from Luleå University of Technology, Sweden, and I do research on developing AI agents for industrial control systems. Yeah,
- Speaker #0
AI is a very trendy topic, actually. That's why we made two parts about it, because it was quite interesting, and we have also seen that the number of viewers quite exploded when we mentioned that the next topic would be AI. So please show us a bit more of what you have — maybe a bit about the university first and so on.
- Speaker #1
Yeah, so Luleå University of Technology is located in the north of Sweden — as you can see here, it's near the Arctic Circle — and we do research on the IEC 61499 control system, so most of the advancements in AI and the related technological aspects are what we are covering here. So that's what we do. And yeah, that's all about the university.
- Speaker #0
May I interrupt you? Because I think we don't see your slides. Yes.
- Speaker #1
Yeah. OK. So I will start with that. Today I would like to discuss industrial AI agents, that is, AI agents for machines. This session will be divided into three parts. The first part will show a recorded demo video of the real system in our lab and how the AI agent can interact with that machine. In the second phase, we'll show a live demo with a simulation, how the AI agent can interact with the digital twin. And finally, I will introduce an agentic platform so that everybody can build their own AI agent for their machine.

So let's move on. As I said, Luleå University of Technology is located in the north of Sweden, and you can see it's near the Arctic Circle. And LTU is known for the Nautic Center for IEC 61499. This is our lab, the IceCube lab — it's a factory in a room. You can see the many systems that reside in our lab. One of them is called the processing station, and I want to describe this processing station because soon we are going to watch a recorded demo of how an AI agent can interact with this machine.

The processing station consists of three mechatronic components. The first component is the rotating table; it has six positions, and a complete cycle is achieved when the table rotates six times. The second component is the tester; it is located at the second position and is used to test whether the workpiece has a hole in it or not. The third component is the drill; the drilling component drills the workpiece, and it is kept adjacent to the tester. So here is the processing station that is in our lab. Let me show the video of how the agent interacts with the system.

I hope you can see it. This is the AI agent interface. Through this interface, we can communicate with the processing station in natural language. Initially, I give a command like: I have positioned a workpiece at position one, and I want to drill the workpiece and bring it back to position four. What the AI agent does is this: it knows that this is a processing station and what it can do, and it plans and describes the steps needed for this particular task. It explains each step. Here we can see it rotates to position two in order to place the workpiece on the tester component; it needs to detect the hole using the hole-detection skill. If a hole is detected by the tester, then we don't need to drill, so we can skip that and bring the workpiece back to the requested position; otherwise, we need to drill. That's what it plans. After the planning, it executes the skills one by one. First it rotates to the tester component and checks whether the workpiece has a hole in it or not, and then it moves it towards the drilling station. It identifies that there is no hole, so it can start drilling once the workpiece is under the drill. Now it moves to position three, which means the workpiece is under the drill, and you can see the drill move into the workpiece to drill it. Then it brings the workpiece back to the position we asked for. So that's what we see in the first scenario. Okay, so that's done.
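To make the plan the agent describes here a little more concrete, here is an illustrative sketch of the same conditional sequence written as plain skill calls. The function names and the position arithmetic are assumptions for illustration, not the agent's actual output:

```python
# Illustrative only: the conditional plan for "drill the workpiece and bring it
# back to position four", expressed as plain skill calls.
def drill_and_return(rotate, detect_hole, drill, target_position=4):
    rotate()                      # position 1 -> 2: workpiece moves under the tester
    has_hole = detect_hole()      # tester feedback is reported back to the agent
    if not has_hole:
        rotate()                  # position 2 -> 3: workpiece moves under the drill
        drill()
        rotations_left = target_position - 3
    else:
        rotations_left = target_position - 2   # hole already there: skip drilling
    for _ in range(rotations_left):
        rotate()                  # keep rotating clockwise until position four
```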
So now let's see the second one. Yeah, okay. In the second scenario, I have asked: I have placed the workpiece at position one, but I need to test the workpiece twice to check whether it has a hole in it or not — just a random request. Here we can see that it plans what to do: it needs to rotate to the station where the tester is located, and it needs to test the workpiece, but not only once — it needs to test it twice, because that's what we asked the AI agent for. So it does the check twice to identify whether there is a hole. After that, it identifies that there is no hole, so the task is completed, and it brings the workpiece back to the position we told the AI agent. So the AI agent communicates with this machine and brings the workpiece back to the desired position. Okay, so that's done.

So let's see how the AI agent interacts with this machine — what is the solution behind it? Here we are using the IEC 61499 control system. This standard helps us build modular, event-driven control systems, and it enables dynamic reconfiguration of processes. The processing station consists of mechatronic components, each mechatronic component has a PLC, and the distributed control system is designed using the IEC 61499 standard. On top of that, we are using a skill-based model design pattern, which means we need to understand what a skill is. Each skill represents a defined function — that can be drilling, welding, milling, or maybe detecting a hole; that can be a skill associated with the component. These skills are known to the AI model, the LLM we are using here. Whenever we ask for a new task, the AI identifies and sequences the necessary actions, and it automates the process planning for the different tasks we give it. The benefits of this approach are that we can customize our own agent to interact with the machine, and we can develop and test new production sequences using AI.

So let's dive a bit deeper into the technical details, like what the IEC 61499 skill function block is. First, what is a skill? The formal definition is that a skill represents a modular, reusable function or operation that a control system can perform. Each skill is typically tied to a specific piece of equipment or resource, such as a robot, sensor, or actuator, and is designed to execute a particular task — for example drilling, which is associated with a drilling station, can be considered a skill. Here we can see the skill function block, which is an IEC 61499 function block. It has an ISA-88 standard state machine inside it. It helps to develop a skill: we just drag and drop the skill function block into our control system application and connect the necessary things to the control function. The control function can be drilling, or detecting a hole, or something like that.

So this is our processing station, and we already have an IEC 61499 control system. What we need to do is drag and drop the skill function block and connect it to the test skill, the hole-detect skill, or the rotate skill, so that it can do the necessary function. The processing station has three components and three skills, and each component is associated with its own skill. It is also possible to have several skills associated with one component,
but here we are using only three. The table component is associated with the rotate skill, which means rotating the table one position clockwise — that's the definition. The hole-detect skill identifies whether the workpiece has a hole in it or not. And finally, the drill skill drills the workpiece. So this is how it's designed.

Now, how are we going to execute these skills? We have the control system, and the skills are associated with it, so if we need to execute a skill — to drill or something like that — how do we do it? Of course, there is an HMI to execute the skills with start and stop buttons; because we are using the skill function block, it has an HMI built in to connect with the respective skill. It is also possible to execute the skills via OPC UA: using an OPC UA client, we can use the OPC UA variables to execute the skills. HTTP is not enabled yet, but it is possible to integrate the HTTP protocol as well, so that we can execute these small actions on the machines remotely.

So how does it work? The whole picture is here: we have the processing station, the skills associated with it, and each skill can be executed via OPC UA. This information is known to the LLM, so the LLM knows which skills are available and how to execute them using OPC UA. If a user asks for something to be done with the workpiece, the LLM searches for the skills available in this system, identifies which skills are needed to execute the task, formulates a plan, and executes the skills one by one.

So here we can see that the IEC 61499 AI agent can be used in the manufacturing marketplace. This AI agent has the skills we associated with it; we don't need to know how it works internally, only how it communicates with the machine — this agent uses OPC UA. The main thing is that this AI agent can reason. It is a ReAct agent, which means it can reason and act based on that. Consider the situation we already discussed for the processing station: we need to drill the workpiece. The agent searches for the available skills and identifies that there is a skill to detect a hole, so the LLM identifies that this skill can be used to check whether the workpiece has a hole in it or not. If there is a hole, then there is no need to drill; if there is no hole, it can pass the workpiece to the drilling station to drill it. How is this achieved? It plans everything and executes skill by skill, but there is also a response we get back from the tester component — or any component — after each action. If the tester component tests whether the workpiece has a hole in it, it gives feedback to the AI agent, so the AI agent gets feedback for each action and, according to that, it can dynamically respond to the recipe we asked for.
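As a rough idea of what executing a single skill over OPC UA could look like, here is a minimal sketch using the asyncua client. The browse paths, variable names, and the completion value are assumptions, not the lab's actual information model:

```python
import asyncio
from asyncua import Client

async def execute_skill(endpoint: str, skill_name: str) -> str:
    """Trigger one skill via OPC UA and wait until it reports completion."""
    async with Client(endpoint) as client:
        objects = client.nodes.objects
        # Hypothetical browse paths for the skill's start and state variables.
        start = await objects.get_child(["2:ProcessingStation", f"2:{skill_name}", "2:Start"])
        state = await objects.get_child(["2:ProcessingStation", f"2:{skill_name}", "2:State"])
        await start.write_value(True)                 # start the skill
        while (await state.read_value()) != "DONE":   # assumed ISA-88-style completion value
            await asyncio.sleep(0.2)
        return f"{skill_name} finished"

# e.g. asyncio.run(execute_skill("opc.tcp://plc.local:4840", "DrillSkill"))
```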
But it can be risky to execute these things directly on the real system. Of course, with a digital twin or a simulation there is no harm, but on the real system we can also introduce a human-in-the-loop concept. That means before executing each action on the machine, the AI agent asks the user whether it should execute this action or not. If the user replies yes, then it executes the action; otherwise, it won't. We can use this in the initial phase to see how it reacts — a safe implementation can be done like that.

So now we will move on to a kind of digital twin, and a live demo which we will try to execute here. We are experimenting with the distributing station: the AI agent now interacts with this machine, which is called the distributing station. It consists of two components, a rotating arm and a magazine. The rotating arm has four skills: a skill to move the arm to the left side, a skill to move the arm to the right side, a skill to pick the workpiece, and a skill to place the workpiece. The magazine has two skills: one is the push skill, which is used to push the workpiece towards the rotating arm, and the second is the load skill — whenever the magazine is empty, we can use the load skill to load workpieces.

So let's see a demo. Here is the distributing station, with its skills available via the HMI. I have moved the rotating arm to the left side; now I can also move it to the right side. This is straightforward: I can interact with these skills using the HMI, but it's also possible to execute them via OPC UA. Now I will bring up the AI agent interface which is configured here. Yeah, so I will make it a bit smaller. Okay, so now you can see this AI agent — I think it's possible to see it.

Now, I will first ask: what are the skills available to you? That way I can also get an understanding of which skills are available in the system. It explains that these are all the skills available to the distributing station. Okay, so I just want to execute one simple action, like moving the arm to the left side. For that, it doesn't need to plan anything, because it's a single, straightforward action — the LLM doesn't need to think much. Once the arm has moved to the left side, we get a response from that action, and it says that it has moved to the left side. Likewise, we can also move it to the right side; that is also a simple, straightforward step.

And now we are composing these skills together to do a bigger task. The query I used is here: pick the workpiece from the magazine and drop the workpiece on the right side of the rotating arm. That means it needs to do several actions. So let's see whether it works or not. It plans what it needs to do: first, push the workpiece from the magazine; second, move the rotating arm to the left side; third, pick the workpiece; and then move the workpiece back to the other end and drop it there. That's what it needs to do. So let's see whether it starts to execute or not. Let's keep it running. Let me check.
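Tying together the reason-and-act loop, the per-action feedback, and the human-in-the-loop confirmation described above, a minimal sketch could look like the following. The llm and execute_skill callables are placeholders for whatever model and OPC UA layer is actually used:

```python
def propose_next_action(llm, task, skills, history):
    """Ask the LLM for the next skill to run, given the task and the feedback so far."""
    prompt = (
        f"Task: {task}\n"
        f"Available skills: {', '.join(skills)}\n"
        f"Feedback so far: {history}\n"
        "Reply with exactly one skill name, or DONE if the task is complete."
    )
    return llm(prompt).strip()

def run_task(llm, task, skills, execute_skill, require_confirmation=True):
    history = []
    while True:
        action = propose_next_action(llm, task, skills, history)
        if action == "DONE":
            return history
        if require_confirmation:                      # human in the loop
            answer = input(f"Execute '{action}' on the machine? [yes/no] ")
            if answer.strip().lower() != "yes":
                history.append((action, "skipped by operator"))
                continue
        feedback = execute_skill(action)              # e.g. "hole detected: False"
        history.append((action, feedback))            # the agent reacts to this feedback
```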
Yeah, this process is a bit delayed — it's getting stuck. Maybe I will just restart it and see whether it's possible to run it. All the variables are okay... maybe I need to restart the HMI once again. Sometimes it has issues doing everything together, but most of the time it works. That's how it's supposed to work: it tries the task one action at a time and continues according to the feedback. I think EcoStruxure has had an issue — something is building the project and it's getting stuck, so I would need to restart the EcoStruxure automation application and run it again. But it's okay — I think everybody got what I was trying to do. Okay, so let's move back to the presentation.

So here we are now in the last phase. I am going to introduce an agent framework so that everybody in the audience can create their own AI agent for their machine. That's what we are going to discuss in this phase. For a machine to communicate with the AI agent we are talking about, there should be a communication protocol — that can be OPC UA, MQTT, HTTP, anything — but some protocol needs to be there. Here we are using OPC UA because it's commonly used. And it's not only for machines; we can use the same approach for a digital twin or simulation. So: define the OPC UA sensor and actuator variables for your machine, configure the AI agent with the OPC UA variables and methods, test your AI agent with various workflows, and deploy your AI agent for the machine. If you deploy it, then every machine will have its own AI agent, and these AI agents can work together through an agent orchestrator so that we can build a multi-agent system.

So here is how this AI agent needs to be configured and how it works. I have already shown the distributing station. You don't need to think about anything complex — you just need to represent your OPC UA variables, meaning the sensors and actuators. This one has sensors, for example whether the rotating arm is on the right side or the left side, and to move it we have an actuator. That information should be available when the OPC UA server is running, so our AI agent just connects to it, integrates all your variables, and then you can interact with the machine using commands given directly in natural language.

So maybe I will show the platform so that everybody can use it. Here is the platform — this is an agent for machines. I have set up an OPC UA server in the cloud for testing purposes, so here we can just press the connect button. It tries to get all the OPC UA variables or methods which are available. Here we can see the temperature, the pressure, the motor speed, the motor state (to turn the motor on), and the valve position (to turn the valve on or not). The last two are actuators, and the temperature, pressure, and motor speed are sensor values. These are the available values which we can see here. What we need to do is just add these variables to the environment.
Once it's added, we can see the temperature variable has been added to the agent environment, and we can edit the description — the description shown here is the one already set in the OPC UA server when it was designed, but we can type or add something — and after that we just integrate it, so it is integrated into the system. Here we can see the system description, which tells the agent what it should do: this is an industrial automation system controlled by an OPC UA server, the system consists of sensors to monitor, and so on — whatever the agent needs to do, we just give it as the system prompt. This is up to us; we can remove or add things according to our needs. Then, whichever variables you require, just integrate them. We have integrated this one, but it's up to us which functions or sensor values need to be integrated. And here, yeah, we can also integrate the valve position — this is the final one, so let me integrate this one as well. Now everything is integrated, and we know that these are all the OPC UA variables of the system.

What we need to do next is test the agent. Now we have the configuration and everything; we can download the configuration. Deploy agent is not available yet, but it will be coming soon. So we can ask, for example, what is the system status. For the system status it gets all the sensor values; in this case, our motor is off. So let me try: turn on the motor — a simple command. It tries to turn on the motor, and the motor has turned on successfully. We can save this kind of task if we want and reuse this workflow in the future. And we can verify whether the motor has turned on by refreshing the variables: the motor state will turn from zero to one — yeah, it starts to run, and the motor speed is now around 205.
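Under the hood, the "turn on the motor" step in this demo amounts to reading and writing OPC UA variables. A hedged sketch with asyncua — the endpoint, browse paths, and data type are placeholders:

```python
import asyncio
from asyncua import Client, ua

ENDPOINT = "opc.tcp://demo-server.example.com:4840"   # hypothetical cloud server

async def main():
    async with Client(ENDPOINT) as client:
        objects = client.nodes.objects
        # Placeholder browse paths for the variables shown in the demo.
        temperature = await objects.get_child(["2:Plant", "2:Temperature"])
        motor_state = await objects.get_child(["2:Plant", "2:MotorState"])
        print("temperature:", await temperature.read_value())
        # "Turn on the motor": flip the actuator variable from 0 to 1.
        await motor_state.write_value(ua.Variant(1, ua.VariantType.Int32))
        print("motor state:", await motor_state.read_value())

asyncio.run(main())
```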
Likewise, we can turn it off, turn on the valve, turn off the motor — we can play around, or we can connect different tasks together to create workflows, and those workflows can be saved and used at a later stage. So now we can see it's turned off — let me check — yeah, it's turned off, and hopefully the motor speed will go down soon.

Most of you will be running your OPC UA server locally. This website doesn't allow you to connect to your localhost directly, but it is possible to expose your TCP port via tunneling, using ngrok or something like that. Otherwise, if your OPC UA server is already in the cloud, it's completely fine: you can just give the connection string here and connect, it shows the variables it needs to connect to, and you can configure it, create your agent, and play with the agent to do some tasks. It does need to be a public IP address so that the platform can access it; otherwise, we need to tunnel it using ngrok. So that's how it works.

Let's see the steps once again: use your connection string and connect; the available variables and methods will be shown; add them to the environment; develop your own system prompt; and integrate it. Finally, once the testing is okay — we are soon introducing more complex testing and evaluation of these agents — it can be deployed to the cloud or somewhere else, so that you can interact with your machine using the agentic interface.

So this is the platform. Please customize your own agents using it, and please scan this QR code, explore how it works, and give us feedback on what to improve. So that's it from my end. Thank you so much.
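On the connection step mentioned above, the platform just needs a reachable OPC UA endpoint. A few hedged examples of what the connection string could look like — all hosts and ports here are placeholders:

```python
# A local server is not reachable from the web UI directly.
LOCAL_ENDPOINT = "opc.tcp://localhost:4840/server/"
# A local server exposed through an ngrok TCP tunnel (hypothetical address).
TUNNELED_ENDPOINT = "opc.tcp://0.tcp.ngrok.io:12345"
# A server already running on a public IP or in the cloud.
CLOUD_ENDPOINT = "opc.tcp://my-public-host.example.com:4840"
```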
- Speaker #0
Thank you, Midun. Thank you. Let's go back to the stage. Yeah. Well, I haven't seen any questions so far, so let me ask you one: you said that your AI agent is based on an LLM, and this is where you do your prompt instructions to get your skills programmed, let's say. Which LLM are you using behind the scenes, to pop the hood a bit? Did you make some comparison between different LLMs to see what works best?
- Speaker #1
Yeah, for the time being we have tested a few of them. We are using GPT-4o right now, and GPT-4o mini — that's fine. Anthropic Claude doesn't seem to be up to the mark compared to GPT for reasoning about these tasks and doing the things. We also tried local LLMs, but they are not up to the mark either, and they take a lot of time to execute, so that's another reason. The main thing is that, for the time being, not all LLMs are able to do it. But hopefully it can be used with local LLMs as well — that's what we are aiming for, so everybody can build their own agents locally and interact with them.
- Speaker #0
Let's say that, okay, you go with one of the LLMs. How do you give your feedback to the AI tool to say, okay, this is the way I want my skills to be programmed? How is that done? Do you feed back directly with the function block, or do you write a description of the block — the way you want it to actually be programmed — and send that back to the tool? How is that working?
- Speaker #1
For the initial testing phase, the feedback is not integrated yet, but the feedback will be given like a reward to the LLM: if it's doing the right thing, then we give positive feedback, and if it's not the right thing we asked for, then we give penalties to the LLM. In that structured way, we are trying to build a kind of system where the good steps we have done are stored as a knowledge base, so the LLM will have an understanding of which tasks give the most rewards. Based on that kind of feedback system, we are testing and also evaluating what steps it can take for the safe operation of these machines on the shop floor. That part definitely needs to be explored more to make this agent safe in industrial control systems.
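The reward and knowledge-base mechanism is still being worked out, so purely as an illustration of one way it might be structured — all names here are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class PlanRecord:
    task: str
    steps: list          # the skill sequence that was executed
    reward: float        # positive for a correct run, negative as a penalty

@dataclass
class KnowledgeBase:
    records: list = field(default_factory=list)

    def add(self, record: PlanRecord) -> None:
        self.records.append(record)

    def best_examples(self, n: int = 3) -> list:
        """Return the highest-reward plans, e.g. to include as examples in the agent's prompt."""
        return sorted(self.records, key=lambda r: r.reward, reverse=True)[:n]
```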
- Speaker #0
Yes — let me put back the QR code that you gave us, just to give people enough time to scan it, to try using the AI agents, and to give their feedback to you. I think that's quite important to make your research complete. What are we missing here? What is the next question that you'll try to tackle with these AI agents?
- Speaker #1
The next thing is that, first, we need to deploy this after the testing. Definitely we need to deploy the agent for the machine for everybody. For me, it's okay to develop it by coding and creating the agent, but it's difficult for industrial people to code it, make it right, get it working, and deploy it — that's a bit difficult. So I need to make sure that they can test it, and once they are okay, they can deploy it in the cloud or on their local premises. Once we have that, we will have several AI agents, because most of them will have different machines — maybe three or four agents will be there. Then the next task will be how these agents can communicate and work together on the shop floor. That means there will be an agent orchestrator that gives the task to each agent, and there will be handoffs between the agents: one will be doing its part, and after that it gives the task to the other one, all managed by the top-level agent. This is a feature in GPT that they released two days ago, so I think it's now much easier for us to integrate it into our environment as well.
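As a rough illustration of the orchestrator-with-handoffs idea — not the released GPT feature itself, and with purely hypothetical agent names and routing:

```python
class MachineAgent:
    def __init__(self, name, skills):
        self.name, self.skills = name, skills

    def can_handle(self, task: str) -> bool:
        return any(skill in task for skill in self.skills)

    def run(self, task: str) -> str:
        return f"{self.name} executed: {task}"

class Orchestrator:
    """Top-level agent that splits a job into tasks and hands each one off."""
    def __init__(self, agents):
        self.agents = agents

    def dispatch(self, tasks):
        results = []
        for task in tasks:
            agent = next(a for a in self.agents if a.can_handle(task))
            results.append(agent.run(task))   # hand-off to the agent that owns the skill
        return results

agents = [
    MachineAgent("distributing-station", ["push", "pick", "place"]),
    MachineAgent("processing-station", ["rotate", "test", "drill"]),
]
print(Orchestrator(agents).dispatch(["push the workpiece", "drill the workpiece"]))
```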
- Speaker #0
Okay, great. Thank you. So, I still don't see many questions from the crowd, so I would suggest that if you have any question, just drop it in the comments of this video and we'll be happy to answer it later. Midun, thank you for making it this afternoon. Again, here is how to contact you — your name and your email address — so please feel free to contact Midun if you have any other questions or need another explanation. The video stays here and you can watch it again as much as you want. We gave you the QR code so you can use the AI agent as soon as possible — which is now. Oh, I see that Jochen had a question. So, one question: how could the AI agent resolve an inconsistency in the response from the LLM? For example, the skill expected by the AI agent is "motor on", but the LLM response was "motor start", or some cases in this way.
- Speaker #1
Definitely — these LLM agents have been trained on a broad knowledge of language, so they have the capability to differentiate between "motor on" and "motor start". But if we need to specify what to do for "motor on" versus "motor start" and make them different, then we need to configure on the AI agent what to do for each particular thing. It's up to us to decide how deep we need to go; otherwise it might get confused. That's why we need to test it — it's up to us to make it more reliable by giving more information to the agent about how it needs to perform.
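One hedged way to handle this kind of mismatch is to register each skill with a canonical name plus accepted synonyms, and to normalize or reject the LLM's proposed action before anything is executed — the names below are only illustrative:

```python
SKILL_ALIASES = {
    "motor_on": {"motor on", "motor start", "start motor", "turn on the motor"},
    "motor_off": {"motor off", "motor stop", "stop motor", "turn off the motor"},
}

def normalize_action(llm_response: str) -> str:
    """Map the LLM's free-text action onto a canonical skill name, or fail loudly."""
    text = llm_response.strip().lower()
    for canonical, aliases in SKILL_ALIASES.items():
        if text == canonical or text in aliases:
            return canonical
    raise ValueError(f"Unknown skill '{llm_response}': re-prompt the LLM or ask the operator")
```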
- Speaker #0
Great answer. Jochen, thank you for your question. I think that was the last one to show, so, Midun, thank you again. I wish all of the viewers a very, very good day. Talk to you soon, and see you. Bye-bye.
- Speaker #1
Yeah, bye-bye.