- Speaker #0
Good morning, good afternoon everyone, and welcome to this new edition of UAO Goes Live. It's the second time this week, and this time we are going to Austria, where we will discover a bit more about tools and tool concepts for IEC 61499. I'm happy today to welcome on stage with me Professor Alois Zoitl. Hi, Alois. Hello,
- Speaker #1
everybody. Hi.
- Speaker #0
Yeah, it's a pleasure to have you with us, because you are certainly one of the icons of IEC 61499. Having you with us on stage is always a great pleasure. And, Alois, today you will tell us a bit more about tools and tool concepts, from what I understand. Can you tell us more about that?
- Speaker #1
Yeah, I'll try to give an overview of the research that we did over the last five to six years on managing big and very big IEC 61499 applications, and what infrastructure and tool concepts can help us make the engineering of IEC 61499 solutions more efficient.
- Speaker #0
Yes, quite interesting, because those very large applications are exactly the use cases that we are encountering more and more. So let's see what you have prepared and what you want to show us today, and I give you the floor.
- Speaker #1
Thank you. So as I already said, I will talk a little bit about automation, and I will start with a downer. I regularly say nowadays: something is rotten in the state of automation. The problem that we see is that we have a software problem. I don't know if you heard about this news story; it was in many different news outlets, even in newspapers here in Austria. There was a school in the US, in Massachusetts, that had a glitch in its building automation system. It was a mixture of some sensor faults that were not well handled by the automation system. In the end it was a software problem, and the result was that they could not turn off any lights, because the building automation system went into a fail-safe mode, and the fail-safe mode was: turn on all the lights, all the time. So they tried to trace down the company; the original company had gone bankrupt and been sold a few times. In the end they did trace it down, and the answer they got for their software problem was: yes, replace all the control hardware, redo the whole automation. And that cannot be the answer. If we have software problems, we need to get our software under control. We need better control software, better control software quality. Here at UAO, it's not surprising that I would say: yes, IEC 61499 is the solution. I call IEC 61499, and this is my personal name for it, a domain-specific modeling language for distributed industrial process measurement and control systems. The reason I call it a domain-specific modeling language is that it lifts software development for industrial process measurement and control systems a level higher than the existing solutions we have.
It introduces new elements, it makes development more efficient, and, what is getting more and more important with all the adaptivity that we need, it has the distribution aspect as a core entity. In this setting I will not talk about what IEC 61499 is, but I would like to tell you that we now even have measures. Colleagues of mine from Vienna University of Technology, Peter Selman, Martin Melik-Merkumians and Georg Schitter, did this very nice work where they used code metrics developed to compare different solutions and different languages. They implemented classic sequential processes and continuous processes in IEC 61499 and IEC 61131-3 and compared their complexity. What you can see here is that the numbers for IEC 61499 are most of the time much better than the IEC 61131-3 numbers, and this is a clear indication that with IEC 61499 we have different concepts that are better suited for automation and that help us reduce the development effort. But please don't get me wrong, this should not be IEC 61131-3 bashing, because IEC 61499 heavily builds on IEC 61131-3. It's just that we would like to get more efficient. As I said in the introduction with Greg, I want to talk today about what we did in order to leverage the potential of IEC 61499. My hypothesis for quite some time now is that IEC 61499, from the language perspective, is very powerful. But as it is nicely said, a fool with a tool is still a fool. So we need to learn how to use the tool, how to make tools for IEC 61499, to get the best out of it in the end. What I brought here today is an overview of some of the things that we did. It's not everything, but I took the highlights. I would like to start with control software architectures. This is something that's very important: we worked on how to structure IEC 61499 applications. One example, from around 2017 or 2018, is a collaboration together with VDMA, where many different companies built this cell here.
I'm just fighting with my mouse. Where is it? Here it is. So this robot cell was assembling fidget spinners, which were famous at that time, if you still remember. What we came up with was a very classic automation structure where the software follows the mechatronic structure of the machine, but with the difference that on each level we had a software component representing the mechanical component. This works especially well down here for the individual components like grippers, axes, and different actuators such as vibration feeders and sorting cameras. In that design we offered the functionality via a vendor-neutral OPC UA interface to higher coordination layers, and there we had coordination activities that would coordinate the lower-level entities. The advantage of that design pattern, and we could show that, is that if you need to change lower-level components, the other components are not affected. However, what we noticed in a subsequent study is that if you need to change any of the functionality up here, it can get very complicated very quickly. So we came up with an idea to move from skill orchestration to skill choreographies. For the components we still have the coordination function blocks, we call them hardware function blocks, here for example for a cylinder, and we then have so-called skill function blocks that we can use to choreograph the functionality. To give an example of how that could look: here we have a capping station that is inserting caps or lids on top of small bottles. A classic sequence, if you model it in an activity diagram, looks like this; the colors map to the different functions here on the left. By applying the design pattern that Lisa Sonnleithner, the main author of the pattern, developed, and rotating that by 90 degrees, you can very nicely represent the individual steps by function blocks
and can utilize the power of IEC 61499 event connections to visually represent the activities of that machine. Changes, adaptations, and variations of that sequence come out very nicely down here. While we were doing these design patterns, we came up with a research question: how do we assess design patterns? How do we know that this design pattern is better than the previous one? How do we assess certain kinds of qualities? From that, a totally new research topic emerged here at the LIT Cyber-Physical Systems Lab: quality assessment of control software in general. We focus a bit on IEC 61499, but since IEC 61499 contains so much of IEC 61131-3, it applies to IEC 61131-3 as well. We started with rather straightforward metrics, like assessing the structural complexity. One approach represents certain aspects of an IEC 61499 basic function block in spider charts, which you can then use to compare function blocks to each other and assess certain aspects. The problem with these quality metrics is that they depend a lot on the application domain, so there is no single source of truth: in one domain certain function blocks may be better than in other domains. Here we are also looking for partners to work out what would be interesting approaches. One rather problematic metric is lines of code, which is very often used because it's a very simple measure. If you have a function block with 1000 lines of Structured Text code in its algorithms, you already get a feeling that this is maybe not right. But lines of code in itself is not an assessment of how complex the code is. So we investigated which kinds of Structured Text elements are harder to understand than others, and we did a survey where we showed, like you see here, an example of
two ways of implementing function blocks, and asked users which variants were easier to understand. Could they understand the problem? Did they find issues with what was written in the code? This resulted in something closer to what is sometimes called cognitive complexity, which tells how complicated it is for a software or control engineer to understand the code that was written. As you know, most code is written once and read often, so understandability of code is very important. This brought us to the next level, which is bad smells in IEC 61499. Bad smell is a term that was originally coined to indicate code that is not tidy and could lead to issues later. It is not wrong code; we are not saying this code is broken. It is perfectly correct code, and your machine runs perfectly fine. But bad smells are code that may lead to problems later, especially with respect to maintainability, long-term maintenance, or reusability. You see here a word cloud of the different bad smells that have already been identified in industry. The big ones are the Swiss Army Knife, spaghetti code, shotgun surgery, which is also a very nice one, and cut-and-paste programming, which unfortunately is still a lot around. You see here certain examples our first work identified. One piece of work that I personally like a lot is the topic of feature envy. Feature envy is a kind of measure for badly structured code. You see in my figure two different modules that interact with each other. On the left side we have four interactions; on the right side, when you restructure the same code into different modules, you only have one interaction between the modules.
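To make the cognitive-complexity idea mentioned above concrete, here is a toy scorer in the spirit of the metric: each branching construct costs one point, plus one point per level of nesting, so that nested code scores higher than the same logic written flat. This is an illustrative sketch over a simplified Structured-Text-like token stream, not the instrument used in the study.

```python
# Toy cognitive-complexity-style scorer: branching constructs cost 1,
# plus 1 for every level of nesting they sit in. Illustrative only.

def cognitive_complexity(lines):
    """Score a simplified Structured-Text-like snippet.

    Keywords opening a nested scope: IF, FOR, WHILE, CASE.
    END_* closes the innermost scope. ELSIF/ELSE cost 1 without
    adding nesting, mirroring the published cognitive-complexity rules.
    """
    score, nesting = 0, 0
    for raw in lines:
        word = raw.strip().split()[0].upper() if raw.strip() else ""
        if word in ("IF", "FOR", "WHILE", "CASE"):
            score += 1 + nesting   # construct + nesting penalty
            nesting += 1
        elif word in ("ELSIF", "ELSE"):
            score += 1             # flat cost, no extra nesting
        elif word.startswith("END_"):
            nesting = max(0, nesting - 1)
    return score

flat = ["IF a THEN", "x := 1;", "END_IF;",
        "IF b THEN", "x := 2;", "END_IF;"]
nested = ["IF a THEN", "IF b THEN", "x := 2;", "END_IF;", "END_IF;"]

print(cognitive_complexity(flat))    # two flat IFs -> 1 + 1 = 2
print(cognitive_complexity(nested))  # nested IF costs more -> 1 + 2 = 3
```

Both snippets have the same number of decisions, but the nested variant scores higher, which matches the intuition from the survey that nesting is what makes code hard to read.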
This is somewhat controversial, because structuring code and grouping elements together is very often driven not only by numbers but also by what humans understand. Nonetheless, Lisa Sonnleithner again did that work, found two potential measures, and mapped them to IEC 61499. One is the feature envy factor, which shows how well a certain function block or module is structured and how much it is connected to the outside. The other is the so-called distance metric. The distance metric is even nicer because it gives a connection-wise distance between two blocks, and it even allows giving hints: you could tell your users that if you moved this block into another module, your overall structure would have fewer connections and might therefore be more maintainable. We took that distance metric, and in the bachelor thesis of Philip Bauer we took this concept further. You see here a sample of an IEC 61499 application structured in certain hierarchies. What he did is flatten everything out, so all the hierarchies are gone, and then, based on the feature envy factor and the distance metric, an optimization algorithm came up with a potential solution that minimizes the number of interconnections between the modules. As you can see on the right side, it looks much tidier than on the left side. The big drawback of that kind of optimization is that it is done without the user. It follows only the logical connections and loses all the information about what was grouped together, maybe because of functionality, maybe because of the mechatronic structure. So, from our experience talking with potential users, you have to take these results with a grain of salt and think about how to utilize them.
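The hinting idea described above can be sketched in a few lines. The names and the voting rule here are assumptions for illustration, not the published feature envy factor or distance metric: a block that has more connections into another module than into its own is flagged as a candidate to move.

```python
# Sketch of a connection-based "move this block" hint, inspired by the
# feature-envy idea from the talk. The rule below is invented for the
# example: suggest a move when a block talks to a foreign module more
# than to its own.

from collections import Counter

def move_hints(module_of, connections):
    """module_of: block -> module; connections: (block_a, block_b) pairs.
    Returns (block, target_module) hints where moving would cut
    cross-module connections."""
    hints = []
    for block, home in module_of.items():
        per_module = Counter()
        for a, b in connections:
            if a == block:
                per_module[module_of[b]] += 1
            elif b == block:
                per_module[module_of[a]] += 1
        if not per_module:
            continue
        target, votes = per_module.most_common(1)[0]
        if target != home and votes > per_module.get(home, 0):
            hints.append((block, target))
    return hints

module_of = {"Conv": "M1", "Grip": "M1", "Cam": "M2", "Sort": "M2"}
connections = [("Grip", "Cam"), ("Grip", "Sort"), ("Grip", "Conv"),
               ("Cam", "Sort")]
print(move_hints(module_of, connections))  # -> [('Grip', 'M2')]
```

`Grip` has two connections into `M2` and only one inside `M1`, so the sketch suggests moving it, exactly the kind of hint the talk describes; and, as the talk warns, the suggestion ignores any mechatronic reason `Grip` was placed in `M1`.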
Nonetheless, we got very positive feedback from the first beta users investigating it, telling us that maybe a little restructuring could help them get more maintainable code, because fewer connections make it easier to exchange and replace things. Another problem generally perceived as a bad smell is duplicated code. We had worked on identifying clones, meaning one-to-one copies. Especially for bigger applications this is mostly a computational problem, because we have lots of function blocks and connections and have to do lots of comparison operations. We came up with a very efficient way of finding clones in bigger applications. What I brought here is a step further still. What you see here is a graph representation of a rather big IEC 61499 system; you see certain isolated parts. I don't have the exact numbers, but we are talking about a few thousand function blocks and a few hundred thousand connections. What you don't see is that we have similar structures in there: all the circles with the same color are duplicated code, and at these sizes of function block networks you don't see that easily. Mr. Unterteichler did this as part of our work in the Christian Doppler Lab that I mentioned here. We turned our IEC 61499 applications into graphs, computed similarity embeddings with graph neural networks and feature embeddings, and then did projection and clustering, so what nowadays are called classical AI algorithms. The output of those algorithms are these nice graphs showing similarity. Here you see similarity groups and some thresholds. I personally like this kind of figure a little more, although the other one looks fancier. All the circles that you see here are different modules of our graph, and if two circles are very near together, it means that they are similar.
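As a drastically simplified stand-in for the pipeline described above, one can hand-craft a tiny feature vector per module (counts of block types and connections) in place of the graph-neural-network embeddings, and then group modules whose vectors are cosine-similar above a threshold. All numbers and names here are invented for illustration.

```python
# Simplified stand-in for the clone/similarity pipeline from the talk:
# per-module feature vectors + cosine similarity + greedy threshold
# grouping, instead of GNN embeddings, projection, and clustering.

import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def similarity_groups(features, threshold=0.99):
    """Greedy grouping: a module joins the first group whose
    representative vector is cosine-similar above the threshold."""
    groups = []  # list of (representative_vector, [member names])
    for name, vec in features.items():
        for rep, members in groups:
            if cosine(rep, vec) >= threshold:
                members.append(name)
                break
        else:
            groups.append((vec, [name]))
    return [members for _, members in groups]

# features: (#basic FBs, #composite FBs, #event conns, #data conns)
features = {
    "CapStation1": (12, 2, 30, 40),
    "CapStation2": (12, 2, 30, 40),   # structural clone of CapStation1
    "Conveyor":    (5, 1, 8, 6),
    "CapStation3": (13, 2, 31, 42),   # near-duplicate, slight variation
}
print(similarity_groups(features, threshold=0.999))
```

The three capping stations land in one group, including the near-duplicate, while the conveyor stays alone, which is the kind of output a maintainer would inspect to decide whether the variants can be unified into one type.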
So if you take one circle from here and one from down here, they would be something completely different. Something like this cluster here, or this one, or the cluster up here, is something that I as a maintainer of that application would look into, to see what the similarities and differences are: can I group things into new types so that I get more reusability? When the people showed me the first results, they were not convinced, and they showed me some of the similarity findings. I found this quite nice because, as a control engineer, I could immediately see what the similarities are and what not. Having tools like that can really help to find duplicated code and similar code, which can later be refactored and reworked to help us get more maintainable code. We also work on language extensions. What Greg did not say: since 2008 I'm also in the IEC 61499 standardization, and since 2015 I'm the convener of the IEC 61499 working group, and as part of that work I'm also interested in the limits and problems of IEC 61499. So if you find these kinds of things, I would be happy if you dropped me an email or sent me a message so that I can collect them. We work on language extensions and on investigating certain boundaries of the language. One thing that is investigated by different researchers is whether we can provide more information for tools and users on how to use function blocks. One approach would be so-called interface contracts, where you specify in some way the intended use of your function block. One way, as shown here, could be the service sequences of IEC 61499, and we investigated whether the service sequences can automatically be translated into formal methods to check if an interface is correctly implemented and correctly used. This worked quite nicely for a very limited use case, and we are currently looking to take it to the next stage.
But we could also show that service sequences in their current form have limitations and definitely need more. That led us to more research, done by Bianca Wiesmayr. She investigated in her PhD thesis what we can do to better help users of IEC 61499 either specify the behavior or assess the behavior of IEC 61499 elements. What you see here is what IEC 61499 currently offers: we have execution environments, function blocks, subapplications, composite function blocks, and applications. We can specify the component behavior in some way with service sequences, but to be honest, everything else that is added to that picture is somehow missing. We don't have a very good way of specifying the behavior of function block networks or execution environments, which would really be needed. As a first step to get further here, we started with interpreting function block models, which is a new approach to handling IEC 61499 in tools. What we developed is a function block runtime, although that's not a very good name for it, because we are directly interpreting the application that you develop in your tool, without the need for any additional software. So you can assess your function blocks, your ECCs, and your function block networks inside the tool, with all the freedom and possibilities that you have on a powerful PC or laptop. And you don't need to do all the classical steps: develop my function blocks, develop my application, compile and deploy to the runtime, start the runtime, watch what's going on in the runtime, and then go back and improve. You can have a very short round trip. What we get from that is, first of all, that we can use it for comparing implementations; we already use it in work together with Luleå University on comparing the execution behavior of different runtime environments. We can also do platform-independent testing.
On that level we can already develop unit tests and integration tests for function blocks and function block networks, and get code coverage: how much of your IEC 61499 code is covered by the tests. We can even execute partial models. Even if only half of your function block is ready, we can already run tests and evaluate it without you having to fully implement it. This greatly improves development round-trip times and the understanding of your code, and we got very positive feedback on that. We can test with nearly no limitations, except from the tool infrastructure, so we can do much more with it. I already talked about round-trip times, and language engineering is something for that. Coming from the more generic things to tools: I already said we need great tools, and if you look out there, not only is our target audience new, as we heard in the beginning, the workforce is changing and our users are changing. To leverage and assess the power of IEC 61499, we need completely new tools: tools that are more powerful, that know a lot about what you want to do. IEC 61499 knows about a lot of things, and tools that utilize that power can help. We are not only developing theoretical concepts, we are also developing tools and tool infrastructures. One thing that we are working on, and where we show many of the things that I have presented today, is available as part of the Eclipse 4diac project. You find the link here; if you google it, you can download it. There we show certain concepts of IEC 61499; we are also investigating additional new features as well as advanced tool concepts like the assessment of code quality and other things that I would like to show you now. One thing we noticed, especially for very large projects with lots of elements, is that you are very often limited in the level of complexity that you can see.
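The interpreted-function-block idea above can be illustrated with a miniature sketch: the ECC of a basic function block is held as plain data and stepped directly, with no code generation or deployment, so a unit test can drive it and report state coverage. Every name here is made up for the example; this is not the 4diac implementation.

```python
# Minimal sketch of an interpreted basic function block: the ECC is a
# plain data structure we step directly, so tests run without any
# compile/deploy round trip. Names are invented for illustration.

class BasicFB:
    def __init__(self, ecc, initial):
        self.ecc = ecc            # {state: [(event, guard, next, action)]}
        self.state = initial
        self.visited = {initial}  # for coverage reporting
        self.vars = {}

    def fire(self, event):
        """Take the first enabled transition for this input event."""
        for ev, guard, nxt, action in self.ecc.get(self.state, []):
            if ev == event and guard(self.vars):
                if action:
                    action(self.vars)
                self.state = nxt
                self.visited.add(nxt)
                return nxt
        return self.state  # no transition enabled

    def state_coverage(self):
        return len(self.visited) / len(self.ecc)

# A toy cylinder controller: START extends, REACHED steps onward.
ecc = {
    "IDLE":    [("START",   lambda v: True, "EXTEND",  None)],
    "EXTEND":  [("REACHED", lambda v: True, "RETRACT", None)],
    "RETRACT": [("REACHED", lambda v: True, "IDLE",    None)],
    "ERROR":   [],   # never reached by this test -> shows in coverage
}
fb = BasicFB(ecc, "IDLE")
assert fb.fire("START") == "EXTEND"
assert fb.fire("REACHED") == "RETRACT"
print(fb.state_coverage())  # 3 of 4 states visited -> 0.75
```

The coverage report immediately shows that the `ERROR` state is untested, which is the kind of quick feedback loop the talk describes for testing inside the tool.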
Either you get very big sheets with lots of function blocks, or you structure them into applications and composite function blocks. In German we have this saying about trees and woods: you don't see the wood for all the trees. You are totally lost in the depth. Lisa Sonnleithner, together with Philip Bauer, developed a concept where we try to show these big applications, with all the connections and what's going on, in a dedicated visualization. What you see here is the output for a sample application that we developed to test the tool: 8000 function blocks and 45,000 connections between them. This is not a view for developing; it is a view to get an overview, to assess, to visualize. We also have concepts to enhance the view with metrics like code quality issues, or to visualize the direction of connections. You see here that lots of things come out of this point, going in different directions, so it would be interesting to know whether connections are bidirectional or unidirectional; we already have additional concepts for that. But developing tools has a big problem: tools have users. There was a very nice keynote two years ago by Brian Sage with the title "What's Wrong with Users?", and in the end it's very important to note: users are never wrong. The users just didn't understand what you thought, and then it's maybe a problem of how you developed the tools and what tool concepts you used. Therefore we have already spent quite some effort on how we can help tool developers build better tools by assessing their usefulness with developers and users.
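Behind an overview like the one described above sits a simple aggregation: collapse thousands of block-to-block connections into module-level edges with direction counts, so the viewer can distinguish uni- from bidirectional links. The data shapes below are assumptions for illustration, not the tool's actual model.

```python
# Sketch of the aggregation behind a module-level overview: directed
# block-to-block connections are collapsed into counted module-to-module
# edges, with intra-module links dropped. Data shapes are assumed.

from collections import Counter

def module_overview(module_of, connections):
    """connections are directed (source_block, target_block) pairs.
    Returns {(src_module, dst_module): count}; self-loops dropped."""
    edges = Counter()
    for src, dst in connections:
        ms, md = module_of[src], module_of[dst]
        if ms != md:
            edges[(ms, md)] += 1
    return dict(edges)

module_of = {"A1": "Feeder", "A2": "Feeder", "B1": "Robot", "C1": "Camera"}
connections = [("A1", "B1"), ("A2", "B1"), ("B1", "C1"), ("C1", "B1"),
               ("A1", "A2")]  # intra-module link, hidden in the overview
ov = module_overview(module_of, connections)
print(ov)
# Feeder->Robot appears twice; Robot<->Camera has one edge each way,
# so a viewer could render it as a bidirectional link.
```

An edge present in both directions, like Robot and Camera here, is exactly what the talk suggests highlighting differently from a one-way connection.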
This is quite interesting. This concept was originally developed by my colleague Rick Rabiser, and we applied it, together with other concepts, in different usability studies. The core part is not only assessing the tool with users, which would be the right, light blue side, but also the medium blue side here: involving your developers and your tool concept developers in the assessment of the tool. One thing that I would especially like to point out here is the cognitive dimensions assessment. It is based on the Cognitive Dimensions of Notations framework developed by researchers in the UK; I can look that up if anyone is interested.
- Speaker #1
It was developed to help tool developers as well as language developers assess how hard it may be for users to use a tool or language. In these cognitive dimensions there are certain assessment criteria, like how much a user needs to remember, or one of my favorites, premature commitment. Premature commitment means: do I need to decide already at the beginning which path I take, or do I get different ways of working? If I made a mistake, how easy is it to repair, how easy is it to change? This drove a lot of thinking when we developed future tool concepts. One of the first things that we noticed is that, because different people develop libraries, function blocks, and applications, we regularly ended up with inconsistencies and errors. You see here some examples: someone deleted a type, renamed a type, removed a pin, changed data types. These are very prominent things that regularly happen. Tools can help us prevent them, but sometimes you need to do these steps in order to further develop your application, and if you protected the user from doing that, there would be no way to reach the final result the user would like to have. This goes into premature commitment: if the user has to do the right steps in the right order, they may never come to the right outcome. With what we are doing here on handling inconsistencies, we accept that an IEC 61499 model may be broken. It can be that pins are missing, it can be that certain things are not there. From all the information that we have in our data, we show the user as much as possible to allow repairing it, so that it is first of all readable again. In previous tools, including 4diac, broken models were just not loaded and you were lost; everything was gone, or you had to repair it manually in the XML files in these prototyping tools. This is not something a user wants to do.
We came up with a concept of classical error markers, like with a compiler: if you have a wrong C++ or Java program, the compiler tells you where you need to fix something. We are currently also investigating whether we can fix things automatically, but very often we see, from the trust point of view, that users would like to repair it themselves, or at least get informed about what is wrong. With that, we can show users: look, someone did something that was not okay, or it was intentional, but you now need to repair something. Repairing and changing code can be hard, and therefore we also have an ongoing PhD thesis looking into refactoring operations for IEC 61499. Refactoring is changing code structures to make the software easier to understand and better to maintain, without changing functionality. What we are investigating here is which of the classical refactoring operations can be applied to IEC 61499 and implemented. You see here an example screenshot where a function block, the child block, is moved out of its containing parent; all the connections are correctly updated, all the interfaces are correctly updated, and I just say "move to parent" or "move somewhere else" without having to take care of all the individual steps. That's something which really relieves the users. We also did a study on how to define refactoring operations independent of the language. You see here an example from Simulink; there was some work on Simulink doing that kind of thing, so we learned something from it. We currently have 52 refactoring operations, not all of them implemented yet, but many of them you can find in the 4diac IDE. And we found refactoring operations that we originally developed for IEC 61499 which, as we saw, could also be implemented in other languages, something for people who are implementing different languages.
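The "move to parent" operation described above can be sketched as follows: a connection into a block inside a subapplication runs through a boundary pin, and when the block is moved out, the outer and inner halves of the wire are fused into one direct connection and the now-unused pin is deleted. The model here is invented for the example, not the 4diac IDE's internal one.

```python
# Sketch of a "move to parent" refactoring: outer and inner connection
# halves that meet at a subapplication boundary pin are fused into one
# direct wire when the block leaves the subapplication. The connection
# model is invented for this example.

def move_to_parent(model, block):
    """model: {'outer': {pin: src}, 'inner': {pin: dst}, 'direct': [...]}.
    'outer' maps a boundary pin to the (block, port) feeding it from
    outside; 'inner' maps the same pin to the (block, port) inside."""
    for pin in list(model["inner"]):
        if model["inner"][pin][0] != block:
            continue
        src = model["outer"].pop(pin)            # outer half of the wire
        _, port = model["inner"].pop(pin)        # inner half, keep the port
        model["direct"].append((src, (block, port)))  # fused direct wire
    return model

model = {
    "outer": {"IN1": ("Main", "REQ")},    # Main.REQ -> SubApp.IN1
    "inner": {"IN1": ("Child", "REQ")},   # SubApp.IN1 -> Child.REQ
    "direct": [],
}
move_to_parent(model, "Child")
print(model["direct"])  # [(('Main', 'REQ'), ('Child', 'REQ'))]
print(model["outer"], model["inner"])  # both empty: pin IN1 removed
```

The user-visible effect matches the screenshot described in the talk: one gesture, and every crossing connection is rewired while the unused boundary pin disappears.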
Yeah, and with that, I'm at the end of my overview. I hope it was enjoyable. The classic question at the end: are we there yet? Unfortunately not, but still we are a big step further, so I hope you found it interesting. We would also like to get your feedback on certain features, how we can improve them, and things that you may want us to investigate. Greg also said it would be nice to announce this here: after more than six years, we are again offering the four-day Eclipse 4diac Winter School. It will be here in Linz, somewhere around where I'm currently sitting, in the LIT Open Innovation Center of Johannes Kepler University. This winter school will be all about IEC 61499, Eclipse 4diac, the Eclipse 4diac and UAO runtimes, OPC UA, and anything that you make out of it. If you find that interesting, drop us a message; on the 4diac webpage you find more information about what is already available. Registration will open soon, so I hope to see many of you there. And with that, I'm at the end of the talk. And as nowadays every event has to include something with ChatGPT, I have here a poem in the style of Goethe on IEC 61499. With that, I'm ready for your questions.
- Speaker #0
Thank you, Alois. Thank you for this great presentation and all the work which has been done on the topic. It's always interesting to have a view of where people are thinking to go, especially in terms of research, and to see the possible applications behind it. I see that we have two questions already, so let me show them directly on the screen. "Excellent research work", from Cosanda. "What is cyclomatic complexity? Please explain more." So this came up quite at the beginning of your presentation.
- Speaker #1
Yeah. Cyclomatic complexity is a complexity measure introduced quite some years ago. It assigns all the elements in your code a certain kind of complexity and adds that up. So if you have if statements, if you have nesting, if you have operands like plus, minus, multiplication, or function calls, these all add to the complexity, and the more you have, the bigger the complexity is. Cyclomatic complexity was introduced to compare and assess how much cognitive effort it takes to understand code. In the meantime, cyclomatic complexity has not kept the best reputation, and people are now going more towards cognitive complexity. Cognitive complexity is most famously presented, and I think was also developed, by Sonar, the famous code analysis company. They describe it a little differently: they focus more on nesting and hierarchies, which tell how much cognitive load this kind of code puts on your brain. I hope that helps. It doesn't make sense for me to bring up the formulas here, but it goes roughly in that direction.
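As a back-of-the-envelope illustration of the answer above, cyclomatic complexity is often approximated as one plus one point per decision keyword. The snippet below applies this simplification of McCabe's definition to a Structured-Text-like string; it is a rough sketch, not a full parser.

```python
# Rough cyclomatic complexity for a Structured-Text-like snippet:
# 1 + one point per decision keyword. A simplification of McCabe's
# definition for illustration, not a parser.

DECISIONS = {"IF", "ELSIF", "FOR", "WHILE", "REPEAT", "CASE"}

def cyclomatic(snippet):
    tokens = snippet.upper().replace(";", " ").split()
    return 1 + sum(1 for t in tokens if t in DECISIONS)

straight = "x := a + b; y := x * 2;"
branchy = """
IF sensor_ok THEN
    IF level > max THEN valve := FALSE; END_IF;
ELSIF retry < 3 THEN
    retry := retry + 1;
END_IF;
"""
print(cyclomatic(straight))  # no decisions -> 1
print(cyclomatic(branchy))   # IF, IF, ELSIF -> 1 + 3 = 4
```

Note that both a nested and a flat arrangement of the same three decisions would score 4 here; cognitive complexity, as mentioned in the answer, differs precisely by charging extra for the nesting.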
- Speaker #0
Good, thank you Alois. Next question from the same viewer. When you measured the complexity of the code, did you focus on the complexity for programming or complexity for the user for troubleshooting?
- Speaker #1
This is a hard one. I would say we mostly measure the complexity of the code that is there, so it's mostly for the people who later need to read and understand the code. I'm not sure if there is any measure for the complexity of developing the code, because developing the code requires some additional things: you need to first understand the problem, break it down into steps, and then write it down. For most development tasks, the first steps, which do not result in code but in understanding the problem, are normally the biggest complexity, and they are independent of the code. There is a measure for how much effort it is to type, which matters especially when you compare Python with Pascal or BASIC, because it depends on how much you have to type, but modern tools with code completion and code assist can help there. So I would say we definitely focus on the resulting complexity of working with the code later on. In the end, especially in automation systems, where code may be around for quite some time, the effort is definitely in maintaining, reading, and understanding. And there are studies out there showing that, also for normal code, more time is spent on reading and understanding code than on writing it.
- Speaker #0
Good, good. I see one last question came in. "Are you suggesting to use machine learning to identify repeat patterns and assess their need as well as reusability? If so, how practical is it to integrate the tool into EAE build?" So I think that's two questions in one. Let's start with the first one.
- Speaker #1
I like the question; this is a really nice one. We are definitely strongly investigating different kinds of, I would say, AI algorithms. I'm not sure about machine learning; that's a different story. This could be recommender systems, which I think would be a whole topic by itself. But yes, I definitely see that we live in a world where lots of copy-paste-modify code is produced. Repeating patterns are a thing: we have a dedicated research project, which I was allowed to present this morning in a UAO session, that is focused on variability and on how different code implementations are similar or not. IEC 61499, because of its mixture of text in the algorithms and graphical programming languages, needs different ways of doing this assessment: a purely textual approach does not work very well, a purely graphical one does not either, and therefore a mixture using graph neural networks and similar things is definitely the way to go. We just had a paper last week on that topic; Annalena Hager, one of our AI students, is working in that direction. Then we could investigate whether the algorithms better understand what our code is doing and go to the next step, towards recommender systems: "I see that you are doing a certain thing; you already have a block in your library. Would you like to use the library block or not?" But I don't know how fast we will get there, or if that's possible. For integrating our tools into EAE build: the AI and graph assessment tools are currently separate tools; they just work on the IEC 61499 XML, so they would be independent of any tool. And I currently have no idea how well they scale. So maybe these kinds of tools would work better in build and integration tools, where you have a build server, where every night your whole code base is analyzed, and then you get dashboards and can investigate.
From personal experience and also from studies, I know that the quicker the feedback to the user, the better the code quality, but that's something which is hard to say. Maybe EAE gets connected to the build server and shows problematic code with error markers or something like that. This could be something to be investigated. I'm also happy to discuss collaborations on that with the EAE build developers.
- Speaker #0
Yes, sounds great.
- Speaker #1
Great way I wrote.
- Speaker #0
Yeah, that is more and more coming anyway. That was it for the questions for today. We had a very great and very detailed session, so that's very interesting. For the viewers, you can see the details for Alois directly written on the screen, so feel free to reach out. I think his LinkedIn page is also linked to the event, so don't hesitate to reach out to him if you have any additional questions. For the rest, we'll close the session for today. Thank you for being with us, and see you soon at the next session of UAO Goes Live.