
Why there is no magic wand for deploying AI and Machine Learning


Karl Walker, Market Development Manager at Beckhoff, explains why AI is an evolutionary process and how Machine Learning can be profitably deployed.

New technologies are often met with extreme viewpoints: people either have reservations because they lack experience or knowledge, or they are so enthusiastic that they want to use them to solve every challenge that was previously tackled inadequately or not at all.

Artificial Intelligence (AI) is the most general set of approaches that enables a machine or process to simulate human behaviour and decision-making. Machine Learning (ML) is a subset of AI in which a model is trained from past data; once taught, the model can make inferences about current data without being explicitly programmed. The ultimate goal of AI is to create a complete system that solves complex problems in a human-like manner.

In the world of automation controllers for machinery, manufacturing processes or buildings, ML is starting to make an appearance. However, not every application will have an optimal solution from the outset; ML is an evolutionary process, where newly relevant data needs to be identified and evaluated as the process continues to run, allowing the ML model to be continuously improved.

A collection of experts

A Machine Learning project requires teamwork and the knowledge of different experts. The team leader is typically the ‘domain’ expert: the person who understands the process as a whole, knows the parameters which affect it, and generally identifies the challenge to be mastered using ML. The data scientist, who focuses mainly on the data analysis, must define the essential process parameters that play a role in reaching the goal; a data scientist working alone, without feedback to and from the domain expert, cannot operate adequately. Additionally, other individuals may have niche insights into specific events or conditions that result in anomalies in the process and could, for instance, affect product quality. All of this knowledge is vital.

Machine Learning is always based on sample data that is used to teach a ‘model’. The training techniques differ mainly in whether the training data is ‘labelled’ or ‘unlabelled’. If the data is labelled, each input in the training set is paired with the output expected from it; in other words, the training is based on concrete examples and their outcomes. If the data is unlabelled, the output information is missing and the algorithm is limited to finding internal, abstract relationships in the inputs. This is where ML is used to fill the gaps, for example by detecting anomalies arising from a given set of input conditions.
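As an illustration only, the contrast between the two approaches might look like the following Python sketch using scikit-learn; the sensor data, labels and thresholds are hypothetical placeholders rather than values from any real process.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, IsolationForest

# Hypothetical process data: each row is one production cycle,
# columns are sensor readings (e.g. temperature, pressure, vibration).
X = np.random.rand(1000, 3)

# Labelled case (supervised): a quality outcome is known for each cycle,
# so the model learns to map inputs to that known outcome.
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)    # placeholder "pass/fail" label
clf = RandomForestClassifier(n_estimators=100).fit(X, y)
print(clf.predict(X[:5]))                     # predicted quality for new cycles

# Unlabelled case (unsupervised): no outcome is recorded, so the model
# can only look for internal structure, here flagging unusual cycles.
detector = IsolationForest(contamination=0.01).fit(X)
print(detector.predict(X[:5]))                # +1 = normal, -1 = anomaly
```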

Workflow: from data to the AI model

The fundamental idea behind ML is to no longer follow the classic engineering route of designing solutions for specific tasks and then turning those solutions into algorithms, but rather to enable the desired algorithms to be learned from sample data instead.

The task of data collection generally falls within the realm of automation specialists. They know the control architecture and the general conditions on the shop floor, and they are optimally equipped with the tools to carry out their work efficiently and in line with the needs of the situation.

Models are trained on the supplied data in frameworks such as PyTorch, TensorFlow and SciKit-Learn. These frameworks, which are well established in the data science community, are generally open source and can be used free of charge. This assures maximum flexibility and imposes no limits on an interdisciplinary project between automation engineers and data scientists, whether within the company or across company boundaries: each team member can work in their own familiar environment.
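By way of illustration, a data scientist might prototype a model in one of these frameworks along the following lines. This is a minimal PyTorch sketch with placeholder data, layer sizes and targets; it is not a recipe tied to any particular process.

```python
import torch
import torch.nn as nn

# Placeholder training data: 3 process parameters in, 1 quality value out.
X = torch.rand(1000, 3)
y = X.sum(dim=1, keepdim=True)          # hypothetical target value

# A small feed-forward network as the model to be trained.
model = nn.Sequential(nn.Linear(3, 16), nn.ReLU(), nn.Linear(16, 1))
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(200):                # simple training loop
    optimiser.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimiser.step()
```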

These trained ML models can then simply be exported from the ML framework into the standardised Open Neural Network Exchange (ONNX) format and handed over to the ML execution software, where they are deployed to optimise the control process while larger and newer data sets are used to improve it further. For instance, Beckhoff’s TF3800 Machine Learning Inference Engine and TF3810 Neural Network Inference Engine software libraries for its PC-based control systems execute the trained models in real time for tasks such as product anomaly detection, quality prediction and automatic machine or process parameter tuning.
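As a rough sketch of that hand-over, a model trained in PyTorch could be written out with the standard ONNX exporter; the model, file name and input shape below are placeholders, and loading the resulting file into an inference engine such as TF3800/TF3810 is then done through the respective engineering tools rather than in Python.

```python
import torch
import torch.nn as nn

# Placeholder for a trained model (e.g. the network from the sketch above).
model = nn.Sequential(nn.Linear(3, 16), nn.ReLU(), nn.Linear(16, 1))

# The exporter traces the model with a dummy input of the expected shape
# and writes a standard ONNX file that an inference engine can load.
dummy_input = torch.rand(1, 3)
torch.onnx.export(model, dummy_input, "process_model.onnx",
                  input_names=["process_parameters"],
                  output_names=["predicted_quality"])
```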

ML models have the property of improving when trained on larger sets of data. Likewise, general conditions can change gradually or abruptly while the machine is in operation. To take account of this, trained ML models can be updated during the life of the machine: without stopping it, without recompilation, and entirely remotely via the standard IT infrastructure.

There is no ‘magic wand’ when it comes to AI. The process, and the parameters that affect its outputs, must be deeply understood first.

Karl Walker
