
SMARTHANDLE aims to investigate and develop technologies that can effectively support European industry. Production lines, both manual and automated, need to progress towards the goal of producing a greater variety of products with fewer resources. This requires addressing key challenges such as product variations, insufficient autonomous reasoning and adaptable control, and inefficient planning systems.
As manual labor becomes more expensive and skilled workers become increasingly scarce in Europe, factories seek to automate as many production processes as possible. Shop floor space must be used efficiently, processes should ideally run 24/7, and the often expansive classical feeder systems should be replaced by pick-from-bin processes. Many of these processes require Vision Guided Robotics (VGR).
The smart coupling of advanced machine learning algorithms and classical image processing methodologies is key to efficiently advancing vision-based automation solutions. Even in industrial production, where automation is well advanced for standardized, repetitive tasks, accurate placement remains a challenge. If anything changes with regard to the items to be handled (e.g. position, packaging, colour), the processes, or the production environment, complex adjustments of the automation solutions are required.
In short: today’s robots are not smart enough for the next level of Industry 4.0. To support flexible automation, robots must be able to reliably detect and locate objects and human collaborators under varying illumination, workpiece types, and locations, as the engineering of individual solutions is often costly and typically does not scale.
3D robot vision is key to achieving the increased flexibility that users across the various domains are looking for. Enriched with applied AI methods, more complex automation problems can be solved with minimal shop floor usage and increased flexibility. Multiple processing pipelines can be configured and easily calibrated on the robot system. Machine learning approaches reduce the parameterization effort significantly, helping to fulfill the requirements of classical industrial automation as well as, e.g., lab automation, agile production, or logistics.
To minimize the effort on the user’s side, some of today’s systems are designed for intuitive use even by non-experts. Machine learning helps reduce the number of parameters involved in deploying a robot vision application in the factory, lowering integration cost massively. As a next step, the minimization or even elimination of on-site training effort is tackled. Fully in line with the current trend of stepping away from big data (characterized purely by the sheer amount it comes in) and turning towards good data (data that is “defined consistently, covers the important cases, has timely feedback from production data, and is sized appropriately”*), synthetic data is used to simulate processes ahead of time rather than training live on the shop floor.
The generation of such synthetic data from model data results in highly realistic ground-truth training data sets for machine learning. With these, robotic depalletizing, singulation, or bin picking applications for known or unknown objects can be implemented with minimal on-site training time.
Training data sets for object localization generated from model data for known objects include the ground truth needed for robotic pick and, especially, place operations. No manual labelling or pose estimation on images is required. These two tasks account for the majority of the (mostly manual) effort in preparing a training data set, and they are prone to human error.
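The reason ground truth comes for free in simulation is that the object pose is known exactly, so image annotations can be computed rather than drawn by hand. The following sketch illustrates the principle (it is not Roboception's actual pipeline): projecting the corners of a known object through a standard pinhole camera model yields a 2D bounding-box label automatically. The intrinsics, pose, and object size are hypothetical example values.

```python
import numpy as np

def project_points(points_3d, R, t, K):
    """Project 3D model points (Nx3, object frame) into pixel coordinates,
    given a known object pose (rotation R, translation t) and pinhole
    camera intrinsics K."""
    cam = points_3d @ R.T + t          # object frame -> camera frame
    uvw = cam @ K.T                    # camera frame -> image plane
    return uvw[:, :2] / uvw[:, 2:3]    # perspective division

# Hypothetical setup: a 0.4 m cube, 2 m in front of the camera.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                          # in simulation the pose is known exactly
t = np.array([0.0, 0.0, 2.0])
corners = np.array([[x, y, z] for x in (-0.2, 0.2)
                              for y in (-0.2, 0.2)
                              for z in (-0.2, 0.2)])

px = project_points(corners, R, t, K)
# A 2D bounding-box label follows directly, with no manual annotation:
bbox = (px.min(axis=0), px.max(axis=0))
```

The same projection gives per-pixel masks or 6D pose targets when driven by a full renderer; the key point is that every label is derived from the simulation state instead of manual image annotation.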
Synthetic data does not need to be labelled and can be generated off-site, fully automatically. Sensor, light, and object characteristics are also modelled, so fully automatic training for pose estimation can be achieved. The pictures show various simulations of objects in different environments. Synthetic images can be supplemented with real data if needed, without any modification of the training process, to make the data set even more realistic.
As a partner in SMARTHANDLE, Roboception focuses on VGR and the combination of advanced machine learning algorithms with classical image processing methodologies to create more flexible automation solutions. Roboception aims to minimize on-site training time by using synthetic data generated from model data, which yields highly realistic ground-truth training data sets for machine learning. This approach reduces the cost and effort of data preparation, which is prone to human error, ultimately supporting users in implementing flexible automation solutions.