Whitepaper

Global industry is facing an exciting time, and Tollenaar Industries is in the middle of it. The (r)evolution of Industry 4.0 is in full swing, driven by a market with continually rising expectations and tasks that demand increasingly complex solutions. Highly varied products are being produced in ever smaller series and within ever shorter lead times. In addition, quality and price levels are under constant pressure from global competition.

Smart Industry

In answer to this, suppliers are coming up with clever solutions based on automation, robotisation and digitisation: Smart Industry. Flexible automation is the key to meeting ever-growing, unforeseeable customer expectations. This requires robots capable of performing more than pre-programmed tasks, individually and autonomously. These intelligent robots must be able to process different products in unstructured environments, without the usual long changeover times.

The Holy Grail

Crucially, these robots must not only be capable of performing the desired movements and grabbing products; they also have to 'see' the products within their environment in order to process them correctly. In addition to motion control and grippers, robots need vision. The ability to see allows a robot to pick, for example, unsorted products out of a box or off a pallet. This technique, called bin picking, is the 'Holy Grail' of flexible automation.

Robot vision

Bin picking and other forms of flexible production that require 3D vision fall under the term Vision Guided Robotics. Based on our vision of the future of industrial automation, Tollenaar Industries started a new company: Teqram. Teqram has a strong focus on process automation, bin picking and Vision Guided Robotics, and to that end we have developed our own control system, software platform and robot system.

Robotics

The rise of robots

The phenomenon of 'robots' has been around for almost 100 years and gained fame among the general public through science-fiction films. The first industrial robots appeared in the 1960s; they were very basic machines, used in the automotive industry. Around the same time, space travel took off, and robots were soon used for remote-controlled (tele-operated) operations on space stations and for autonomous vehicles on the moon.

Today, robots are common in modern households (vacuum cleaners) and in the medical world (surgical robots), and they are seen as a possible answer to staff shortages in healthcare.

Programmable machine

A robot is a programmable machine, able to perform tasks with the help of various sensors and actuators. Robots range in shape from static manipulators (robot arms) and mobile versions on wheels to humanoid robots, complete with arms, legs, a head and a 'brain' for which artificial intelligence (AI) is being developed. Often the robot is fitted with one or more grippers to manipulate objects.

Application within industry

In today's industry, robots are used for a wide range of product-handling tasks that used to be done by humans. The shift from humans to robots can be explained by three factors:

  • Ergonomics
    Tasks are dirty or physically very demanding. The same goes for tasks that are dull and repetitive to perform.
  • Quality
    There is no such thing as a 'bad day' for a robot. A robot can work 24/7 with high, constant accuracy and can maintain the required quality standard regardless of the number of products.
  • Economics
    Process automation also brings economic benefits, such as reduced labour costs, and offers a solution to the shortage of qualified personnel.

In addition to robot arms, used for example for welding automation in the automotive industry, another well-known type is the delta robot, which performs pick-and-place operations such as packaging food and non-food products.

Design

When it comes to the design of industrial robots, the 'degrees of freedom' are a guiding factor. Each robot has its own number of degrees of freedom, translations and/or rotations (also referred to as axes), which determine the range of motion of a certain point, for example the point where the gripper connects to the robot arm.

The degrees of freedom are driven, individually or in combination, by actuators, usually electric motors. Positioning accuracy, speed and force are all determined by the robot controls.
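
To illustrate how the degrees of freedom determine the range of motion, consider a minimal Python sketch of the forward kinematics of a hypothetical planar arm with two rotational axes (the link lengths and joint angles are assumptions, not values from an actual robot):

    import math

    # Hypothetical planar arm with two rotational degrees of freedom.
    L1, L2 = 0.40, 0.30  # assumed link lengths in metres

    def end_effector_position(theta1, theta2):
        """Return the (x, y) position of the point where the gripper
        attaches. Each additional degree of freedom adds a term like
        these, enlarging the reachable workspace."""
        x = L1 * math.cos(theta1) + L2 * math.cos(theta1 + theta2)
        y = L1 * math.sin(theta1) + L2 * math.sin(theta1 + theta2)
        return x, y

    print(end_effector_position(math.radians(30), math.radians(45)))

Sweeping the two angles over their full ranges traces out the arm's reachable workspace; adding axes extends it into three dimensions.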

Controls

The robot is controlled through a combination of software and information collected by a range of sensors. These sensors gather information such as the position and orientation of the robot arm, but also measure, for instance, the force a gripper exerts.
The control system can adjust its actions based on this information, after which the sensors show the (new) result of the action. This is called feedback-based control. In many cases, (visual) information about the objects to be manipulated is indispensable; an example is the bin picking of unsorted products out of a box.
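
The following is a minimal sketch of this feedback principle (the numbers and the simple response model are assumptions for illustration): the controller repeatedly compares the measured gripper force with a setpoint and corrects the actuator command.

    # Feedback-based control in miniature: sensor reading -> error ->
    # corrected actuator command -> new sensor reading, and so on.
    target_force = 20.0   # desired grip force in newtons (assumed)
    force = 0.0           # simulated force-sensor reading
    command = 0.0         # actuator command
    gain = 0.4            # controller gain (a tuning parameter)

    for step in range(25):
        error = target_force - force         # compare sensor with setpoint
        command += gain * error              # adjust the action
        force = 0.8 * force + 0.2 * command  # crude model of the gripper
        print(f"step {step:2d}: force = {force:5.2f} N")

In a real controller the 'crude model' line is replaced by the physical gripper and its force sensor; the structure of the loop stays the same.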

Vision

Machine vision

Image information can be obtained with a digital camera (a camera with a pixel-based image sensor) and then analysed with image-processing software. To determine the shape and orientation of objects, visible light is usually sufficient, although other parts of the electromagnetic spectrum can also provide valuable information. This technique is known as 'computer vision'; within industrial applications it is called 'machine vision'.

Camera

The image sensor, with its number of pixels, determines the resolution of the camera. In combination with the optics, this sets the length and width of the scene that can be imaged, and thus the number of pixels covering each millimetre of the object: the amount of detail that can be resolved. The optics (a combination of lenses, aperture, focus and so on) determine the depth of field of the camera: how close and how far away sharp images are possible. With a single camera, only a flat, two-dimensional image of one view of a stationary object can be captured.
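
A quick back-of-the-envelope calculation shows the relationship between sensor resolution, field of view and resolvable detail (all numbers are assumptions):

    # How many pixels cover one millimetre of the object?
    sensor_px_x = 2048        # assumed horizontal sensor resolution
    field_of_view_mm = 400.0  # assumed scene width seen by the optics

    px_per_mm = sensor_px_x / field_of_view_mm
    print(f"{px_per_mm:.1f} px/mm")  # ~5.1 px/mm here

    # Rule of thumb: a feature should span at least 2 pixels to be detected.
    print(f"smallest detail ~ {2 / px_per_mm:.2f} mm")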

3D-Vision

To create a three-dimensional image, one or more cameras are required, together forming a 3D vision system.

If the dimensions of the object to be imaged are not known, its exact position cannot be derived from the images alone; additional information about the distance to the object is necessary. This can be obtained with the 'time-of-flight' method, which measures how long it takes for emitted LED or laser light to return after reflecting off the object. In time-of-flight cameras this measurement is integrated: in addition to length and width, the image depth (distance) can be determined. Another way to generate 3D vision is triangulation: a laser projects a line onto the object while it moves, allowing a camera to view the laser line at a different angle, from which a contour of the object can be calculated.

By combining the line scans, a reconstruction of the object in the form of a point cloud is created. Instead of moving the object, the camera and scanner can also be moved.
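
The triangulation principle can be sketched in a few lines of Python (the geometry is simplified and the numbers are assumptions): the sideways shift of the laser line in the image encodes the height of the object at that point.

    import math

    camera_angle = math.radians(30)  # assumed angle between laser plane and camera view
    mm_per_px = 0.2                  # assumed image scale at the working distance

    def height_from_shift(shift_px):
        """Convert the observed shift of the laser line (in pixels)
        into the object height (in mm) at that point."""
        return shift_px * mm_per_px / math.tan(camera_angle)

    # One scan yields one height profile; stacking the profiles while the
    # object (or the camera + scanner) moves yields the point cloud.
    profile = [height_from_shift(s) for s in (0, 3, 12, 12, 3, 0)]
    print(profile)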

Capturing images

A vision system includes, next to the camera, a frame grabber, which ensures that recordings are made and stored in random-access memory (RAM). This is done at a certain frame rate (images per second), typically a few dozen. Often the frame grabber can already process the images, for example by compressing them. In principle all raw data, including the information of every pixel, could be stored, but because storage and processing capacity remain limited, the images are usually compressed immediately, discarding irrelevant information. Common compression formats are, for example, MPEG-2 and JPEG.
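
As a minimal sketch of this capture step, the following Python fragment uses the widely used OpenCV library (the camera index and file names are assumptions): frames are read into RAM and stored with lossy JPEG compression.

    import cv2

    cap = cv2.VideoCapture(0)        # open the first attached camera
    for i in range(10):              # grab a short burst of frames
        ok, frame = cap.read()       # raw frame into RAM
        if not ok:
            break                    # camera not available or stream ended
        cv2.imwrite(f"frame_{i:03d}.jpg", frame)  # JPEG compression on write
    cap.release()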

Image analysis

Various software tools are available that contain algorithms for image processing. Often the first step is filtering the image: to suppress noise, sharpen it or correct the contrast for specific lighting conditions. Surface properties of the object, such as roughness or texture, can affect the image because they influence the reflection of light. Colour can also be filtered, for example to distinguish certain parts of the image.

The next step is to recognise features of the object. 'Edge detection', for example, is an important technique for determining shape. Pattern recognition is also commonly used: specific patterns or shapes such as barcodes or QR codes can be detected, and individual actions can be assigned to them.

Finally, an object, together with its exact position and orientation, can be recognised by comparing it with reference images. In the case of quality control, approval or rejection then takes place based on the deviation between reference and measurement.
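
The steps above can be strung together in a short, illustrative OpenCV pipeline (the file names are placeholders, and real applications add calibration and error handling): filter, detect edges, and locate the object by comparing it with a reference image.

    import cv2

    image = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
    template = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)

    blurred = cv2.GaussianBlur(image, (5, 5), 0)  # suppress noise
    edges = cv2.Canny(blurred, 50, 150)           # edge detection for shape

    # Compare against the reference image: the best match gives the
    # position of the object in the scene.
    result = cv2.matchTemplate(blurred, template, cv2.TM_CCOEFF_NORMED)
    _, score, _, location = cv2.minMaxLoc(result)
    print(f"object found at {location} (match score {score:.2f})")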

Speed is crucial for real-time image processing. It is gained mainly through the computing power and memory capacity of the processors, and through the speed and structure of the algorithms.

System integration

Specification

The implementation of a vision guided robotics system starts with problem definition and specification.

The most important elements in the specification are the following (see the sketch after this list):

  • Product properties
    Mass, shape, surface structure and texture, dimensions, tolerances, etc.
  • Configuration of the current production line
    Which operations have to be handled by the robot?
  • Environment
    Available space, light, dust, etc.
  • Processing speed
  • Fault tolerance / maximum error rate
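
These specification elements can be captured in a simple data structure, as in the following sketch (the field names, units and example values are illustrative assumptions):

    from dataclasses import dataclass

    @dataclass
    class VgrSpecification:
        mass_kg: float          # product properties
        dimensions_mm: tuple    # length, width, height
        surface: str            # e.g. "machined", "cast", "reflective"
        line_tasks: list        # operations the robot must handle
        workspace_m2: float     # available space in the cell
        cycle_time_s: float     # required processing speed
        max_error_rate: float   # fault tolerance

    spec = VgrSpecification(2.5, (120, 80, 40), "machined",
                            ["unload press", "stack pallet"], 6.0, 8.0, 0.001)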

In addition, the implementation process also includes specifications in terms of budget, time schedule, available internal project capacity and more.

Design

Based on all specifications, a blueprint of the complete system is made. The output of the image processing (the position and orientation of an object, in the case of bin picking) is used as input for the robot control (control of the axes and grippers).

Where possible, and with cost, lead time and reliability in mind, existing components (robots, cameras, other sensors, control and processing units, software) are selected. The main objective is to integrate proven technology into a well-functioning system. Many robot suppliers offer vision systems that can be integrated with their robots.
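
In outline, the integration glue can be as simple as the following Python sketch (the vision and robot interfaces are hypothetical placeholders, not an actual supplier API):

    def bin_picking_cycle(vision_system, robot):
        """One pick cycle: vision output becomes robot-control input."""
        pose = vision_system.locate_object()  # position + orientation from images
        if pose is None:
            return False                      # nothing recognisable in the bin
        robot.move_to(pose)                   # drive the axes to the object
        robot.close_gripper()
        robot.move_to_dropoff()
        robot.open_gripper()
        return True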

Installation

The final phase is installation and programming. The integrated system design is built and adjusted to the actual application and production environment.

The hardware and the control software are installed and calibrated where necessary. The image-processing software is parametrised for the exact conditions under which the recordings will be made. If necessary, the system is first taught in using known, precisely defined test situations. Once the results have been validated against the specifications, the system can be delivered.
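
Part of this calibration is relating what the camera sees to the robot's own coordinate frame, often called hand-eye calibration. In miniature (the matrix values are placeholders that the calibration procedure would determine):

    import numpy as np

    # Homogeneous transform from camera coordinates to robot coordinates;
    # rotation and translation (in mm) are assumed example values.
    T_robot_from_camera = np.array([
        [0.0, -1.0, 0.0, 350.0],
        [1.0,  0.0, 0.0, -80.0],
        [0.0,  0.0, 1.0, 120.0],
        [0.0,  0.0, 0.0,   1.0],
    ])

    point_camera = np.array([25.0, 40.0, 500.0, 1.0])  # homogeneous point
    point_robot = T_robot_from_camera @ point_camera   # usable by the controls
    print(point_robot[:3])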

Problems and solutions

Safety

As with any other robot application, safety is an important point of attention in vision guided robotics. Traditionally, industrial robots were placed behind a fence and would automatically switch off when the gate was opened or another disruption occurred. The trend, however, is for industrial robots to operate ever closer to humans and even to collaborate with them, for example when handling heavy objects. Such robots have to be intrinsically safe: their controls have to react to signals from sensors that can detect unwanted interaction with a person.

Think of force sensors, but also of vision. In vision guided robotics the existing vision system can play a role in this, although additional vision is probably needed to monitor a wider area. The advantage, in any case, is that experience with the use of vision is already present.
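
In outline, such a protective reaction is a simple check in the control loop, as in this hypothetical sketch (the sensor and robot interfaces are placeholders):

    FORCE_LIMIT_N = 50.0  # assumed maximum permissible contact force

    def safe_to_continue(force_sensor, person_detector, robot):
        """Halt motion as soon as any sensor reports unwanted interaction."""
        if force_sensor.read() > FORCE_LIMIT_N or person_detector.person_in_zone():
            robot.stop()  # immediate protective stop
            return False
        return True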

Environmental conditions

Other issues in vision guided robotics concern the environmental conditions under which the robot and vision system have to function. Large variations (for example, in the degree to which daylight falls directly on the scene) can have substantial consequences for the vision system.

In many industrial environments, lighting is relatively poor, and the illumination of the object to be measured is therefore insufficient. This complicates the analysis of the images. In addition, industrial environments are often polluted, for example as a result of previous processing of products (chips, oil film, etc.). This does not make image processing any easier either. The better the environmental conditions are controlled, the higher the chance of a successful vision implementation.

Robust Software

The causes mentioned above make the robustness of the image-processing software a major challenge.

It starts, in addition to the choice of appropriate hardware, with the selection of the right software package for the application at hand. Subsequently, the vision application has to be programmed very carefully. This involves, among other things, determining the correct threshold levels (discrimination levels) when filtering the images: stripping the images of noise and giving them more 'clarity' without discarding any relevant information. The parameters in the software must also be set in such a way that variations in environmental conditions can be handled by the vision system.
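
Instead of a fixed discrimination level, the threshold can also be derived from the image itself, which makes the result less sensitive to lighting variations. A minimal OpenCV sketch using Otsu's method (the file name is a placeholder):

    import cv2

    gray = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)  # strip noise before thresholding

    # Otsu's method chooses the threshold from the image histogram.
    level, binary = cv2.threshold(gray, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    print(f"chosen threshold: {level}")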

Investment level

In addition to the technical challenges, other kinds of barriers can impede the successful introduction of vision guided robotics. Psychologically, there is resistance to change, and possibly resistance to manual work being replaced by automation.
A large investment is required, in both time and money, which is not needed to simply continue (low-productivity) 'business as usual' with manual work. The cost of software should not be underestimated either: besides the licence costs of common packages, it is mainly the effort of programming and teaching in that the entire system requires for a specific application.

Recapitulation and future

Applications

Vision guided robotics can be used in various industrial operations. Examples in the automation of metal and plastic products are the loading and unloading of machines, tool changing on those same machines, welding or soldering workpieces, press-brake bending of sheet material, and so on. As mentioned earlier, bin picking is considered the 'holy grail': the ability to select and pick completely randomly ordered products at high speed. Such applications demand an intelligent vision guided robotics system.

Benefits

The advantages of vision guided robotics are manifold. The technique is a perfect solution where human labour is unsuitable, where qualified personnel are in short supply, or where labour is simply too expensive. Flexibility in the production process increases because multiple products can be processed with one robot program; after all, the system can recognise and process different products with universal grippers. Less tooling is therefore needed, and if desired, grippers can be changed quickly.

Flexible automation with vision guided robotics also makes it easier to integrate multiple production steps into a single workstation. All this increases productivity. Better insight into product and production also offers new possibilities for quality control; think of visual input and/or output inspection. This greatly increases product quality.

Future

In the near future, vision guided robotics will find more applications within industry. Ever more production steps will be automated, and vision will become indispensable to ensure they function well. Hardware innovation will of course continue, and as a result the systems will perform better and become suitable for even more applications.

But the most important innovations will be in the area of software and big data: the technology for processing large amounts of data, such as images. Add to this the progress in the field of autonomous vehicles, which can literally extend the reach of vision guided robotics.

In short, the future of vision guided robotics is looking bright. This applies especially to companies already working with this technology.