From input to output: Machine Vision explained
It seems simple: a robot moving randomly placed objects from one pallet to another. A camera captures the different parts, and clever software decides what to do with them. That, in short, is the basis of Machine Vision. To set all this in motion, however, a lot of data needs to be analysed. The principle of Machine Vision can be broken down into four steps: gathering the required information, such as images; extracting information from those images; analysing that information; and finally communicating the next steps to the machine or software.
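As a rough illustration, the four steps can be sketched as a toy pipeline. Everything here, the grid-of-numbers image format, the brightness threshold and the function names, is an assumption made for the sake of example and not part of any real vision product:

```python
# A minimal, illustrative sketch of the four-step Machine Vision pipeline.
# The toy image format (a grid of brightness values) and all function
# names are assumptions for illustration only.

def gather():
    """Step 1: capture an image. Here we fake a 4x4 grayscale frame
    in which bright pixels (value >= 128) belong to a part."""
    return [
        [0,   0,   0,   0],
        [0, 200, 210,   0],
        [0, 190, 205,   0],
        [0,   0,   0,   0],
    ]

def process(image, threshold=128):
    """Step 2: extract information: the bounding box of bright pixels."""
    coords = [(r, c) for r, row in enumerate(image)
                     for c, v in enumerate(row) if v >= threshold]
    rows = [r for r, _ in coords]
    cols = [c for _, c in coords]
    return {"top": min(rows), "left": min(cols),
            "bottom": max(rows), "right": max(cols)}

def analyse(box, max_size=3):
    """Step 3: decide whether the part is small enough to pick."""
    height = box["bottom"] - box["top"] + 1
    width = box["right"] - box["left"] + 1
    return height <= max_size and width <= max_size

def communicate(box):
    """Step 4: turn the decision into a command for the robot."""
    cx = (box["left"] + box["right"]) / 2
    cy = (box["top"] + box["bottom"]) / 2
    return f"PICK at ({cx}, {cy})"

box = process(gather())
if analyse(box):
    print(communicate(box))  # PICK at (1.5, 1.5)
```

A production system replaces each of these toy functions with far more capable machinery, but the flow of data from camera to robot command stays the same.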
1: Gathering the required information
An essential part of collecting images is the right camera. The camera captures the images and acts as the input device for the machine. A high-quality camera is crucial for processing the images at a later stage. With a low-quality camera you may face problems such as surface reflections on metal objects, shadows, uneven lighting, motion blur or poor resolution.
The easyEye® camera from Teqram tackles all these problems with a lens with 10x optical zoom and freedom of movement of up to 270° horizontally and 150° vertically. Images are recorded at high speed and stored in high resolution.
2: Extracting information from images
The next step is to extract the needed information from the images. For example, a machine vision system must be able to determine the location, dimensions, shape, colour and reachability of objects placed on a pallet, all from an image. In general, a robot would stop working if any of these factors changed.
Using the technique developed by Teqram, however, a robot is capable of bin-picking metal objects, for instance, and continues to work regardless of product dimensions or changing environments. This development is evolving rapidly, and the big challenge lies in transferring artificial intelligence (AI) and deep-learning technologies to applications in industrial environments.
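One classical way to find each part's location and dimensions in a segmented image is connected-component labelling. The sketch below is a generic textbook approach, not Teqram's method; the binary grid format is an assumption for illustration:

```python
# Hypothetical sketch of step 2: finding each part's location and size
# in a binary image via connected-component labelling (flood fill).

def find_parts(image):
    """Return (row, col, height, width) for each connected bright region."""
    rows, cols = len(image), len(image[0])
    seen = set()
    parts = []
    for r in range(rows):
        for c in range(cols):
            if image[r][c] and (r, c) not in seen:
                # Flood-fill this region to collect all of its pixels.
                stack, region = [(r, c)], []
                seen.add((r, c))
                while stack:
                    y, x = stack.pop()
                    region.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and image[ny][nx] and (ny, nx) not in seen):
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                ys = [y for y, _ in region]
                xs = [x for _, x in region]
                parts.append((min(ys), min(xs),
                              max(ys) - min(ys) + 1, max(xs) - min(xs) + 1))
    return parts

frame = [
    [1, 1, 0, 0, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 0, 1, 0],
]
print(find_parts(frame))  # [(0, 0, 2, 2), (1, 3, 2, 1)]
```

Real bin-picking systems work on far richer data (3D point clouds, learned detectors), but the basic question answered is the same: where is each part, and how big is it?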
3: Analysing the extracted information
With the right products and objects identified, the next step is to analyse the collected data. Can this part be picked up? How much force is needed to pick it up? Is this the right product? Could this part be stacked onto another part? Does this part meet the requirements? To answer such questions, Teqram relies on smart, modular software for deep-learning algorithms: our easyFlexibleFramework. For instance, this software can compare identified objects to CAD/CAM models in real time, check for variations and confirm that the correct product has been selected.
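One simple form such a check can take is comparing measured part dimensions against the CAD reference within a tolerance. The sketch below is a generic illustration of the idea, with made-up dimensions and a made-up tolerance, and does not reflect how easyFlexibleFramework is implemented:

```python
# Illustrative sketch of step 3: checking measured part dimensions
# against a CAD reference within a relative tolerance, as one way to
# confirm the correct product was detected. All values are assumptions.

def matches_cad(measured, reference, tolerance=0.02):
    """True if every measured dimension is within `tolerance`
    (relative) of the corresponding CAD reference dimension."""
    return all(
        abs(measured[key] - reference[key]) <= tolerance * reference[key]
        for key in reference
    )

cad = {"length_mm": 120.0, "width_mm": 45.0, "height_mm": 10.0}
good = {"length_mm": 119.2, "width_mm": 45.4, "height_mm": 10.1}
bad = {"length_mm": 131.0, "width_mm": 45.0, "height_mm": 10.0}

print(matches_cad(good, cad))  # True
print(matches_cad(bad, cad))   # False
```

A real comparison against a CAD/CAM model would match full 3D geometry rather than a handful of scalar dimensions, but the pass/fail decision has the same shape.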
4: Communicating data to the machine
After analysing the data, the next component in the process, such as external software or a robot arm, needs to know what to do with the object. In most cases this means moving the object to another location, such as onto a pallet or a machine. An important development currently being implemented in this step is the optimisation of processing time: a robot with machine vision must be able to repeat the same task endlessly and as quickly as possible. The software technology developed by Teqram is capable of this. A robot arm equipped with our easyEye® vision system and easyFF software is, thanks to deep-learning and self-learning algorithms, capable of performing independent tasks faster and more efficiently.
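To make the hand-off in this final step concrete, the sketch below packages an analysis result as a pick-and-place command. The JSON message schema is purely an assumption for illustration; every real robot controller defines its own protocol:

```python
# Hypothetical sketch of step 4: packaging the analysis result as a
# pick-and-place command for a robot controller. The message schema
# is an assumption for illustration only.
import json

def make_pick_command(part_id, pick_xyz, place_xyz, grip_force_n):
    """Serialise one pick-and-place instruction as JSON."""
    return json.dumps({
        "command": "pick_and_place",
        "part_id": part_id,
        "pick": {"x": pick_xyz[0], "y": pick_xyz[1], "z": pick_xyz[2]},
        "place": {"x": place_xyz[0], "y": place_xyz[1], "z": place_xyz[2]},
        "grip_force_n": grip_force_n,
    })

msg = make_pick_command("part-7", (412.5, 180.0, 35.0),
                        (900.0, 250.0, 40.0), 25.0)
print(msg)
```

Whatever the actual transport (fieldbus, TCP, a vendor API), the content is the same: which part, where to grasp it, where to put it, and how firmly to hold it.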