Report from the Vision Show in Stuttgart Last Month

November’s Vision show in Stuttgart felt like a key moment of change for the industry. Four big themes came to fruition at the event, at least two of which felt like science fiction at the last Vision show in 2016. In fact, the show’s strapline, ‘Be Visionary’, has never felt so appropriate. Here, Stephen Hayes, managing director of Beckhoff Automation UK, explores the top trends to emerge from Vision 2018.


With Beckhoff showcasing its TwinCAT Vision product at SPS Drives, now seems like a good time to explain how the product fits into the themes that dominated the Vision show, as well as into the wider manufacturing landscape, which is still shaped by Industry 4.0.

Vision 2018’s biggest themes were hyperspectral imaging, deep learning, autonomous machine vision (AMV) and embedded vision. The irony is that the objective of all these things, which sound pretty high-tech and high-concept, is to make things easier for the manufacturer applying the technology.

When Steve Jobs was explaining the simplicity of the early Apple operating systems, he said, “Most people have no concept of how an automatic transmission works, yet they know how to drive a car.” In the same way, the core technology themes of Vision were focussed on ensuring that vision system users don’t need to understand the complexity of the device to reap its benefits.


Hyperspectral imaging

The goal of hyperspectral imaging is to obtain the radiance, or light intensity, for each pixel in the image of a scene, creating a large series of contiguous spectral bands, with the purpose of finding objects, identifying materials or detecting processes. Historically, the drawbacks have been a lack of computing power and the cost of sensors, but both issues have become less problematic as technology has advanced.
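To make the idea concrete, a hyperspectral image can be pictured as a three-dimensional ‘cube’ of radiance values: two spatial axes plus one spectral axis, so every pixel carries a full spectrum. The sketch below is purely illustrative (the array sizes, reference spectrum and threshold are all assumptions, not from any vendor’s product); it uses the well-known spectral angle mapper to flag pixels whose spectra resemble a reference material:

```python
import numpy as np

# A hyperspectral "cube": height x width spatial pixels, each with a
# full spectrum of radiance values across many contiguous bands.
rng = np.random.default_rng(0)
bands = 64
cube = rng.random((32, 32, bands))

# Reference spectrum for a material of interest (hypothetical values;
# in practice this would come from a spectral library or a known sample).
reference = rng.random(bands)

# Spectral angle mapper: the angle between each pixel's spectrum and
# the reference spectrum -- a small angle means a close spectral match.
pixels = cube.reshape(-1, bands)
cos = pixels @ reference / (
    np.linalg.norm(pixels, axis=1) * np.linalg.norm(reference)
)
angles = np.arccos(np.clip(cos, -1.0, 1.0)).reshape(32, 32)

# Flag pixels whose spectra lie within an assumed 0.5 rad of the reference.
mask = angles < 0.5
print(f"matching pixels: {int(mask.sum())} of {mask.size}")
```

The same per-pixel spectral comparison underlies tasks like sorting materials on a recycling line or spotting contamination in food.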

One of the businesses at Vision, a Finnish firm called Specim, is delivering hyperspectral solutions across the food, waste and recycling industries, improving accuracy and reliability in the detection of defects and parts. But this is just one of the ways that the vision market is transcending science fiction and moving into science fact.


Deep learning

Deep learning allows industrial vision systems to be applied using a simple training process, removing the huge amount of effort and expertise traditionally associated with integration. Instead of months spent crafting algorithms, the system is simply trained on labelled examples.
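The contrast with hand-crafted algorithms can be sketched in a few lines. The toy example below is illustrative only (the features, labels and learning rate are invented, and a real system would use a deep network rather than this simple logistic-regression ‘inspector’): the point is that the decision boundary is learned from labelled examples rather than coded as explicit defect rules.

```python
import numpy as np

# Synthetic training data: two feature clusters standing in for
# measurements from good parts and defective parts (invented values).
rng = np.random.default_rng(1)
good = rng.normal(0.0, 0.5, (100, 2))
bad = rng.normal(2.0, 0.5, (100, 2))
X = np.vstack([good, bad])
y = np.array([0] * 100 + [1] * 100)

# "Training" replaces rule-writing: plain gradient descent fits a
# logistic-regression boundary between the two classes.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted defect probability
    w -= 0.1 * X.T @ (p - y) / len(y)
    b -= 0.1 * np.mean(p - y)

pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
accuracy = np.mean(pred == y)
print(f"training accuracy: {accuracy:.2f}")
```

No inspection logic was written by hand; showing the system more, or different, examples retrains it for a new task.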

There was barely a stand at Vision 2018 which didn’t proudly proclaim that it had ‘inbuilt deep learning’ or ‘advanced AI’. But for those who have truly seized the baton, this will be a genuine game changer.


Autonomous machine vision

AMV takes the concept of deep learning several steps forward. Here, the core idea is to deliver a single product, which incorporates camera, lenses, lighting, software and hardware, allowing an internal integrator or quality assurance manager to install the system in less than an hour. Inbuilt deep learning and AI allow the system to adapt to its environment and be moved around a plant to cope with changes in production.


Embedded vision

Like AMV, embedded vision also seeks to make life easier for the internal and external machine vision communities.

In manufacturing and process control, most embedded vision applications take the form of a fanless PC fitted with a sensor to grab the image, an interface to pass the image data to a processor, software algorithms to analyse the image data, and the peripheral I/O control electronics to interface with the outside world.
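A minimal sketch of those stages might look like the following. The function names and values are entirely hypothetical stand-ins for the sensor, analysis and peripheral I/O layers, not any real embedded-vision API:

```python
import numpy as np

def grab_frame(rng: np.random.Generator) -> np.ndarray:
    """Stand-in for the sensor plus capture interface: return a grey image."""
    return rng.integers(0, 256, (48, 64), dtype=np.uint8)

def analyse(frame: np.ndarray, threshold: int = 200) -> bool:
    """Stand-in for the analysis algorithm: pass if few bright pixels found.

    The threshold and pixel-count limit are illustrative assumptions.
    """
    return int(np.count_nonzero(frame > threshold)) < 50

def set_reject_output(reject: bool) -> None:
    """Stand-in for the peripheral I/O stage driving a reject actuator."""
    print("reject line:", "HIGH" if reject else "LOW")

# One pass through the pipeline: grab, analyse, drive the output.
rng = np.random.default_rng(2)
frame = grab_frame(rng)
set_reject_output(not analyse(frame))
```

In a real embedded system each stage maps onto dedicated hardware and firmware; the value of the architecture is that the whole chain lives in one compact, fanless unit.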

The drawback of embedded vision systems, as with smart cameras, has always been their cost and the space they take up on the shop floor. On the other hand, they deliver substantially better processing performance than a traditional, integrated system.


TwinCAT Vision

In line with Beckhoff’s core principle of integrating all automation functions into a central platform, TwinCAT Vision sits alongside PLC, motion control, robotics, high-end measurement technology, IoT and HMI. This simplifies engineering significantly by allowing camera configuration and programming tasks to be carried out in the familiar PLC environment. In addition, all control functions related to image processing can be synchronised in the runtime system precisely and in real time.

Because the product is device agnostic, it can work with hyperspectral and AMV devices, smart cameras or traditional vision systems to deliver the best result for the manufacturer. And, while some of the artificial intelligence and deep learning on show at Vision felt like science fiction made science fact, the real future of machine vision is elegant simplicity that benefits every tier of manufacturer. It could well be that the future of vision is nobody needing to know how an automatic transmission works.