New Security Empowers Machine Vision Functions

Mission-critical Machine Vision in an Insecure IoT World

We are on the threshold of the next industrial revolution, in which machine vision will be a major game-changer: intelligent vision can now incorporate deep-learning algorithms, enabling cooperative work environments between humans and machines as well as machine vision that is part of critical control feedback loops. These algorithms are most efficiently executed on heterogeneous system architectures.


Machine vision moving to ‘sense-plan-act’

In early applications, machine vision relied on frame grabbers and digital signal processors (DSPs). Today, with the development of reasonably priced, high-performance sensors, one of three major enablers of the new robotics revolution, we see applications in which recognition is no longer just a means of identifying well-known patterns in a ‘sense-compare-decide’ manner. Robotics, from simple stationary systems up to autonomous vehicles, is moving toward more sophisticated ‘sense-plan-act’ behavior. In this respect, a vision system is the robot’s most powerful eye, informing it of its position and its environment, while the computing power of embedded processors based on the Heterogeneous System Architecture, such as the AMD G-Series SoC, provides the brain that understands and interprets that environment. The second enabler is the processor itself, which delivers the required high performance with moderate power consumption. The final part of a smart robot is the act component: acting robots require batteries with high power density and high-efficiency motors, so state-of-the-art batteries and brushless DC (BLDC) motors are enabler number three. The combination of these three enablers and their enhanced technologies is what makes vision systems and robotics so revolutionary today.

New intelligent vision systems

So let’s take a closer look at the vision part of this industrial revolution. Human eyes are connected via nerves to the visual cortex in our brain; of all our five senses, vision claims the largest section of the brain. Machine vision systems such as the IVS-70 (Figure 1), based on the parallel computing offered by heterogeneous SoCs, provide an artificial visual cortex. Their eyes are lenses and optical sensors; their optic nerves are the high-speed connections between the sensors and the compute units. These systems not only offer the high speed and high resolution needed to compete with human vision, they also deliver accurate spatial information on where landmarks or objects are located. To achieve this, stereoscopic vision is the natural choice. Industrial applications of this type of stereoscopic vision system include, for example, item-picking from unsorted bins. Mounted on a robot arm, a vision system can carry out ‘visual servoing’ at 50 fps, identifying the most suitable item to pick while the gripper of the robot arm is approaching the bin. This makes scanning, which can take a couple of seconds, and reprogramming the robot arm superfluous. Autonomous cars are another obvious application for vision technologies, as are a whole range of domestic robot applications.
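The ‘visual servoing’ idea above can be sketched as a simple closed-loop controller: each frame, the vision system reports the item pose and the controller reduces the remaining gripper error by a fixed fraction. This is a minimal illustrative sketch; the function names, the proportional gain and the 3-D tuples are assumptions, not part of any real IVS-70 API.

```python
# Minimal sketch of a visual-servoing control loop, assuming the stereo
# vision system delivers a fresh pose estimate of the item at 50 fps
# (one update every 20 ms). All names here are hypothetical.

FRAME_RATE_HZ = 50
GAIN = 0.3  # proportional gain: fraction of the remaining error removed per frame

def servo_step(gripper_pos, item_pos, gain=GAIN):
    """Move the gripper a fraction of the remaining error each frame."""
    return tuple(g + gain * (i - g) for g, i in zip(gripper_pos, item_pos))

def servo_until_close(gripper_pos, item_pos, tol=1e-3, max_frames=500):
    """Iterate at frame rate until the gripper is within tolerance."""
    for frame in range(max_frames):
        err = max(abs(i - g) for g, i in zip(gripper_pos, item_pos))
        if err < tol:
            return gripper_pos, frame
        gripper_pos = servo_step(gripper_pos, item_pos)
    return gripper_pos, max_frames
```

With these numbers the error shrinks by 30% per frame, so the loop converges in roughly 20 frames, i.e. under half a second at 50 fps, well inside the "couple of seconds" a separate scanning pass would cost.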

The artificial visual cortex

So how does this process work in detail? The first stages of information handling are strictly localized to each pixel and are therefore executed in an FPGA. Common to all machine vision is that color cameras think in RGB (Red, Green and Blue pixels), just like the human eye, but this representation is not well suited to accurate image computation. RGB therefore first has to be converted to HSI (Hue, Saturation and Intensity). Rectifying the image to compensate for lens distortion is the next necessary step. Following this, stereo matching can be performed between the two cameras. These steps are executed within an FPGA that offloads the x86 processor. All subsequent calculations are application-specific and best executed on the integrated, highly flexible, programmable x86 processor platform, which has to fulfill quite challenging tasks to understand and interpret the content of a picture.
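The RGB-to-HSI step above can be written out per pixel using the textbook conversion formulas. This is a serial reference sketch of one pixel; the FPGA applies the same arithmetic to every pixel in parallel.

```python
import math

def rgb_to_hsi(r, g, b):
    """Convert one pixel from RGB (each channel in [0, 1]) to HSI.

    Textbook-formula sketch of the per-pixel conversion the article
    describes running in the FPGA.
    """
    total = r + g + b
    i = total / 3.0                                              # intensity
    s = 0.0 if total == 0 else 1.0 - 3.0 * min(r, g, b) / total  # saturation
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    if den == 0:
        h = 0.0                                  # achromatic pixel: hue undefined
    else:
        h = math.acos(max(-1.0, min(1.0, num / den)))
        if b > g:
            h = 2 * math.pi - h                  # lower half of the color circle
    return h, s, i
```

For example, pure red maps to hue 0, pure green to 120° (2π/3), and any gray level to zero saturation.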

To understand how complex these tasks are, consider that interpreting picture content is extremely difficult for software programmers and that, until recently, the human visual cortex was superior to computer technology. These days, however, technological advancements are quite literally changing the game. An excellent example of this progress is Google’s AlphaGo computer, which managed to beat the world’s best Go player (Figure 2), and it achieved this by executing neural-network algorithms. Today such algorithms can be executed much faster than in the nineties, and recent methods also use many more layers when building neural networks; the term deep learning refers to a neural network with many more layers than were used previously. Furthermore, the heterogeneous system architecture of modern SoCs allows deep-learning algorithms to be executed efficiently (e.g. with the deep-learning framework Caffe from Berkeley).

Figure 2
Modern computer vision and machine learning systems using x86 processors can analyze each pixel.
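The "many layers" idea behind deep learning can be illustrated at toy scale: each layer is a weight matrix followed by a nonlinearity, and the network is simply the layers applied in sequence. This is a deliberately tiny pure-Python sketch, not how Caffe is actually implemented; frameworks run the same pattern with millions of weights on the GPU.

```python
# Toy multi-layer forward pass: y = relu(W_n ... relu(W_1 . x + b_1) ... + b_n)

def relu(x):
    """Elementwise rectified linear unit, the common deep-learning nonlinearity."""
    return [max(0.0, v) for v in x]

def dense(x, weights, bias):
    """One fully connected layer: matrix-vector product plus bias."""
    return [sum(w * v for w, v in zip(row, x)) + b
            for row, b in zip(weights, bias)]

def forward(x, layers):
    """Run the input through every (weights, bias) pair in turn."""
    for weights, bias in layers:
        x = relu(dense(x, weights, bias))
    return x
```

Adding depth is just appending more `(weights, bias)` pairs to `layers`; the heterogeneous-architecture payoff comes from the fact that every `dense` call is an embarrassingly parallel matrix operation.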

x86 technology is also interesting for intelligent stereoscopic machine vision systems thanks to its streaming and vector instructions, optimized over many years, and its very extensive, mature ecosystem of software, vision-system algorithms and drivers. In addition, new initiatives like Shared Virtual Memory (SVM) and the Heterogeneous System Architecture (HSA) now offer an important companion technology to x86 systems by increasing the raw throughput needed for intelligent machine vision.

HSA enables efficient use of all resources

With the introduction of the latest generation of AMD SoCs, a hardware ecosystem is now in place which accelerates artificial-intelligence algorithms in distributed, highly integrated sensor logic. Software developers can now also take advantage of a powerful processing component that has been sitting on the sidelines, woefully underused: the graphics processor (Figure 3).

Figure 3
HSA provides a unified view of fundamental computing elements, allowing a programmer to write applications that seamlessly integrate CPUs with GPUs while benefiting from the best attributes of each.

In fact, the graphics processor can accomplish parallel, compute-intensive processing tasks far more efficiently than the CPU, which matters as parallel computational loads increase. The key to all this is the availability of the Heterogeneous System Architecture, which in x86 technology has mainly been driven by AMD but has also been joined by many industry leaders. HSA-supporting microarchitectures seamlessly combine the specialized capabilities of the CPU, GPU and various other processing elements on a single chip, the Accelerated Processing Unit (APU). By harnessing the untapped potential of the GPU, HSA promises not only to boost performance but to deliver new levels of performance (and performance-per-watt) that will fundamentally transform the way we interact with our devices. With HSA, programming is also simplified, using tools like MATLAB® or the open OpenCL/OpenCV libraries.
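The GPU programming style this enables can be sketched with a per-pixel example: in OpenCL a kernel is written once and launched across every pixel as an independent work-item. The OpenCL C source below is shown for illustration only (it is not executed here); the Python function is a serial CPU reference of the same computation, using the standard ITU-R BT.601 grayscale weights.

```python
# Illustrative OpenCL C kernel: one work-item per pixel, perfectly
# data-parallel, so an HSA-capable GPU can run all pixels at once.
GRAYSCALE_KERNEL = """
__kernel void gray(__global const float *r, __global const float *g,
                   __global const float *b, __global float *out) {
    int i = get_global_id(0);              /* one work-item per pixel */
    out[i] = 0.299f*r[i] + 0.587f*g[i] + 0.114f*b[i];
}
"""

def gray_reference(r, g, b):
    """Serial CPU reference of the kernel above (BT.601 luma weights)."""
    return [0.299 * rv + 0.587 * gv + 0.114 * bv
            for rv, gv, bv in zip(r, g, b)]
```

With shared virtual memory, the host would hand the same image buffers to such a kernel without copying, which is precisely the overhead SVM/HSA removes.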

The AMD G-Series System-on-Chip (SoC) matches all the points discussed above: it offers HSA, combining the x86 architecture with a powerful GPU, PCIe and a wealth of I/Os. On top of this, AMD G-Series SoCs have an additional benefit, which is not at all common but extremely important for the growing demands of application safety: extremely high radiation resistance for the highest data integrity:

Guaranteed data integrity is one of the most important preconditions for meeting the highest reliability and safety requirements; every calculation and autonomous decision depends on it. It is therefore crucial that, for example, data stored in RAM is protected against corruption and that calculations in the CPU and GPU are carried out correctly. Errors, however, can happen due to so-called Single Events. These are caused by the ever-present background neutron radiation that originates when high-energy particles from the sun and deep space hit the earth’s upper atmosphere and generate a flood of secondary isotropic neutrons all the way down to ground or sea level.

The Single Event probability at sea level is between 10⁻⁸ and 10⁻² upsets per device hour for commonly used electronics. At the upper end of that range, this means one single event every 100 device hours could potentially lead to unwanted, jeopardizing behavior. This is where the AMD Embedded G-Series SoCs provide the highest level of radiation resistance and, therefore, safety. Tests performed by NASA Goddard Space Flight Center (note 1) showed that the AMD G-Series SoCs can tolerate a total ionizing radiation dose of 17 Mrad (Si). This surpasses current requirements by far: for humans, 400 rad in a week is lethal; in standard space programs, components are usually required to withstand 300 krad; even a space mission to Jupiter would only require resistance to 1 Mrad. Additionally, AMD supports advanced error-correcting memory (ECC RAM), a further crucial feature that corrects data errors in memory caused by Single Events (Figure 4).

Figure 4
Susceptibility of common electronics to background neutron radiation: Single Event Rate (upsets per device hour). To allow different technologies to be compared, the SER values have been normalized to a size of 1 GByte for each technology.
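The principle behind the ECC RAM mentioned above can be shown with the classic Hamming(7,4) code: four data bits are stored with three parity bits, and any single flipped bit, exactly the failure mode a neutron-induced Single Event causes, can be located and repaired. This is a didactic sketch; real ECC DIMMs use a wider SECDED code over 64-bit words.

```python
# Hamming(7,4): data bits at codeword positions 3, 5, 6, 7 (1-based),
# parity bits at positions 1, 2, 4. Each parity bit covers the positions
# whose binary index contains its own position bit.

def hamming74_encode(d):
    """d: list of 4 data bits -> 7-bit codeword."""
    d0, d1, d2, d3 = d
    p0 = d0 ^ d1 ^ d3          # covers positions 1, 3, 5, 7
    p1 = d0 ^ d2 ^ d3          # covers positions 2, 3, 6, 7
    p2 = d1 ^ d2 ^ d3          # covers positions 4, 5, 6, 7
    return [p0, p1, d0, p2, d1, d2, d3]

def hamming74_correct(c):
    """Recompute parity, locate a single flipped bit, repair it, return data."""
    c = list(c)
    s0 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s1 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s2 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s0 + 2 * s1 + 4 * s2    # 1-based position of the flipped bit
    if syndrome:
        c[syndrome - 1] ^= 1           # flip it back
    return [c[2], c[4], c[5], c[6]]    # extract the data bits
```

The syndrome is zero for an intact word, so the correction is transparent to the CPU and GPU: the memory controller simply returns the repaired data.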


Note 1: Kenneth A. LaBel et al., “Advanced Micro Devices (AMD) Processor: Radiation Test Results,” NASA Electronic Parts and Packaging Program Electronics Technology Workshop, MD, June 11-12, 2013.