Author: Karol Saja

Endpoint AI is the new frontier of AI, pushing AI capabilities down to edge devices. It brings an innovative approach to data management: collect relevant data locally and make decisions locally. It upgrades IoT devices that in the past only gathered data into smart devices that integrate artificial intelligence, giving them the ability to make real-time decisions. The overall goal is to bring intelligent, machine-learning-based decision-making physically closer to the data source itself. When embedded vision moves down to the endpoint device, it is no longer just about breaking an image or video into pixels, but about understanding what those pixels mean and making informed decisions when certain events occur.

What is Embedded Vision?

Embedded computer vision gives machines sight, enabling them to better understand their environment with the support of machine learning and deep learning algorithms. Applications that rely on computer vision exist in many industries, and it has already become an indispensable part of cutting-edge technology. More precisely, computer vision is the part of the field of artificial intelligence (AI) that enables machines to extract meaningful information from digital images, video, and other visual sources, and then take action or make recommendations accordingly. Computer vision is similar to human vision, but there are still differences between the two. Behind human vision is the ability to comprehend all the different things we see. Computer vision, by contrast, can only recognize what it has been trained on, and with a certain error rate. At the same time, embedded vision allows a device, once trained, to recognize specific objects in a very short time and to analyze massive volumes of images far more efficiently. In this respect, machine vision is superior to human vision.

Embedded vision is widely used in smart terminals in consumer and industrial fields, adding value to devices. A few simple examples: analyzing the quality of products on a production line, counting the number of people in a crowd, identifying objects, analyzing the content of a specific area, etc.

When an endpoint device implements embedded vision applications, the computing power of the device is a challenge. Yet with centralized processing, the amount of data transmitted from the sensor device to the cloud for analysis can be very large and can exceed the available network bandwidth. For example, a 1920 x 1080 camera running at 30 FPS (frames per second) produces roughly 190 MB/s of raw data. In addition to privacy concerns, the round trip of data from the edge to the cloud and back to the endpoint inevitably introduces latency. Neither of these limitations is conducive to real-time applications.
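As a quick sanity check on that figure, here is a minimal sketch of the arithmetic, assuming uncompressed 24-bit RGB frames (other pixel formats, such as 16-bit YUV422, change the numbers):

```c
#include <stdio.h>

/* Back-of-the-envelope data-rate estimate for an uncompressed video stream.
 * Assumes 24-bit RGB (3 bytes per pixel); actual sensors may output other
 * formats, which changes the result. */
int main(void)
{
    const unsigned width  = 1920;
    const unsigned height = 1080;
    const unsigned bytes_per_pixel = 3;   /* 24-bit RGB */
    const unsigned fps = 30;

    double bytes_per_second = (double)width * height * bytes_per_pixel * fps;
    printf("Raw stream: %.0f bytes/s (~%.0f MB/s)\n",
           bytes_per_second, bytes_per_second / 1e6);
    /* 1920 * 1080 * 3 * 30 = 186,624,000 bytes/s, i.e. about 187 MB/s,
     * in line with the ~190 MB/s figure quoted above. */
    return 0;
}
```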

IoT security is also a concern for the market adoption and development of embedded vision. A key concern with smart vision devices is the potential for inappropriate use of sensitive images and videos. Unauthorized access to cameras not only violates privacy, but can lead to more serious consequences.

AI Vision on Endpoint Devices

Endpoint AI understands captured images

Endpoint AI uses machine learning and deep learning to match and recognize patterns it has been trained on

For optimal performance, AI algorithms run on the end device without transferring data to the cloud. The data is captured by an image recognition device and then processed and analyzed on that same device.
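A minimal sketch of what this on-device flow can look like is shown below; camera_read_frame, preprocess_frame, model_run_inference, and actuate_on_event are hypothetical placeholders for whatever camera driver and inference runtime a given device actually uses:

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical placeholders for the device's camera driver and inference
 * runtime; the names and signatures are illustrative only. */
bool camera_read_frame(uint8_t *buf, uint32_t len);
void preprocess_frame(const uint8_t *raw, int8_t *tensor, uint32_t len);
int  model_run_inference(const int8_t *tensor, float *scores, uint32_t classes);
void actuate_on_event(uint32_t class_id, float score);

#define FRAME_BYTES  (96u * 96u)   /* small grayscale input, typical for MCUs */
#define NUM_CLASSES  4u

void vision_loop(void)
{
    static uint8_t raw_frame[FRAME_BYTES];
    static int8_t  input_tensor[FRAME_BYTES];
    float scores[NUM_CLASSES];

    for (;;) {
        if (!camera_read_frame(raw_frame, FRAME_BYTES))
            continue;                              /* skip dropped frames */

        preprocess_frame(raw_frame, input_tensor, FRAME_BYTES);

        if (model_run_inference(input_tensor, scores, NUM_CLASSES) != 0)
            continue;                              /* inference error */

        /* Pick the highest-scoring class and act locally: no raw image
         * ever leaves the device, only the decision (if anything). */
        uint32_t best = 0;
        for (uint32_t i = 1; i < NUM_CLASSES; ++i)
            if (scores[i] > scores[best])
                best = i;

        actuate_on_event(best, scores[best]);
    }
}
```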

The power consumption constraints of endpoint devices still exist, so the microcontroller or microprocessor must handle the large number of multiply-accumulate (MAC) operations required by AI algorithms as efficiently as possible.
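To illustrate why MAC throughput matters, here is a simplified int8 dot product of the kind that convolution and fully connected layers reduce to; real deployments would normally use vendor-optimized kernels (SIMD or DSP instructions) rather than this plain C loop:

```c
#include <stdint.h>

/* Simplified int8 dot product: the core multiply-accumulate pattern behind
 * convolution and fully connected layers. Even a small CNN layer can need
 * millions of these MACs per frame, which is why hardware MAC and SIMD
 * support matters so much at the endpoint. */
int32_t dot_product_q7(const int8_t *weights, const int8_t *activations, uint32_t n)
{
    int32_t acc = 0;                 /* 32-bit accumulator avoids overflow */
    for (uint32_t i = 0; i < n; ++i)
        acc += (int32_t)weights[i] * (int32_t)activations[i];   /* one MAC */
    return acc;
}
```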

Deployment of AI Vision

There are countless use cases for AI vision applications in the real world. Below are some examples where Renesas can provide comprehensive MCU and MPU solutions including the necessary software and tools to enable rapid development.

Smart access control:

Voice and face recognition add real value to security access control systems. However, real-time recognition requires embedded systems with very high computing power and on-chip hardware acceleration. To meet this challenge, Renesas provides MCUs and MPUs with high computing power that also integrate many functions essential for face and speech recognition, such as built-in H.265 hardware decoding, 2D/3D graphics acceleration, and ECC on internal and external memory to eliminate soft errors and enable high-speed video processing.

Industrial control:

Embedded vision can be applied to many scenarios, including safety operations, automation, product classification, and more. Artificial intelligence can help perform multiple operations in the production process, such as packaging and distribution, ensuring quality and safety at every stage.

Transportation:

Computer vision can also improve transportation services. Take autonomous driving as an example: computer vision is used to detect and classify objects on the road, and it can also be used to create 3D maps and estimate motion trajectories. Self-driving cars use cameras and sensors to collect information about their environment, then apply vision techniques such as pattern recognition, feature extraction, and object tracking to interpret the data and choose the most appropriate response.

Embedded vision can be used for many purposes, but each application needs to be tailored and optimized for a specific domain and trained on datasets from that domain. Examples include monitoring a physical area, identifying intrusions, detecting crowd density, counting people or designated objects or animals, finding people, locating cars by license plate number, motion detection, and human behavior analysis.

Case Study: Crop Pest Detection

Visual AI and deep learning can be used to detect a variety of anomalies – plant pest detection is an example. The findings suggest that computer vision can provide better, more accurate, faster, and more economical solutions than previous methods that were expensive and labor-intensive.

The method and steps used in this case can be applied to any other detection task. There are three main steps:

1. Train the detection model on pest image data.
2. Deploy the trained model to the endpoint device to run inference on captured images.
3. Display the detection results on the client.

The first step is performed on a laboratory computer, and the second step is deployed on an endpoint device, such as a node device on a farm. The result in step 3 is displayed on the screen of the client. The diagram below shows the general flow.
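To make the hand-off between step 2 and step 3 concrete, below is a minimal sketch of how an endpoint node might package a detection result and pass it to the client; the pest_detection_t structure, the report_to_client function, and the message format are hypothetical and would depend on the actual transport used (UART, BLE, MQTT, etc.):

```c
#include <stdio.h>
#include <stdint.h>

/* Hypothetical detection result produced by the on-device model in step 2. */
typedef struct {
    const char *pest_name;    /* class label, e.g. "aphid"            */
    float       confidence;   /* model score in the range 0.0 .. 1.0  */
    uint32_t    count;        /* number of detections in the frame    */
} pest_detection_t;

/* Hypothetical transport: here it simply formats a compact text message
 * that a client application could render on screen. */
static void report_to_client(const pest_detection_t *d)
{
    printf("PEST %s count=%lu conf=%.2f\n",
           d->pest_name, (unsigned long)d->count, d->confidence);
}

int main(void)
{
    /* Example values only; real numbers come from the inference step. */
    pest_detection_t result = { "aphid", 0.91f, 3 };

    /* Only report detections above a confidence threshold, so the client
     * is not flooded with low-certainty results. */
    if (result.confidence > 0.80f)
        report_to_client(&result);

    return 0;
}
```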

Conclusion

We are experiencing a revolution in high-performance intelligent vision applications across multiple market segments. The increasing computing power of microcontrollers and microprocessors in endpoint devices presents huge opportunities for new vision applications. Renesas Vision AI solutions help you enhance overall system functionality by delivering embedded AI technology with intelligent data processing at the endpoint. Our image processing solutions for edge devices feature low power consumption and support multi-model, multi-feature inference. Start developing your vision AI applications today with Renesas Electronics products and tools.
