List, a research institute of CEA Tech in Grenoble, France, focused on smart digital systems, will demonstrate an algorithm in a new category of artificial intelligence (AI) called multi-task deep learning at CES 2018, January 9-12, 2018.
List will use DeepManta, a flexible algorithm suited to a wide range of applications, to demonstrate visual object recognition for smart cities: identifying vehicles, determining their type and position, and counting them. In addition, Valeo, a global supplier of advanced automotive technology partnering with List, will demonstrate DeepManta’s support for autonomous driving.
List’s demonstration includes a video stream captured by a stationary camera and displayed live on a screen. Miniature cars and other objects move into the camera’s field of view, where the AI selectively recognizes them. When a car is recognized, the algorithm generates a visual annotation in real time, labeling the car with its brand logo and model information and enclosing it in 2D and 3D bounding boxes to locate it spatially in the video.
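The annotation step described above can be illustrated with a minimal sketch: given a detection, draw a 2D bounding box outline onto a frame. The `draw_box` function and the `detection` dictionary are hypothetical stand-ins for the detector's output, not List's actual API.

```python
import numpy as np

def draw_box(frame, x0, y0, x1, y1, value=255):
    """Draw a rectangle outline on a grayscale frame (in place).

    Coordinates are inclusive pixel indices; a real system would also
    render the brand logo and model label next to the box.
    """
    frame[y0, x0:x1 + 1] = value   # top edge
    frame[y1, x0:x1 + 1] = value   # bottom edge
    frame[y0:y1 + 1, x0] = value   # left edge
    frame[y0:y1 + 1, x1] = value   # right edge
    return frame

# Hypothetical detection on a blank 120x160 grayscale frame.
frame = np.zeros((120, 160), dtype=np.uint8)
detection = {"label": "car", "box": (40, 30, 100, 80)}
draw_box(frame, *detection["box"])
```

A 3D box would follow the same pattern, projecting the eight corners of the estimated cuboid into image coordinates before drawing the edges.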
Beyond automotive applications, the algorithm’s automated perception capability opens up new services with significant social and business impact, ranging from guidance for blind people to video surveillance and visual inspection of products on manufacturing lines.
“DeepManta delivers on one of the promises of AI: assisting users by automating and parallelizing tasks that would normally require their full attention,” said Stéphane David, industrial partnership manager at List. “It excels at each individual task, yet requires much less overall memory and processing power than parallel architectures that use one algorithm per task.”
The result of more than 10 years of research at List, DeepManta is a multi-task deep neural-network algorithm developed to perform advanced, efficient real-time analysis of video streams. Its native multi-task architecture, combined with enhancements to conventional deep-learning algorithms, powers a system capable of extracting different types and levels of information simultaneously and in real time.
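The efficiency gain of a native multi-task design comes from sharing one feature extractor across all tasks rather than running one full network per task. The sketch below illustrates this idea with toy dense layers; the layer sizes and head names are illustrative assumptions, not DeepManta's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(in_dim, out_dim):
    """Create a randomly initialized dense layer (weights, bias)."""
    return rng.standard_normal((in_dim, out_dim)) * 0.1, np.zeros(out_dim)

def forward(x, layer):
    w, b = layer
    return np.maximum(x @ w + b, 0.0)  # ReLU activation

# Shared backbone: its features are computed once per frame.
backbone = dense(128, 64)

# Task-specific heads all reuse the same backbone features, so adding a
# task costs only one small head, not a whole new network.
heads = {
    "vehicle_class": dense(64, 10),  # e.g. brand/model logits
    "box_2d":        dense(64, 4),   # x, y, width, height
    "box_3d":        dense(64, 7),   # center, dimensions, yaw
}

features = forward(rng.standard_normal(128), backbone)
outputs = {task: forward(features, head) for task, head in heads.items()}
```

Compared with three independent 128-input networks, the heavy 128-to-64 backbone computation here is paid once and amortized over every task, which is the memory and processing advantage the quote above describes.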
A standard video camera connects to a laptop equipped with a powerful GPU. The algorithm running on the laptop processes the video feed, and the result of the analysis, including on-screen overlays, is displayed with very low latency, providing an efficient, self-contained system with all the necessary resources performing live.
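The demo pipeline above reduces to a simple per-frame loop: grab a frame, analyze it, overlay the result, and keep the time from capture to display low. This sketch uses placeholder functions in place of the camera, the algorithm, and the renderer, which are not real List APIs, and measures per-frame latency around them.

```python
import time

def grab_frame():
    """Placeholder for reading one frame from the camera."""
    return [[0] * 4 for _ in range(3)]

def analyze(frame):
    """Placeholder for the multi-task analysis of one frame."""
    return {"vehicles": 1}

def overlay(frame, result):
    """Placeholder for drawing annotations onto the frame."""
    return frame, result

# Measure end-to-end latency (capture -> analysis -> overlay) per frame.
latencies = []
for _ in range(5):
    start = time.perf_counter()
    frame = grab_frame()
    result = analyze(frame)
    annotated = overlay(frame, result)
    latencies.append(time.perf_counter() - start)

mean_ms = sum(latencies) / len(latencies) * 1000
```

In a real deployment the analysis step dominates this loop, which is why the demo pairs the camera with a GPU-equipped laptop.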
See demonstrations by Leti, List and Liten (institutes of CEA Tech) at the CEA Tech Village, booth 50653 in Eureka Park.