Thursday, January 7, 2016

CES 2016: Nvidia unveils the new Drive PX 2 platform, artificial intelligence in vehicles (video) – wirtualnemedia.pl

Vehicles equipped with the new Nvidia Drive PX 2 computing platform, with its deep learning and supercomputing capabilities, are meant to recognize their surroundings and navigate autonomously.

The Drive PX 2 platform is designed to use artificial intelligence to solve the complex problems typical of autonomous vehicle development. It relies on advanced graphics processors that enable deep learning, offering full 360-degree awareness of the vehicle's surroundings and computation of a safe, comfortable driving trajectory.

Built as an open environment for automotive industry partners developing deep learning systems, Drive PX 2 provides an enormous amount of computing power, equal to that of 150 MacBook Pros. Two next-generation Tegra processors and two dedicated graphics processors based on the new Pascal architecture together deliver 24 trillion deep learning operations per second, accelerating the computations used when training neural networks. That is over ten times more than the previous-generation product.

Nvidia says the Drive PX 2 system can quickly learn to cope with the unexpected situations encountered every day on the road, such as obstacles, inattentive drivers or road works. Deep learning should also prove itself in the difficult weather and lighting conditions where conventional image-recognition techniques fail, for example in rain, snow and fog, as well as when driving at sunrise or sunset, or in total darkness.
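A toy illustration (hypothetical code, not Nvidia's) of why conventional techniques struggle here: simulated fog flattens pixel contrast, which breaks a fixed-threshold image-recognition rule, whereas a deep network can simply be trained on such degraded images.

```python
# Hypothetical sketch: fog blends every pixel toward a bright gray,
# so a hand-tuned darkness threshold stops detecting the object.

def add_fog(pixels, density):
    """Blend each grayscale pixel toward a fog value (200) by `density`."""
    FOG = 200
    return [round(p * (1 - density) + FOG * density) for p in pixels]

def fixed_threshold_detect(pixels, threshold=128):
    """A conventional rule: a pixel belongs to an object if it is dark."""
    return [p < threshold for p in pixels]

clear = [30, 30, 220, 220]            # dark object against a bright road
foggy = add_fog(clear, density=0.6)   # [132, 132, 208, 208]

print(fixed_threshold_detect(clear))  # [True, True, False, False]
print(fixed_threshold_detect(foggy))  # [False, False, False, False] – rule fails
```

The hand-tuned rule sees nothing in fog; a learned model trained on foggy examples does not share that fixed assumption.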

For floating-point operations, the graphics processors used in the Drive PX 2 solution can perform up to 8 trillion operations per second, over four times more than the previous-generation product. Thanks to these improvements, partners have access to a range of algorithms for use in autonomous vehicles, covering multi-sensor fusion, localization and route planning. The system also provides the computational precision needed to process the layers of deep learning networks.

Autonomous vehicles recognize their surroundings using an array of sensors. Drive PX 2 can simultaneously process signals from 12 sensors: video cameras, lidar, radar and ultrasonic sensors. With their help the vehicle can detect and recognize objects on the route, calculate their position relative to the vehicle and its surroundings, and then compute an optimal, safe path. This is made possible by Nvidia DriveWorks, a set of tools, libraries and modules that accelerates the testing of autonomous vehicles. DriveWorks supports sensor calibration, recording and processing of 360-degree data from the surroundings, synchronization, and the processing of streaming data by complex network algorithms on both the specialized and general-purpose processors of the Drive PX 2 platform. Nvidia provides software modules designed to handle every aspect of the autonomous driving process, from object detection, classification and segmentation, to locating the vehicle on a map and planning a route.
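The fusion idea described above can be sketched in miniature (this is illustrative code, not the DriveWorks API): readings from several sensors are first synchronized by timestamp, then their estimates for one detected object are combined.

```python
# Minimal sensor-fusion sketch (hypothetical, not Nvidia DriveWorks):
# pick each sensor's reading closest to a target time, then naively
# average the distance estimates that agree in time.

from statistics import mean

def synchronize(streams, t, tolerance=0.05):
    """Select, per sensor, the reading closest to time t (within tolerance)."""
    synced = {}
    for name, readings in streams.items():
        closest = min(readings, key=lambda r: abs(r["t"] - t))
        if abs(closest["t"] - t) <= tolerance:
            synced[name] = closest
    return synced

def fuse_distance(synced):
    """Naive fusion: average the time-aligned distance estimates."""
    return mean(r["distance_m"] for r in synced.values())

streams = {
    "camera": [{"t": 0.97, "distance_m": 41.0}, {"t": 1.02, "distance_m": 40.2}],
    "radar":  [{"t": 1.00, "distance_m": 40.0}],
    "lidar":  [{"t": 1.01, "distance_m": 40.4}],
}

synced = synchronize(streams, t=1.00)
print(round(fuse_distance(synced), 2))  # 40.2
```

A real pipeline would weight each sensor by its noise characteristics (e.g. a Kalman filter) rather than averaging, but the synchronize-then-combine structure is the same.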

Nvidia's new deep learning solution consists of the Nvidia Digits package, used to train deep neural networks, and the Drive PX 2 platform, which runs those networks in the vehicle. Digits is a tool for developing, training and visualizing deep neural networks that can run on any system equipped with Nvidia GPUs, from personal computers and supercomputers to Amazon Web Services and Open Rack-compliant hardware such as Facebook's recently announced Big Sur system. The trained neural network model then runs on the Nvidia Drive PX 2 device built into the vehicle.
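The train-offline, deploy-in-vehicle split the article describes can be shown with a toy model (hypothetical code, not Digits or DriveWorks): training produces a set of weights, and a separate inference-only routine runs those weights, as the in-vehicle device would.

```python
# Toy sketch of the workflow: train a single perceptron offline,
# export its weights, and run inference-only code "in the vehicle".

def train(samples, labels, lr=0.1, epochs=20):
    """Offline step: fit one perceptron weight and bias on 1-D samples."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if w * x + b > 0 else 0
            w += lr * (y - pred) * x
            b += lr * (y - pred)
    return {"w": w, "b": b}  # the exported "trained network"

def infer(model, x):
    """Inference only – the part that runs on the deployed device."""
    return 1 if model["w"] * x + model["b"] > 0 else 0

# Toy task: flag "obstacle closer than ~5 m" from a distance reading.
model = train(samples=[1, 2, 3, 8, 9, 10], labels=[1, 1, 1, 0, 0, 0])
print(infer(model, 2), infer(model, 9))  # 1 0
```

The point of the split is that the heavy optimization loop (`train`) never runs in the vehicle; only the cheap forward pass (`infer`) does.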

Nvidia says that since the launch of the first-generation Drive PX last summer, more than 50 car manufacturers, developers and research institutes have begun developing autonomous vehicles using the solution, including Audi, BMW, Daimler and Ford.

The Nvidia Drive PX 2 platform will be widely available in the fourth quarter of 2016. Partners working on its development will get access to it in the second quarter of 2016.

Author: km

Tags: Nvidia, artificial intelligence, CES 2016
