NVIDIA Provides Transportation Industry Access to its DNNs for Autonomous Vehicles
Today NVIDIA announced that it will provide the transportation industry with access to its NVIDIA DRIVE deep neural networks (DNNs) for autonomous vehicle development. “NVIDIA DRIVE has become a de facto standard for AV development, used broadly by automakers, truck manufacturers, robotaxi companies, software companies and universities. Now, NVIDIA is providing access to its pre-trained AI models and training code for AV developers. Using a suite of NVIDIA AI tools, the ecosystem can freely extend and customize the models to increase the robustness and capabilities of their self-driving systems.”
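To illustrate what “extending and customizing” a pre-trained model typically looks like, here is a minimal, generic fine-tuning sketch in PyTorch. It does not use NVIDIA's DRIVE tooling; the backbone, class count, and dataset are assumptions chosen for illustration only.

```python
# Hypothetical sketch: fine-tuning a pre-trained perception backbone on custom
# driving data. The model, class labels, and data are illustrative stand-ins,
# not NVIDIA's DRIVE DNNs or APIs.
import torch
import torch.nn as nn
import torchvision

# Start from a generic pre-trained backbone (stand-in for a pre-trained DNN).
model = torchvision.models.resnet50(pretrained=True)

# Freeze the pre-trained feature extractor and replace the task head,
# e.g. to classify a custom set of traffic-light states.
for param in model.parameters():
    param.requires_grad = False
num_custom_classes = 4  # assumption: red / yellow / green / off
model.fc = nn.Linear(model.fc.in_features, num_custom_classes)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def fine_tune(dataloader, epochs=3):
    """Train only the new head on the developer's own labeled images."""
    model.train()
    for _ in range(epochs):
        for images, labels in dataloader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
```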
Reflections on Deep Learning, DNNs, and AI on Wall Street
In this special guest feature, Bob Fletcher from Verne Global reflects on the recent HPC and AI on Wall Street conference. “Almost every organization at the event talked about their use of machine learning and some indicated what would make them extend it into full-scale deep learning. The most important criterion was the appropriateness of the DNN training techniques.”
Intel FPGAs Power Real-Time AI in the Azure Cloud
At the Microsoft Build conference held this week, Microsoft announced Azure Machine Learning Hardware Accelerated Models powered by Project Brainwave integrated with the Microsoft Azure Machine Learning SDK. In this configuration, customers gain access to industry-leading artificial intelligence inferencing performance for their models using Azure’s large-scale deployments of Intel FPGA (field programmable gate array) technology. “With today’s announcement, customers can now utilize Intel’s FPGA and Intel Xeon technologies to use Microsoft’s stream of AI breakthroughs on both the cloud and the edge.”
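For readers unfamiliar with the serving pattern described here, the sketch below shows the general shape of calling a remotely deployed, hardware-accelerated scoring endpoint over HTTP. The URI, key, and payload layout are placeholders for illustration, not the actual Azure Machine Learning or Project Brainwave API.

```python
# Hypothetical sketch: sending one request to a deployed, FPGA-backed scoring
# service. Endpoint URL, auth key, and payload format are assumptions.
import json
import requests

SCORING_URI = "https://<your-service>.example.com/score"  # placeholder
API_KEY = "<your-key>"  # placeholder

def score_image(pixels):
    """Send one preprocessed image to the remote accelerated model."""
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",
    }
    payload = json.dumps({"data": [pixels]})
    response = requests.post(SCORING_URI, data=payload, headers=headers)
    response.raise_for_status()
    return response.json()  # e.g. class probabilities from the served DNN
```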
Video: Demystifying Parallel and Distributed Deep Learning
Torsten Hoefler from ETH Zürich gave this talk at the 2018 Swiss HPC Conference. “Deep Neural Networks (DNNs) are becoming an important tool in modern computing applications. Accelerating their training is a major challenge and techniques range from distributed algorithms to low-level circuit design. In this talk, we describe the problem from a theoretical perspective, followed by approaches for its parallelization.”
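As a concrete, toy-scale illustration of the data-parallel approach discussed in this area, the sketch below averages per-worker gradients the way an allreduce would in a real distributed training system. It uses a simple linear model in NumPy and is not code from the talk.

```python
# Toy sketch of synchronous data-parallel SGD: each "worker" computes a
# gradient on its own data shard, then gradients are averaged (emulating an
# allreduce) and every replica applies the identical update.
import numpy as np

rng = np.random.default_rng(0)
n_workers, n_features = 4, 8
w = np.zeros(n_features)  # model weights, replicated on every worker

# Synthetic regression data, split into one shard per worker.
X = rng.normal(size=(1024, n_features))
y = X @ rng.normal(size=n_features) + 0.1 * rng.normal(size=1024)
shards = np.array_split(np.arange(1024), n_workers)

def local_gradient(w, idx):
    """Gradient of mean-squared error on one worker's shard."""
    Xi, yi = X[idx], y[idx]
    return 2.0 * Xi.T @ (Xi @ w - yi) / len(idx)

lr = 0.01
for step in range(200):
    grads = [local_gradient(w, idx) for idx in shards]  # done in parallel in practice
    g = np.mean(grads, axis=0)                          # emulated allreduce
    w -= lr * g                                         # same update on all replicas

print("final training loss:", np.mean((X @ w - y) ** 2))
```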