Nvidia Deep Learning Accelerator Will Be Integrated into Arm’s Machine Learning Platform
Just a few days ago, Nvidia and Arm announced a new partnership to bring deep learning inferencing to the billions of mobile, consumer electronics, and Internet of Things devices worldwide.
The Nvidia Deep Learning Accelerator (NVDLA), based on Nvidia’s Xavier autonomous-machine system on a chip, is a free and open architecture that promotes a standard way of designing deep learning inference accelerators. Arm and Nvidia aim to integrate the NVDLA architecture into Arm’s Project Trillium platform for machine learning.
The partnership will make it simpler for IoT chip companies to integrate AI into their designs and help put intelligent, affordable products into the hands of billions of consumers worldwide.
“This is a win/win for IoT, mobile and embedded chip companies looking to design accelerated AI inferencing solutions,” said Karl Freund, lead analyst for deep learning at Moor Insights & Strategy. “NVIDIA is the clear leader in ML training and Arm is the leader in IoT end points, so it makes a lot of sense for them to partner on IP.”
The integration of NVDLA with Project Trillium will give deep learning developers high levels of performance as they leverage Arm’s flexibility and scalability across a wide range of IoT devices.