Nvidia Open Sourced Its Deep Learning Accelerator (DLA) Module
This week the company released as open source the design of a chip module it made to power deep learning in cars, robots, and smaller connected devices such as cameras. That module, the DLA (deep learning accelerator), is somewhat analogous to Apple’s neural engine.
Nvidia plans to start shipping it next year in a chip built into a new version of its Drive PX computer for self-driving cars, which Toyota plans to use in its autonomous-vehicle program.
Arguing that “we cannot address all the markets out there”, Deepu Talla, Nvidia’s vice president for autonomous machines, says he wants to help AI chips reach more markets than Nvidia can serve on its own.
While his unit works to put the DLA in cars, robots, and drones, he expects others to build chips that put it into diverse markets ranging from security cameras to kitchen gadgets to medical devices.
Creating a web of companies building on its chip designs would also help Nvidia undermine rivals’ efforts to market their own AI chips and build ecosystems around them.
However, not everybody welcomed the news. In a tweet this week, one Intel engineer called Nvidia’s open-source tactic a “devastating blow” to startups working on deep learning chips.
What do you think about it?