Intel's Dedicated Deep Learning Chips are Out of this World
Published: 4-21-2021
Deep learning software is already changing the world with amazing new applications. Now Intel is lighting a fire under the industry with its latest dedicated deep learning hardware accelerator: Spring Crest.

Deep learning is not a new concept, but it’s certainly a computing technology that’s finding a lot of applications these days. If you didn’t know, deep learning is a segment of machine learning, which is in turn a part of the larger field of artificial intelligence. Deep learning uses simulated neural networks to solve computing problems that are typically too hard or inefficient to solve using traditional programming methods. The virtual neural net learns solutions to tough problems such as recognizing and reconstructing faces, which is how “Deepfake” videos are made. It’s an incredibly powerful approach, but also one that’s murder on traditional computing hardware. Deep learning thrives on massively parallel processor architectures, yet even beefy GPUs can need several days to process something like a Deepfake video.
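To see why this workload begs for parallel hardware, here is a rough sketch (in NumPy, not anything resembling Intel's or Nvidia's actual code) of the math inside a single neural-network layer. The layer sizes are made up for illustration; the point is that the core operation is one big matrix multiply, and real networks chain thousands of far larger ones.

```python
import numpy as np

# One dense neural-network layer: output = activation(W @ x + b).
# The matrix multiply below is ~100,000 independent multiply-adds,
# which is exactly the kind of work GPUs and dedicated accelerators
# execute in parallel instead of one at a time.

rng = np.random.default_rng(0)

x = rng.standard_normal(784)          # input, e.g. a flattened 28x28 image
W = rng.standard_normal((128, 784))   # weights for a layer of 128 neurons
b = np.zeros(128)                     # biases

z = W @ x + b                         # 128 x 784 multiply-accumulate grid
a = np.maximum(z, 0.0)                # ReLU activation

print(a.shape)                        # one activation per neuron: (128,)
```

Training repeats this forward pass (plus a matching backward pass) millions of times over huge datasets, which is where the days of GPU time go.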
Hardware makers such as Nvidia have begun to include hardware-level deep learning optimizations, such as in the latest RTX GPUs. That technology is already finding a place in data centers where AI applications can really stretch their legs.
Now Intel is throwing down the gauntlet with the Spring Crest deep learning accelerator, an incredibly impressive piece of hardware built to push deep learning performance further.
Deep Numbers
The number “27 billion” is huge no matter how you slice it. It’s hard to imagine 27 billion of anything. Try it. Try imagining 27 billion rubber ducks. Can you?
So the fact that this chip sports 27 billion transistors is amazing. For comparison, the Titan RTX “only” has 18.6 billion transistors, and that thing is a beast. The proper name for the Spring Crest-based chip is the Nervana Neural Network Processor for Training, or NNP-T for short.
It’s a 24-core unit with a total die area of 688 square millimeters. Again, for comparison, the Titan RTX chip, with over eight billion fewer transistors, has an area of 754 square millimeters. So the NNP-T packs its transistors into a remarkably dense process.
This chip will be employed in data centers to handle complex AI tasks such as voice synthesis, machine vision, and language translation. The solution isn’t just a processor, either: it packs a new memory architecture approach that uses HBM2 memory, 32GB of it to be precise.
