

The Cerebras CS-1 Rewrites the AI Hardware Book


CPU development is still moving at a fast pace, even if it can feel like we only get incremental improvements from year to year. Compare a workstation of today to one from five years ago, however, and you'll see big gains in performance, specialized capabilities, and power consumption. The biggest change has been the inclusion of more and more discrete CPU cores.

You no longer need a massive, expensive dual- or quad-socket motherboard to get a computer with a double-digit core count. The quad-core chip is essentially dead as a mainstream CPU, and thanks to processors such as AMD's Ryzen line, even non-professional computers are packed with CPU cores.

Sometimes, however, a leap arrives all at once, and the recently announced Cerebras CS-1 represents one of the biggest hardware leaps we've seen. If the claims hold up, this computer rewrites what we know about CPU architecture.

Images property of Cerebras


The Wafer-Scale Engine

King of the Hill


The most striking thing about this computer is the CPU at its heart. It's called the Wafer-Scale Engine and, as the name suggests, it is the size of an entire silicon wafer: the same wafer that would normally be diced up to make many individual CPUs.

This is the first and only trillion-transistor CPU, and those transistors form 400,000 AI-optimized cores, which Cerebras claims makes the CS-1 the fastest computer in the world at processing AI-type tasks.

This monstrous CPU can ingest 1.2 terabits of data per second, fed in over 100 Gigabit Ethernet links. It works with existing machine learning frameworks such as TensorFlow and PyTorch, which means data centers already set up for machine learning work can install a CS-1 today and it should just work!
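
To make that framework-compatibility claim concrete, here is a minimal sketch of the kind of code such a system is meant to run unchanged: a plain PyTorch model and a single training step. Nothing in it is Cerebras-specific; the vendor's software stack is what would map this onto the CS-1, and the device line below is a generic placeholder rather than any actual Cerebras API.

# Minimal sketch: a standard PyTorch model and one training step.
# The compatibility claim is that code like this keeps working as-is,
# with the accelerator-specific plumbing hidden in the vendor's stack.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")  # placeholder target, not a Cerebras API

model = nn.Sequential(
    nn.Linear(784, 512),
    nn.ReLU(),
    nn.Linear(512, 10),
).to(device)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# One training step on a random batch, just to show the shape of the code.
inputs = torch.randn(64, 784, device=device)
targets = torch.randint(0, 10, (64,), device=device)

optimizer.zero_grad()
loss = loss_fn(model(inputs), targets)
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.4f}")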

Smaller Than You Think


The truly mind-blowing thing about the CS-1 is not its total computational power; we can already reach that with existing systems through sheer brute force. The miracle is that the CS-1 stands just 15 rack units tall, taking up about a third of the space in a typical datacenter rack, and it runs off the standard power feeds these centers already have.
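
As a quick sanity check on that "about a third" figure, assuming the common 42U full-height rack (an assumption on our part), 15U works out to roughly 36% of the rack, so the claim holds up.

# Rough check of the rack-space claim; assumes a common 42U full-height rack.
cs1_height_u = 15    # CS-1 chassis height in rack units
rack_height_u = 42   # typical full-height rack (assumed)
print(f"CS-1 occupies {cs1_height_u / rack_height_u:.0%} of a {rack_height_u}U rack")  # about 36%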

An Impossible Problem Solved


Making a functional wafer-scale chip has been proposed before, but we were pretty surprised to see a ready-to-buy product announced using this technology. Of course, we will have to see how well it works in practice once people actually start using it in data centers, but based on what's claimed it really feels like a revolutionary product!