We've been computer hardware geeks for a long time, so we're pretty used to habituating to new performance levels on a regular basis. It's not hard to remember wondering how the heck someone would ever fill that 1GB hard drive or why anyone would need 128MB of RAM. In spite of this, the Radeon Pro SSG is still a surprising card which really exceeds expectations even for us hardened high-end computing folk.
First of all, the GPU is more or less the same model we saw on the Vega Frontier Edition. Which is to say that it's a 4,096 stream-processor unit that puts out about 12.29 TFLOPS of single-precision (FP32) compute. If you're using 16-bit data types, even greater throughput is possible thanks to double-rate 16-bit math. That covers many image-manipulation and video-processing jobs.
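To see where numbers like these come from, here's a back-of-the-envelope calculation. The 12.29 TFLOPS figure follows from the stream-processor count if we assume a boost clock around 1.5 GHz (the clock is our assumption, not stated in the article), since each stream processor can do a fused multiply-add, i.e. two floating-point operations, per cycle:

```python
# Theoretical peak throughput for a GPU whose ALUs do one FMA per cycle:
# peak FLOPS = stream_processors * 2 ops (multiply + add) * clock in Hz
stream_processors = 4096
boost_clock_hz = 1.5e9  # assumed ~1.5 GHz boost clock (not given in the article)

fp32_tflops = stream_processors * 2 * boost_clock_hz / 1e12
fp16_tflops = fp32_tflops * 2  # packed math: two FP16 ops per FP32 lane

print(f"FP32: {fp32_tflops:.2f} TFLOPS")  # ~12.29, matching the quoted spec
print(f"FP16: {fp16_tflops:.2f} TFLOPS")
```

The doubling on the last line is the "double-rate 16-bit math" mentioned above: each 32-bit lane packs two 16-bit values and operates on both at once.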
AMD says this means the GPU can handle 8K RAW footage at 30+ frames per second. This puts it in an exclusive club of one, for those people who are somehow already creating 8K content.
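For a sense of why 8K RAW at 30 fps is such a demanding target, we can estimate the raw data rate. The frame size and bit depth below are our assumptions (7680×4320 "8K UHD" frames with 12-bit Bayer RAW samples and no compression), not figures from AMD:

```python
# Rough data-rate estimate for uncompressed 8K RAW video playback.
# Assumptions: 7680x4320 frames, 12 bits per photosite (typical for RAW),
# one sample per pixel (Bayer pattern), 30 frames per second.
width, height = 7680, 4320
bits_per_sample = 12
fps = 30

bytes_per_frame = width * height * bits_per_sample / 8
gb_per_second = bytes_per_frame * fps / 1e9
print(f"{gb_per_second:.2f} GB/s")  # ~1.49 GB/s of sustained reads
```

Roughly 1.5 GB/s of sustained throughput is beyond what a single SATA SSD can deliver, which hints at why AMD put fast local storage on the card itself.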
The GPU is not the most remarkable technological feature of the SSG, however. The real mystery comes from its unique setup. We have 16GB of 2048-bit HBM2 memory, which is pretty crazy on its own. It seems like just yesterday we saw the Radeon R9 Fury X with its fast but limited HBM memory. Thanks to the state of the technology then, you couldn't have more than 4GB of it.
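That 2048-bit figure is the memory bus width, and it translates into bandwidth in a simple way: bytes per transfer times transfers per second. The per-pin transfer rate below is our assumption based on Vega-era HBM2, not a figure quoted in the article:

```python
# HBM2 peak bandwidth = bus width in bytes * effective per-pin transfer rate.
bus_width_bits = 2048
transfer_rate_per_pin = 1.89e9  # assumed ~1.89 GT/s (Vega-era HBM2; not stated here)

bandwidth_gb_s = bus_width_bits / 8 * transfer_rate_per_pin / 1e9
print(f"{bandwidth_gb_s:.0f} GB/s")  # ~484 GB/s
```

The wide-but-slow-clocked bus is the whole point of HBM: a 2048-bit interface at under 2 GT/s per pin outruns a 256-bit GDDR bus clocked several times higher.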
Back then we said that this would limit professional applications, but clearly that concern is a thing of the past.
Incredibly, this HBM breakthrough isn't the star attraction either. No, the "SSG" in the name of this product derives from its secondary storage. You heard that right, this is a GPU card that also has secondary storage, the way a computer has an SSD or hard drive.
"SSG" is short for Solid State Graphics, and it's mostly what it sounds like. It's high-speed solid-state storage that lives on the card itself and can be accessed by the GPU over a dedicated NVMe interface. This means that if you want to use the card for data mining and analysis, it can now hold big data sets directly in its own local storage.
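The access pattern this enables is classic out-of-core computing: stream a dataset far larger than working memory in chunks from fast local storage. The sketch below illustrates that pattern generically on the host with Python's `mmap`; it is not AMD's SSG API, just the kind of chunked streaming the on-card storage is meant to accelerate:

```python
import mmap
import os
import tempfile

CHUNK = 1 << 20  # process 1 MiB at a time instead of loading the whole file

def checksum_file(path):
    """Stream a file chunk-by-chunk via mmap, never holding it all in RAM."""
    total = 0
    with open(path, "rb") as f:
        with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
            for offset in range(0, len(mm), CHUNK):
                total += sum(mm[offset:offset + CHUNK])
    return total

# Demo with a small temporary file standing in for a "big data" set
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(bytes(range(256)) * 16)
print(checksum_file(tmp.name))
os.remove(tmp.name)
```

On a conventional setup every chunk makes a round trip over PCIe from a system drive; with storage on the card, that traffic stays local to the GPU.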
This has never been done before and any data scientist or GPU compute fan should take notice immediately. It's a weird card, but there is method to this madness. Only time will tell if this is a trend that Nvidia will for once be the one to copy.