AMD EPYC 9005 5th-Gen Turin Zen 5 CPUs Arrive With Up to 192 Cores To Advance AI
Indeed, just look at this slide from AMD. The company was struggling in the server market when the original EPYC processors came on the scene, as the venerable Opteron brand had lost most of its cachet by 2018. It took a few years, but AMD has roared back to prominence in the enterprise, and now holds some 34% of the server market versus Intel's historically total dominance.
AMD's Zen 5 Arrives To AI And Cloud Servers With EPYC 9005 Series
Today we're looking at AMD's new EPYC 9005 processors, and if you know a bit about them from previous rumblings, there's ample reason to be excited. If you don't know what the buzz is all about, please consult AMD's handy-dandy decoder diagram above. In short, the EPYC 9005 family, code-named "Turin," comprises the highest-performance server processors from AMD, based on its latest Zen 5 microarchitecture.
Yes, indeed: Zen 5 has found its way to Socket SP5, the same platform as the company's extant "Genoa" processors. It brings along all the bounty of the Zen 5 architecture, which we've already covered in detail, so we won't revisit it at length. The short version is that the functional units of the core have been redesigned from the ground up with an emphasis on massive AVX-512 throughput. 5th-Gen EPYC also brings a broader range of configurations, including chips tuned for up to 5 GHz operation as well as parts with as many as 384 logical cores (192 physical cores with SMT) in a single socket.
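If you're curious which AVX-512 extensions a given chip actually exposes, Linux reports CPU feature flags in /proc/cpuinfo. Here's a small sketch that filters a flags line; the sample string is illustrative only, not a complete Zen 5 flag list:

```python
def avx512_features(flags_line: str) -> set[str]:
    """Return the AVX-512 feature flags present in a /proc/cpuinfo 'flags' line."""
    return {f for f in flags_line.split() if f.startswith("avx512")}

# Illustrative sample; a real Zen 5 part reports many more flags.
# On Linux you'd pull this from open("/proc/cpuinfo") instead.
sample = "fpu sse2 avx avx2 avx512f avx512dq avx512bw avx512vl avx512_vnni"
print(sorted(avx512_features(sample)))
```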
That's right: today's launch actually includes two separate silicon designs of EPYC CPU, all captured under the "Turin" umbrella. For those who want the best single-threaded performance, there are EPYC 9005 processors based on standard Zen 5 CPU cores, and for folks who want massive multi-core throughput, there are EPYC 9005 processors using the condensed Zen 5c cores. The latter chips actually use fewer CCDs than the standard models, because each Zen 5c CCD carries two CCXs, each with eight cores. That's right: a full sixteen Zen 5c cores per CCD on the top-end "Turin" parts.
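The core math above is easy to sanity-check. A quick sketch, assuming the two-CCX, eight-cores-per-CCX layout described here (the 12-CCD figure for the 192-core flagship is our own arithmetic, not an AMD-published spec):

```python
# Sanity check on the "Turin" dense topology described above.
CCX_PER_CCD = 2           # two CCXs per Zen 5c CCD
CORES_PER_CCX = 8         # eight cores per CCX
SMT_THREADS_PER_CORE = 2  # two threads per core with SMT

cores_per_ccd = CCX_PER_CCD * CORES_PER_CCX        # 16 Zen 5c cores per CCD
ccds_for_flagship = 192 // cores_per_ccd           # CCDs needed for 192 cores
logical_cores = 192 * SMT_THREADS_PER_CORE         # threads with SMT enabled

print(cores_per_ccd, ccds_for_flagship, logical_cores)  # 16 12 384
```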
The biggest change in EPYC "Turin" is obviously the move to the Zen 5 architecture, but that's not all that's new. The peak TDP of 5th-Gen EPYC is now a scorching 500 watts, while the maximum memory speed has increased to 6400 MT/s. The ECC DDR5 controller now supports Dynamic Post Package Repair on supported DIMMs, and the PCIe controllers gain link encryption for the first time, alongside support for CXL 2.0.
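For context on what 6400 MT/s buys you, here's a back-of-the-envelope calculation of theoretical peak memory bandwidth, assuming Socket SP5's twelve 64-bit DDR5 channels (real-world figures will land below this ceiling):

```python
# Theoretical peak DRAM bandwidth: transfers/s x bytes per transfer x channels.
TRANSFERS_PER_S = 6400 * 10**6  # DDR5-6400: 6400 mega-transfers per second
BYTES_PER_TRANSFER = 8          # each 64-bit channel moves 8 bytes per transfer
CHANNELS = 12                   # Socket SP5 memory channels

peak_gb_s = TRANSFERS_PER_S * BYTES_PER_TRANSFER * CHANNELS / 10**9
print(round(peak_gb_s, 1))      # 614.4 (GB/s)
```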
AMD EPYC 9005 Series Performance Claims
AMD makes a lot of performance claims and comparisons in the slide deck, including the above bombshell that 5th-gen EPYC will offer as much as a 37% performance-per-clock uplift over the previous generation. Many readers will recall AMD's inflated IPC gain numbers from the launch of the Ryzen 9000 Zen 5 processors, but these figures are more likely to be legit, for two reasons: the enterprise division can't afford to overpromise the way the consumer side can, and HPC and AI workloads likely do benefit tremendously from the AVX-512-centered architectural upgrades in Zen 5.
We're not going to go over every single performance claim that AMD makes in its slide deck, but the company isn't shy about direct comparisons against Intel's competing Xeon parts from the Emerald Rapids family. The company claims that, in video encoding, a 192-core EPYC part can offer quadruple the performance of a 64-core Xeon, and slightly more than double the performance of a 96-core EPYC "Genoa" part. Fair enough, given it has double the cores, but we have to point out that the TDP goes up by 140 watts, too.
We were more impressed by this slide, where AMD levels out the core counts and compares apples-to-apples. According to AMD, EPYC "Turin" offers 60% better per-core performance than 5th-gen Xeon across a variety of Finite Element Analysis and Computational Fluid Dynamics simulations. Both are methods for detailed physical simulation, modeling the forces and stresses acting on virtual objects.
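AMD's per-core framing is straightforward to reproduce yourself: divide aggregate throughput by core count before comparing. A minimal sketch, using purely hypothetical scores (not AMD's data):

```python
def per_core_score(total_score: float, cores: int) -> float:
    """Normalize an aggregate benchmark score to a per-core figure."""
    return total_score / cores

# Hypothetical numbers chosen only to illustrate the arithmetic.
turin_per_core = per_core_score(1600.0, 64)  # 25.0 per core
xeon_per_core = per_core_score(1000.0, 64)   # 15.625 per core
uplift = turin_per_core / xeon_per_core - 1.0
print(f"{uplift:.0%}")                       # 60%
```

Normalizing this way removes core count as a variable, which is exactly why AMD's 64-core-vs-64-core slide is more persuasive than the 192-vs-64 comparison earlier.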
EPYC CPUs have traditionally been more about core count than per-core performance, so the clock rates have usually topped out around 4 GHz. Well, this generation, AMD is introducing multiple hot-clocked EPYC CPUs, including the 16-core EPYC 9175F, the 32-core EPYC 9375F, the 48-core EPYC 9475F, and the 64-core EPYC 9575F. These parts may have lower core counts, but they can boost as high as 5 GHz—a direct attack on an area that Intel has historically dominated.
AMD says that the speedy 64-core EPYC 9575F offers the best performance on the market for use as a host processor with compute-heavy GPU accelerators. Using eight Instinct MI300X GPUs, AMD claims up to 20% better performance under an AI training workload with the EPYC 9575F as the host CPU, versus an Intel Xeon 8592+. That could translate to a lot of saved time in a big AI cluster.
AMD and NVIDIA have already partnered to offer EPYC CPUs in NVIDIA's HGX and MGX networked compute servers in the past, and that continues with the 5th-gen EPYC chips. The slide above details which EPYC processors will be available in which system configurations, but all of them come with NVIDIA's GPUs onboard. According to AMD, a configuration with eight Hopper H100 GPUs offers 20% better performance on AI inference when those GPUs are paired with an EPYC 9575F than with a Xeon 8592+.
AMD Infinity Guard For Trusted IO Security
It's little more than a footnote in AMD's slide deck, but 5th-Gen EPYC does introduce the concept of Trusted I/O, extending the trust boundary to include external devices. In essence, this is the next layer of "Infinity Guard," AMD's name for its security package comprising multiple layers of hardware- and firmware-based protections. Trusted I/O with PCIe link encryption makes even physical-layer attacks against secured servers more difficult, offering datacenter owners greater peace of mind.
Finally, here's the chart you've probably been waiting for. This shows the entirety of the EPYC "Turin" lineup, at least at launch, and includes pricing for the new parts. You'll notice that the chips using the condensed Zen 5c cores have half the L3 cache per core compared to the Zen 5 chips; that's the same split seen between the Zen 4 "Genoa" and Zen 4c "Bergamo" parts. Depending on the workload, that may not matter much.
According to AMD, its new EPYC CPUs are the most powerful on the planet for "cloud, enterprise, & AI". We're eager to dig into our own independent benchmarks to verify AMD's bold claims of big per-core performance jumps, much as we have in previous server CPU comparisons. If you're keen to get your hands on one of the new chips, start pestering your favorite server OEM (Dell, HP, Lenovo, Supermicro, etc.) sooner rather than later, because these parts are likely to be in very high demand.