Ice Lake-SP is finally ready for prime time. Intel’s latest server CPU architecture debuts at the heart of the company’s 3rd Generation Xeon Scalable processors, and closes some performance gaps vis-a-vis AMD’s Epyc family.
Intel’s 2nd Generation Xeon Scalable family debuted almost exactly two years ago, only to be run over by AMD’s Epyc a few months later. While Intel still held an edge in specific workloads and scenarios, AMD’s ability to pack up to 64 cores in a socket, combined with features such as PCIe 4.0 support, has boosted the company’s server market share. AMD’s third-generation Epyc CPUs (codenamed Milan) debuted back in mid-March, and now we’ve got Ice Lake-SP to answer them.
Ice Lake-SP offers up to 40 CPU cores (a 1.42x increase over top-end Cascade Lake), with TDPs ranging from 85W to 270W. Typical L3 cache allocation is 1.5MB per core, for a maximum of 60MB, though a handful of midrange parts offer as much as 2.25MB per core.
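For the curious, the headline core-count ratio follows directly from the figures above; a quick sketch using only numbers already cited:

```python
# Core-count uplift: 40-core Ice Lake-SP vs. 28-core top-end Cascade Lake
ice_lake_cores = 40
cascade_lake_cores = 28
uplift = ice_lake_cores / cascade_lake_cores
# 40 / 28 ≈ 1.4286, which Intel truncates to the quoted 1.42x
print(f"Core-count uplift: {uplift:.4f}x")  # → Core-count uplift: 1.4286x
```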
Intel’s Cascade Lake previously topped out at six memory channels, but Ice Lake-SP boosts this to eight, establishing parity with AMD’s Epyc. Adding cores has cost Intel some clock speed: the 28-core Xeon Platinum 8280 has a base clock of 2.7GHz and a maximum single-core turbo of 4GHz, with an all-core boost of 3.7GHz, 3.5GHz, or 2.9GHz depending on whether non-AVX, AVX2, or AVX-512 code is running. The 40-core Xeon Platinum 8380 has a 2.3GHz base clock, a 3GHz all-core boost, and a 3.4GHz maximum clock.
If we compare at the 28-core mark, the Xeon Gold 6348 replaces CPUs such as the Xeon Platinum 8280 and the Xeon Gold 6258R. The 2nd Gen Xeon Gold 6258R has a 2.7GHz base clock and a 4GHz boost clock, while the 3rd Gen Xeon Gold 6348 has a 2.6GHz base clock, a 3.4GHz all-core turbo, and a 3.5GHz single-core boost clock.
Intel’s claimed performance improvements come from Ice Lake’s IPC (instructions per clock) uplift over the company’s older 14nm architectures and from its increased core count. The fact that Intel claims only a 1.46x average performance increase implies that some Ice Lake-SP CPUs clock a fair bit slower than their 14nm counterparts.
Intel has standardized some features across the 3rd Gen Xeon family and offers some SKUs at lower prices. All 3rd Gen Xeon Scalable CPUs support eight channels of DDR4-3200 at two DIMMs per channel (2DPC) and up to 4TB of RAM per socket, and all of them support SGX enclaves (though enclave size varies from 8GB to 512GB, with 64GB being standard). All chips use three UPI links running at 11.2GT/s, all offer 64 lanes of PCIe 4.0, and every new Xeon supports Optane. Intel previously price-banded its CPUs by how much memory they supported, so standardizing on 4TB per socket is a welcome change.
Anandtech has published a review of the 3rd Gen Xeon Scalable family, and it highlights just how poorly Intel has been competing in this market of late. On the one hand, Anandtech’s findings confirm that Intel’s marketing claims around this chip are basically honest. The gains are real and impressive.
Unfortunately, these gains aren’t enough for Intel to shake AMD. Anandtech finds that the Xeon Platinum 8380 offers up to a 1.18x improvement in performance per watt, but the mid-stack, 28-core Xeon Gold 6330 shows very little improvement over its Cascade Lake predecessor other than price; the 6330’s list price is roughly half that of the Xeon Gold 6258R.
The problem with Ice Lake in servers, as in laptops, is that Intel has had to give back a fair chunk of clock speed to win these improvements. Single-threaded boost clocks drop from 4GHz to 3.4GHz, while all-core clocks have fallen by about 10 percent.
Intel claims it will still deliver Sapphire Rapids in late 2021, but we expect real volume shipments won’t start until Q1 2022 at the earliest. Ice Lake-SP is a large step forward for Intel, with real gains in efficiency and per-core performance, but the company is fighting back from a 2.28x deficit in per-socket core counts. With Ice Lake-SP at 40 cores, AMD’s density advantage drops to “just” 1.6x. It wasn’t reasonable to expect Ice Lake-SP to close that gap in a single bound, and it doesn’t, but it shows enough per-thread and per-system uplift to put Intel on a path to compete more effectively with what AMD and ARM bring to the server market.
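The density figures above follow from the per-socket core counts already cited (64-core Epyc, 28-core Cascade Lake, 40-core Ice Lake-SP); a quick check:

```python
# AMD's per-socket core-count advantage, before and after Ice Lake-SP
epyc_cores = 64
gap_vs_cascade_lake = epyc_cores / 28  # 64 / 28 ≈ 2.2857, quoted as 2.28x
gap_vs_ice_lake = epyc_cores / 40      # 64 / 40 = 1.6x exactly
print(f"Density gap: {gap_vs_cascade_lake:.4f}x -> {gap_vs_ice_lake:.1f}x")
```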
- AMD’s Milan Brings Zen 3 to Epyc, With Mostly Positive Results
- Intel Claws Back Market Share From AMD in Desktop, Mobile
- Intel May Change Its Process Node Numbering to Align With TSMC, Samsung