AMD Accelerated Data Center Event Live Blog


At 4 PM UTC, one hour from the time of this post, AMD will premiere its Accelerated Data Center keynote address by CEO Dr. Lisa Su. The company is expected to make some major announcements for the enterprise space. This is also AMD's first major series of announcements since Intel launched its 12th Gen Core "Alder Lake" processors, which set the tone for what's to come from Intel in the enterprise space (Xeon "Sapphire Rapids"). Since we've been teased with announcements for both the EPYC and Instinct brands, we broadly expect two product lines:

First, the company could announce the EPYC "Milan-X" line of server processors leveraging 3D Infinity Cache memory, which triples the L3 cache per chiplet and, AMD claims, significantly improves performance in memory-intensive applications. This should also give you an idea of whether any upcoming Ryzen desktop processor based on the refreshed chiplet could live up to its claim of "up to 15% gaming performance boost." The next-generation Instinct MI200 series compute accelerators are equally important, as they bring the CDNA2 compute architecture to center stage, taking the competition to NVIDIA's A100 Tensor Core accelerators and Intel's upcoming "Ponte Vecchio" Xe-HPC accelerators. We will go live when the stream begins in an hour.

15:59 UTC: It's time to get the show on the road, as CEO Dr. Lisa Su takes center stage.

16:02 UTC: AMD categorizes the four workloads dominating datacenters today.

16:04 UTC: New cores, new packaging tech, new CPUs and GPUs

16:05 UTC: New EPYC processor reveal, all new socket

16:06 UTC: Meta joins AMD EPYC cloud computing ecosystem

16:06 UTC: Looks like a new socket for sure

16:07 UTC: Chiplets with 3D Infinity Cache confirmed

16:08 UTC: TSMC 3D chiplet-stacking technology leveraged for 3D Infinity Cache

16:09 UTC: AMD Milan-X EPYC, existing socket, with up to 64 cores, but a mammoth 804 MB cache per socket

16:10 UTC: Fully compatible with SP3 platforms with a UEFI update.

16:11 UTC: First view of 3D V-Cache on AMD EPYC

16:12 UTC: Technical computing is the focus of this processor, particularly memory-intensive applications.

16:12 UTC: 96 MB L3 cache per chiplet reduces memory subsystem latencies significantly
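
As a quick back-of-envelope, that 96 MB per chiplet squares with the 804 MB per-socket figure from 16:09 once the L2 and L1 caches are counted in. The sketch below is our own math, assuming eight chiplets per socket and standard Zen 3 cache sizes (512 KB L2 and 64 KB L1 per core), none of which AMD broke out on stage:

```python
# Our back-of-envelope for the "Milan-X" 804 MB per-socket cache claim.
# Chiplet count and per-core cache sizes are assumptions based on "Milan"/Zen 3, not keynote figures.
CHIPLETS_PER_SOCKET = 8      # assumed: eight CCDs per 64-core socket
L3_PER_CHIPLET_MB   = 96     # stated on stage (32 MB on-die + 64 MB stacked, per our assumption)
CORES_PER_SOCKET    = 64
L2_PER_CORE_KB      = 512    # assumed standard Zen 3 L2
L1_PER_CORE_KB      = 64     # assumed 32 KB instruction + 32 KB data per core

l3_mb = CHIPLETS_PER_SOCKET * L3_PER_CHIPLET_MB       # 768 MB
l2_mb = CORES_PER_SOCKET * L2_PER_CORE_KB / 1024      # 32 MB
l1_mb = CORES_PER_SOCKET * L1_PER_CORE_KB / 1024      # 4 MB
print(l3_mb + l2_mb + l1_mb)                          # 804.0 MB of cache per socket
```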

16:13 UTC: EDA verification workload runs 66% faster than on the competing Intel solution

16:15 UTC: 3D V-Cache benefits a broad set of compute applications

16:16 UTC: Azure to debut “Milan-X” processor-powered instances.

16:18 UTC: Q1-2022 general availability of Milan-X

16:20 UTC: We now move on to CDNA2 compute processors.

16:20 UTC: Instinct MI200: 20% faster, and up to 4.9x the HPC performance of the "competition."

16:22 UTC: 58 billion transistors, TSMC 6 nm, 220 compute units, 128 GB HBM2E memory

16:23 UTC: The MI200 comes in two form factors.

16:23 UTC: AMD just beat Intel to multi-die GPUs, since Intel canned Xe-HP

16:24 UTC: Performance claims:

16:25 UTC: 3.2 TB/s memory bandwidth.
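
For context, here is one way the 128 GB and 3.2 TB/s figures could break down. The stack count, per-stack capacity, and pin speed below are our assumptions (eight HBM2E stacks, four per die, at 3.2 Gbps over 1024-bit interfaces), not numbers given on stage:

```python
# A plausible breakdown of the MI200 memory figures; the stack configuration is our assumption.
STACKS         = 8       # assumed: four HBM2E stacks per die, two dies
GB_PER_STACK   = 16      # assumed per-stack capacity
PIN_SPEED_GBPS = 3.2     # assumed HBM2E data rate per pin
BUS_WIDTH_BITS = 1024    # bus width per HBM2E stack

per_stack_gbs = PIN_SPEED_GBPS * BUS_WIDTH_BITS / 8   # 409.6 GB/s per stack
print(STACKS * GB_PER_STACK)                          # 128 GB total capacity
print(STACKS * per_stack_gbs / 1000)                  # ~3.28 TB/s aggregate bandwidth
```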

16:26 UTC: More competitive performance claims

16:28 UTC: Debuting 3rd Gen Infinity Fabric, 800 GB/s aggregate bandwidth, and memory coherence
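
The 800 GB/s aggregate figure would be consistent with eight links at roughly 100 GB/s each; the link count and per-link bandwidth below are our reading, not something AMD itemized in the keynote:

```python
# A possible breakdown of the 800 GB/s aggregate Infinity Fabric bandwidth; both values are assumptions.
LINKS        = 8      # assumed third-gen Infinity Fabric links per accelerator
GBS_PER_LINK = 100    # assumed per-link bandwidth

print(LINKS * GBS_PER_LINK)  # 800 GB/s aggregate
```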

16:29 UTC: First picture of Oak Ridge National Laboratory's Frontier, the first exascale supercomputer:

16:34 UTC: "Genoa" is Zen 4-based and built on 5 nm; AMD claims that Genoa will be the "highest performance processor for general-purpose compute"

16:35 UTC: Up to 96 cores, PCIe Gen 5, CXL, DDR5 memory

16:36 UTC: Zen 4c is optimized for scale-out cloud performance: the "Bergamo" EPYC processor packs 128 cores with the same I/O as "Genoa"

16:37 UTC: Updated roadmap

16:38 UTC: And that's a wrap. A sprightly series of major updates that should shake things up in the Intel camp. The core-count increase to 96-128, along with the expected generational IPC increase and next-gen I/O, could be AMD's play against the Xeon "Sapphire Rapids." Thanks for joining us.