Design issues may delay the launch of NVIDIA's advanced Blackwell AI chips


NVIDIA may face delays in releasing its newest artificial intelligence chips due to design issues, according to anonymous sources involved in chip and server hardware production cited by The Information. The delay could extend to three months or more, potentially affecting major customers such as Meta, Google, and Microsoft. An unnamed Microsoft employee and another source claim that NVIDIA has already informed Microsoft about delays affecting the most advanced models in the Blackwell AI chip series. As a result, significant shipments are not expected until the first quarter of 2025.

When approached for comment, an NVIDIA spokesperson did not address communications with customers regarding the delay but stated that "production is on track to ramp" later this year. The Information reports that Microsoft, Google, Amazon Web Services, and Meta declined to comment on the matter, while Taiwan Semiconductor Manufacturing Company (TSMC) did not respond to inquiries.

Update 1:

The production issue was discovered by manufacturer TSMC and involves the processor die that connects two Blackwell GPUs on a GB200 (via Data Center Dynamics).

NVIDIA needs to redesign the chip, which requires a new production test run at TSMC before mass production can begin. Rumors suggest the company is considering a single-GPU version to expedite delivery. In the meantime, the delay leaves TSMC production lines temporarily idle.

Update 2:

SemiAnalysis's Dylan Patel reports in a post on Twitter (now known as X) that Blackwell supply will be considerably lower than anticipated in Q4 2024 and H1 2025. This shortage stems from TSMC's transition from CoWoS-S to CoWoS-L technology, required for NVIDIA's advanced Blackwell chips. Currently, TSMC's AP3 packaging facility is dedicated to CoWoS-S production, while initial CoWoS-L capacity is being installed in the AP6 facility.

Additionally, NVIDIA appears to be prioritizing production of GB200 NVL72 units over NVL36 units. The GB200 NVL36 configuration features 36 GPUs in a single rack with 18 individual GB200 compute nodes. In contrast, the NVL72 design incorporates 72 GPUs, either in a single rack with 18 double GB200 compute nodes or spread across two racks, each containing 18 single nodes.
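
For readers following the rack math, below is a minimal Python sketch that tallies the GPU counts implied by the figures above. It assumes 2 GPUs per single GB200 compute node and 4 per double node (inferred from the 36- and 72-GPU totals); the helper name total_gpus is purely illustrative, not NVIDIA terminology.

```python
# Minimal sketch of the GB200 rack math described above.
# Assumptions (not from NVIDIA documentation): a single GB200 compute node
# carries 2 GPUs, a double node carries 4.

def total_gpus(racks: int, nodes_per_rack: int, gpus_per_node: int) -> int:
    """Return the total GPU count for a given rack layout."""
    return racks * nodes_per_rack * gpus_per_node

# GB200 NVL36: one rack, 18 single compute nodes, 2 GPUs each.
nvl36 = total_gpus(racks=1, nodes_per_rack=18, gpus_per_node=2)

# GB200 NVL72, single-rack variant: 18 double compute nodes, 4 GPUs each.
nvl72_single_rack = total_gpus(racks=1, nodes_per_rack=18, gpus_per_node=4)

# GB200 NVL72, two-rack variant: two racks of 18 single nodes, 2 GPUs each.
nvl72_two_racks = total_gpus(racks=2, nodes_per_rack=18, gpus_per_node=2)

print(nvl36, nvl72_single_rack, nvl72_two_racks)  # 36 72 72
```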