NVIDIA PrefixRL Model Designs 25% Smaller Circuits, Making GPUs More Efficient


When designing integrated circuits, engineers aim to produce an efficient design that is easier to manufacture. If they manage to shrink the circuit, the cost of manufacturing it goes down as well. NVIDIA has published a post on its technical blog describing a technique in which the company uses an artificial intelligence model called PrefixRL. Using deep reinforcement learning, NVIDIA uses the PrefixRL model to outperform traditional EDA (Electronic Design Automation) tools from major vendors such as Cadence, Synopsys, or Siemens/Mentor. EDA vendors usually implement their own in-house AI solutions for silicon placement and routing (PnR); however, NVIDIA's PrefixRL solution seems to be doing wonders in the company's workflow.
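To make the idea concrete, below is a minimal, hypothetical sketch of how a prefix-adder design task could be framed as a reinforcement learning environment: the state is the set of prefix nodes in the graph, and each action adds or removes a node. All class and method names here are illustrative assumptions, not NVIDIA's actual code.

```python
# Hypothetical sketch only: framing prefix-adder design as an RL environment.
# Names and structure are assumptions for illustration, not NVIDIA's PrefixRL code.
from dataclasses import dataclass, field


@dataclass
class PrefixGraphEnv:
    """Toy environment: the state is the set of prefix nodes (i, j) present
    in an n-bit prefix graph; each action adds or removes a single node."""
    n_bits: int = 64
    nodes: set = field(default_factory=set)

    def reset(self):
        # Start from a trivially legal graph (a serial/ripple-style prefix chain).
        self.nodes = {(i, i - 1) for i in range(1, self.n_bits)}
        return frozenset(self.nodes)

    def step(self, action):
        op, node = action  # op is "add" or "remove", node is an (i, j) pair
        if op == "add":
            self.nodes.add(node)
        else:
            self.nodes.discard(node)
        state = frozenset(self.nodes)
        # In a real setup the reward would come from synthesizing the circuit
        # and measuring its area and delay; here it is left as a stub.
        reward = 0.0
        done = False
        return state, reward, done


# Usage example: one arbitrary edit to the toy graph.
env = PrefixGraphEnv(n_bits=8)
state = env.reset()
state, reward, done = env.step(("add", (5, 2)))
print(len(state), reward, done)
```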

The goal of PrefixRL is a deep reinforcement learning model that keeps latency on par with the EDA PnR result while achieving a smaller die area. According to the technical blog, the latest Hopper H100 GPU architecture uses 13,000 instances of arithmetic circuits designed by the PrefixRL AI model. NVIDIA produced a model whose output circuits are 25% smaller than comparable EDA output, all while achieving similar or better latency. Below, you can compare a 64-bit adder design made by PrefixRL with the same design made by an industry-leading EDA tool.
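As a rough illustration of that trade-off, the sketch below shows one plausible reward shape: reward area reduction relative to the EDA baseline and penalize any latency regression beyond it. The function name, units, and weighting are assumptions made for illustration, not NVIDIA's actual reward formulation.

```python
# Illustrative reward sketch only; weights, units, and names are assumptions,
# not NVIDIA's actual PrefixRL reward.
def reward(area_um2: float, delay_ns: float,
           baseline_area_um2: float, baseline_delay_ns: float,
           delay_penalty: float = 10.0) -> float:
    # Positive reward for shrinking area relative to the EDA baseline.
    area_gain = (baseline_area_um2 - area_um2) / baseline_area_um2
    # Penalize any delay regression beyond the baseline target.
    delay_miss = max(0.0, delay_ns - baseline_delay_ns) / baseline_delay_ns
    return area_gain - delay_penalty * delay_miss


# Example: a design 25% smaller at equal delay scores +0.25.
print(reward(area_um2=750.0, delay_ns=1.0,
             baseline_area_um2=1000.0, baseline_delay_ns=1.0))  # 0.25
```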

Training such a model is a compute-intensive task. NVIDIA reports that training the model to design a 64-bit adder circuit took 256 CPU cores per GPU and 32,000 GPU hours. The company developed Raptor, an in-house distributed reinforcement learning platform that takes unique advantage of NVIDIA hardware for this kind of industrial reinforcement learning; you can see how it operates below. Overall, the system is fairly complex and requires a lot of hardware and input; however, the results pay off in smaller and more efficient GPUs.
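For a sense of what such a distributed setup involves, here is a heavily simplified actor/learner sketch in the general style of distributed RL platforms: several CPU-bound actor processes generate experience while a learner process consumes it to update the policy. None of these names correspond to Raptor's real API, which NVIDIA has not published in this form.

```python
# Illustrative actor/learner pattern only; all names are assumptions and do not
# reflect Raptor's actual API.
import multiprocessing as mp
import queue
import random


def actor(experience_q: mp.Queue, episodes: int = 100) -> None:
    """CPU-bound worker: proposes circuit edits and reports transitions."""
    for _ in range(episodes):
        transition = {
            "state": random.random(),                     # placeholder for a prefix graph
            "action": random.choice(["add", "remove"]),   # placeholder edit
            "reward": random.uniform(-1.0, 1.0),          # placeholder area/delay reward
        }
        experience_q.put(transition)


def learner(experience_q: mp.Queue, steps: int = 500) -> None:
    """Would run on the GPU in a real setup: consumes experience, updates the policy."""
    seen = 0
    while seen < steps:
        try:
            _batch = experience_q.get(timeout=1.0)
            seen += 1  # a real learner would perform a gradient step here
        except queue.Empty:
            break
    print(f"learner consumed {seen} transitions")


if __name__ == "__main__":
    q: mp.Queue = mp.Queue()
    actors = [mp.Process(target=actor, args=(q,)) for _ in range(4)]
    for p in actors:
        p.start()
    learner(q)
    for p in actors:
        p.join()
```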