Google Uses Artificial Intelligence to Develop Faster and Smaller Hardware Accelerators


Designing Artificial Intelligence / Machine Learning (AI/ML) hardware accelerators takes effort from hardware engineers working in conjunction with scientists in the AI/ML field itself. A few years ago, we started seeing AI incorporated into electronic design automation (EDA) software tools, helping chip designers speed up the process of creating hardware. So far, we have mostly seen AI take over a couple of tasks, such as placement and routing, and having those automated is a huge deal. However, the power of AI for chip design is not going to stop there. Researchers at Google and UC Berkeley have published a research project in which AI designs AI-tailored accelerators that are smaller and faster than their human-designed counterparts.

In the published paper, the researchers present PRIME, a framework that creates AI processors from a database of existing blueprints. PRIME feeds off an offline database containing accelerator designs and their corresponding performance metrics (e.g., latency, power) to design next-generation hardware accelerators. According to Google, PRIME can do so without any further hardware simulation, producing processor designs that are ready for use. As per the paper, PRIME improves performance over state-of-the-art simulation-driven methods by 1.2x to 1.5x, while reducing the total simulation time those methods require by 93% to 99%. The framework is also capable of architecting accelerators for applications it has never seen.
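To make the idea concrete, here is a minimal sketch of the general offline, surrogate-based approach in Python. This is not the actual PRIME algorithm (which trains a conservative surrogate model and explicitly handles infeasible designs); the design knobs, dataset, and toy latency function below are all hypothetical stand-ins for a real logged database of simulator results.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical offline dataset: each row is an accelerator configuration
# (e.g., compute-unit count, buffer sizes) logged with a measured latency.
X = rng.integers(low=1, high=64, size=(500, 4)).astype(float)  # 4 design knobs
# Toy latency function standing in for real simulator measurements.
y = 100.0 / X[:, 0] + 0.5 * X[:, 1] + rng.normal(0.0, 0.1, size=500)

# Fit a surrogate that predicts latency from a design, replacing the
# slow hardware simulator during the optimization phase.
surrogate = RandomForestRegressor(n_estimators=100, random_state=0)
surrogate.fit(X, y)

# Search candidate designs against the surrogate only -- no new simulations.
candidates = rng.integers(low=1, high=64, size=(10_000, 4)).astype(float)
predicted_latency = surrogate.predict(candidates)
best = candidates[np.argmin(predicted_latency)]
print("best predicted design:", best,
      "predicted latency:", predicted_latency.min())
```

Even this toy version shows the property PRIME exploits: once the surrogate is trained on logged data, scoring thousands of candidate designs takes milliseconds, whereas each real simulator run can take hours.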

As far as the test results go, PRIME was in its prime (pun intended). Google's very own EdgeTPU was compared against a PRIME-made design, and the AI-generated chip came out ahead with a 1.85x improvement in latency. The researchers also noticed something fascinating: the framework-generated architectures use less silicon, resulting in a smaller, less power-hungry chip. You can read more about PRIME in the free-to-access paper.