Google Uses Artificial Intelligence to Develop Faster and Smaller Hardware Accelerators
In the published paper, researchers present PRIME – a framework that designs AI accelerators from a database of existing blueprints. PRIME draws on an offline database of accelerator designs and their corresponding performance metrics (e.g., latency, power) to produce next-generation hardware accelerators. According to Google, PRIME can do so without running further hardware simulations, yielding designs that are ready for use. As per the paper, PRIME improves performance over state-of-the-art simulation-driven methods by as much as 1.2x-1.5x, while cutting the required total simulation time by 93-99%. The framework is also capable of architecting accelerators for applications it has not seen before.
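To make the idea concrete, below is a minimal, hypothetical sketch of the offline workflow PRIME builds on: a surrogate model is fit to logged (design, latency) pairs from a database, and new candidate designs are then scored against that surrogate instead of a simulator. The configuration fields, data values, and the simple linear surrogate are illustrative assumptions, not PRIME's actual model (the paper trains a far more robust learned surrogate).

```python
import numpy as np

# Hypothetical offline dataset: each row is an accelerator configuration
# (number of processing elements, on-chip memory in KB) paired with a
# measured latency in ms. In PRIME such logs come from previously
# simulated designs; the numbers here are made up for illustration.
configs = np.array([
    [32,  256],
    [64,  256],
    [64,  512],
    [128, 512],
    [128, 1024],
], dtype=float)
latencies = np.array([9.1, 7.4, 6.8, 6.1, 5.9])

# Fit a simple surrogate (least squares with a bias term) that predicts
# latency from a configuration. The point is the workflow: learn from
# logged data rather than querying a hardware simulator.
X = np.hstack([configs, np.ones((len(configs), 1))])
weights, *_ = np.linalg.lstsq(X, latencies, rcond=None)

def predicted_latency(config):
    """Score a candidate design using only the learned surrogate."""
    return float(np.dot(np.append(config, 1.0), weights))

# Search candidate designs against the surrogate -- no new hardware
# simulation is needed at this stage.
candidates = [[pe, mem] for pe in (32, 64, 128, 256) for mem in (256, 512, 1024)]
best = min(candidates, key=predicted_latency)
print("Best candidate (PEs, memory KB):", best,
      "predicted latency (ms):", round(predicted_latency(best), 2))
```

A naive surrogate like this one can be over-optimistic on designs far outside the logged data, which is exactly the failure mode PRIME's training procedure is designed to guard against.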
When it came to the test results, PRIME was in its prime time (pun intended). Google's own EdgeTPU was compared against a PRIME-generated design, and the AI-generated chip was faster, delivering a 1.85x latency improvement. The researchers also noticed something fascinating: the framework-generated architectures are physically smaller, resulting in a less power-hungry chip. You can read more about PRIME in the free-to-access paper here.