Google is using AI to design AI processors that are much faster than humans


From games to image upscaling to smartphone "personal assistants," artificial intelligence is used virtually everywhere these days. More than ever, researchers are devoting enormous amounts of time, money, and effort to designing AI. At Google, AI algorithms are even being used to design AI chips.

Google is not handling the complete silicon design, but rather a subset of chip design called placement optimization. This is a time-consuming task for humans; as explained in IEEE Spectrum (via LinusTechTips), it involves placing blocks of logic and memory (or clusters of those blocks) in strategic areas to maximize both performance and power efficiency while making the most of the available space.

It may take a team of engineers weeks to figure out the ideal placement. In contrast, Google's neural network can produce a better design for the Tensor Processing Unit (TPU) in less than 24 hours. The TPU is similar in concept to the Tensor cores Nvidia uses in its Turing-based GeForce RTX graphics cards, only with different goals in mind.

While interesting in its own right, the type of AI Google is using is equally noteworthy. Rather than utilizing a deep learning model, which requires training the AI on large data sets, Google uses a "reinforcement learning" system. In brief, RL models learn by trial and error: instead of studying labeled examples, the agent tries out placements and is rewarded when it produces a better one.

The RL model is steered in the right direction by the reward system involved. In this case, the reward is a combination of power reduction, performance improvement, and area reduction. This is a bit of a simplification, but basically, the more placements Google's AI produces, the better it becomes at the task at hand (designing AI chips).
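To make the reward idea concrete, here is a minimal sketch in Python of how a placement could be scored on those three factors. The metrics, weights, and random "proposals" are illustrative stand-ins only; the real system trains a policy network rather than searching randomly, and Google's actual reward formulation is described in the arXiv paper referenced below.

```python
import random

def placement_reward(power, delay, area,
                     baseline=(1.00, 1.00, 1.00),
                     weights=(0.4, 0.4, 0.2)):
    """Score a candidate placement: higher is better.

    Each term rewards an improvement over a baseline layout, nudging the
    agent toward lower power, better performance, and smaller area.
    (Weights and normalization are hypothetical.)
    """
    return sum(w * (b - x)
               for w, b, x in zip(weights, baseline, (power, delay, area)))

best = None
for step in range(1000):
    # Stand-in for the agent proposing a placement and measuring its
    # normalized power, delay, and area.
    candidate = (random.uniform(0.8, 1.2),
                 random.uniform(0.8, 1.2),
                 random.uniform(0.8, 1.2))
    score = placement_reward(*candidate)
    if best is None or score > best[0]:
        best = (score, candidate)

print("best reward:", round(best[0], 3), "metrics:", best[1])
```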

"We believe that it is the AI itself that provides the means to shorten the chip design cycle, creating a symbiotic relationship between hardware and AI, each driving the progress of the other," explains a Google researcher. If this works for Google, it seems inevitable that AMD, Intel, and Nvidia will eventually try the same approach. [Technical details can be found in a paper posted on Arxiv.

Thanks, LinusTechTips.
