Google is making its own chips because its cloud business is too important

Shiny new TPUs.
Image: Google

Google would like you to know about its teraflops.

The search giant, fighting an uphill battle against Microsoft and Amazon for market share in the cloud business, announced a new generation of its custom AI chips today, the second version of its Tensor Processing Units.

Google is betting that the cloud is its next big business, and that its prowess in AI gives it a leg up. This new chip is designed to extend that advantage. While CPUs are needed in every modern computer, GPUs were developed later to ease the load of rendering complex graphics. The GPU architecture turned out to be well suited to modern AI, since it was built to process many small operations, like the colors of pixels, all at once. Google's TPU represents the next generation of chip, one custom-built for the task of handling AI.
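The "many small operations at once" idea can be sketched in a few lines of Python with NumPy. This is an illustration of data-parallel arithmetic in general, not of Google's TPU internals; the function names and numbers here are invented for the example.

```python
import numpy as np

# CPU-style scalar loop: one multiply-add at a time.
def scale_pixels_loop(pixels, gain, bias):
    out = []
    for p in pixels:
        out.append(p * gain + bias)
    return out

# The same work as a single vectorized expression: the style of
# computation GPUs and TPUs run across thousands of values in parallel.
def scale_pixels_vectorized(pixels, gain, bias):
    return np.asarray(pixels) * gain + bias

pixels = [1.0, 2.0, 3.0]
print(scale_pixels_loop(pixels, 2.0, 1.0))                 # [3.0, 5.0, 7.0]
print(scale_pixels_vectorized(pixels, 2.0, 1.0).tolist())  # [3.0, 5.0, 7.0]
```

Both versions compute the same result; the difference is that the vectorized form expresses the whole batch as one operation, which is what parallel hardware can exploit.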

While the previous TPUs were used in Google Photos and speech recognition, as well as in the AlphaGo system that beat world-leading Go player Lee Sedol, the new hardware will be available to developers on Google's cloud service. The new chips nearly double the speed of Google's previous version, built in 2015: they can perform 180 teraflops (trillions of floating-point operations per second) against the earlier generation's 92.
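The arithmetic behind the "nearly double" claim is quick to check from the two figures in the announcement:

```python
# Throughput figures cited in the announcement, in teraflops.
TPU_V2_TERAFLOPS = 180  # second-generation TPU
TPU_V1_TERAFLOPS = 92   # figure cited for the 2015 generation

speedup = TPU_V2_TERAFLOPS / TPU_V1_TERAFLOPS
print(f"Generation-over-generation speedup: {speedup:.2f}x")  # 1.96x, just under double
```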

These chips are housed in Google data centers and support its cloud business, which rents processing power to companies that want to add artificial intelligence to their operations. Faster chips are an obvious marketing advantage, but the speed also allows Google to pack more operations, and therefore more money-making capacity, into a 24-hour day.

Besides the speed they offer, the new TPUs can also be used to train deep neural networks, a popular flavor of modern AI, something Google's previous hardware could not do. Jeff Dean, the head of Google's AI efforts, told reporters on a call Tuesday that if developers coded for this new hardware, they could train algorithms far more complex than they could on standard GPUs or CPUs.

Google is also giving a select group of AI researchers outside the company access to a cluster of 1,000 new TPUs for free, as long as they publish the work they use the TPUs to accomplish. How Google will enforce that requirement is unknown.

If most of this is gibberish to you, be not afraid. The key takeaway is that Google isn't relying on some third party to provide hardware innovations. Dean said that while CPUs and GPUs are still useful for some tasks, the company expects to rely on its own TPUs much more heavily in the future.