Google says its AI chips smoke CPUs, GPUs in performance tests

The TPUs are faster at neural net inference and excel at performance per watt

Four years ago, Google was faced with a conundrum: if all its users hit its voice recognition services for three minutes a day, the company would need to double the number of data centers just to handle all of the requests to the machine learning system powering those services.

Rather than buy a bunch of new real estate and servers just for that purpose, the company embarked on a journey to create dedicated hardware for running machine learning applications like voice recognition.

The result was the Tensor Processing Unit (TPU), a chip that is designed to accelerate the inference stage of deep neural networks. Google published a paper on Wednesday laying out the performance gains the company saw over comparable CPUs and GPUs, both in terms of raw power and the performance per watt of power consumed.
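Inference here means running data through an already-trained network to get predictions, as opposed to training the network in the first place. As a rough illustration only (toy layer sizes and random stand-in weights, not anything drawn from Google's paper), the work being accelerated boils down to large matrix multiplications:

```python
import numpy as np

# Hypothetical, already-trained weights for a tiny two-layer network.
# Inference is just matrix multiplies plus nonlinearities -- the dense
# arithmetic a chip like the TPU is built to speed up.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((256, 128)), np.zeros(128)
W2, b2 = rng.standard_normal((128, 10)), np.zeros(10)

def infer(x):
    """Forward pass only: no gradients, no weight updates."""
    h = np.maximum(x @ W1 + b1, 0.0)   # hidden layer with ReLU
    logits = h @ W2 + b2
    return logits.argmax(axis=-1)      # predicted class per example

batch = rng.standard_normal((32, 256))  # e.g. 32 feature vectors from audio
print(infer(batch))
```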

A TPU was on average 15 to 30 times faster at the machine learning inference tasks tested than a comparable server-class Intel Haswell CPU or Nvidia K80 GPU. Importantly, the performance per watt of the TPU was 25 to 80 times better than what Google found with the CPU and GPU.

Driving this sort of performance increase is important for Google, considering the company’s emphasis on building machine learning applications. The gains validate the company’s focus on building machine learning hardware at a time when it’s harder to get massive performance boosts from traditional silicon.

This is more than just an academic exercise. Google has used TPUs in its data centers since 2015 and they’ve been put to use improving the performance of applications including translation and image recognition. The TPUs are particularly useful when it comes to energy efficiency, which is an important metric related to the cost of using hardware at massive scale.

One of the other key metrics for Google’s purposes is latency, which is where the TPUs excel compared to other silicon options. Norm Jouppi, a distinguished hardware engineer at Google, said that machine learning systems need to respond quickly in order to provide a good user experience.

“The point is, the internet takes time, so if you’re using an internet-based server, it takes time to get from your device to the cloud, it takes time to get back,” Jouppi said. “Networking and various things in the cloud — in the data center — they take some time. So that doesn’t leave a lot of [time] if you want near-instantaneous responses.”

Google tested the chips on six different neural network inference applications, representing 95 percent of all such applications in Google’s data centers. The applications tested include DeepMind AlphaGo, the system that defeated Lee Sedol at Go in a five-game match last year.

The company tested the TPUs against hardware released at roughly the same time to get an apples-to-apples performance comparison. It's possible that newer hardware would at least narrow the performance gap.

There’s still room for TPUs to improve, too. Pairing the TPU with the GDDR5 memory found in the Nvidia K80 GPU should improve performance over the configuration Google tested. According to the company’s research, the performance of several applications was constrained by memory bandwidth.
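To see why bandwidth can become the ceiling, consider a back-of-the-envelope sketch. The numbers below are illustrative stand-ins, not figures from Google's paper: if a workload performs relatively few arithmetic operations per byte fetched from memory, achievable throughput is capped by memory bandwidth rather than by the chip's peak arithmetic rate, and faster memory such as GDDR5 raises that cap.

```python
# Illustrative, made-up numbers -- not the TPU's actual specifications.
# Peak compute only helps if memory can feed data fast enough.
peak_ops_per_s = 90e12   # hypothetical peak arithmetic rate: 90 tera-ops/s
mem_bandwidth  = 34e9    # hypothetical memory bandwidth: 34 GB/s
ops_per_byte   = 100     # operations per byte read (depends on the workload)

bandwidth_bound = mem_bandwidth * ops_per_byte   # rate memory can sustain
achievable = min(peak_ops_per_s, bandwidth_bound)
print(f"achievable: {achievable / 1e12:.1f} tera-ops/s "
      f"(peak is {peak_ops_per_s / 1e12:.0f})")
# Here the bandwidth bound, not peak compute, limits throughput --
# which is why swapping in faster memory would lift performance.
```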

Furthermore, the authors of Google’s paper claim that there’s room for additional software optimization to increase performance. The authors called out one of the tested convolutional neural network applications (referred to in the paper as CNN1) as a candidate. However, because of existing performance gains from the use of TPUs, it’s not clear if those optimizations will take place.

While neural networks mimic the way neurons transmit information in humans, CNNs are modeled specifically on how the brain processes visual information.

“As CNN1 currently runs more than 70 times faster on the TPU than the CPU, the CNN1 developers are already very happy, so it’s not clear whether or when such optimizations would be performed,” the authors wrote.
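As a concrete, deliberately naive illustration of what a convolutional layer does, the sketch below slides a small filter across an image-like array and computes a weighted sum at each position. The toy sizes and hand-picked filter are purely for demonstration:

```python
import numpy as np

def conv2d(image, kernel):
    """Naive 2D convolution: slide a small filter over the image and
    take a weighted sum at each position (no padding, stride 1)."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy example: a 3x3 edge-detecting filter applied to a random 8x8 "image".
image = np.random.default_rng(0).standard_normal((8, 8))
edge_filter = np.array([[-1, 0, 1],
                        [-1, 0, 1],
                        [-1, 0, 1]], dtype=float)
print(conv2d(image, edge_filter).shape)  # (6, 6) feature map
```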

TPUs are what’s known in chip lingo as an application-specific integrated circuit (ASIC). They’re custom silicon built for one task, with an instruction set hard-coded into the chip itself. Jouppi said that he wasn’t overly concerned by that, and pointed out that the TPUs are flexible enough to handle changes in machine learning models.

“It’s not like it was designed for one model, and if someone comes up with a new model, we’d have to junk our chips or anything like that,” he said.

Google isn’t the only company focused on using dedicated hardware for machine learning. Jouppi said that he knows of several startups working in the space, and Microsoft has deployed a fleet of field-programmable gate arrays in its data centers to accelerate networking and machine learning applications.
