Presented By O’Reilly and Intel AI
Put AI to work
April 10-11, 2018: Training
April 11-13, 2018: Tutorials & Conference
Beijing, CN

The tensor processing unit: A processor for neural networks designed by Google

This will be presented in English.

Kaz Sato (Google)
13:10–13:50 Friday, April 13, 2018
Secondary topics: Hardware and software stack for AI applications

Prerequisite Knowledge

A basic understanding of machine learning

What you'll learn

Explore Google's tensor processing unit

Description


The tensor processing unit (TPU) is an LSI designed by Google for neural network processing. The TPU features a large-scale systolic array matrix unit that achieves an outstanding performance-per-watt ratio. Kazunori Sato explains how a minimalistic design philosophy and a tight focus on neural network inference use cases enable this high-performance neural network accelerator chip.
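The core idea of a systolic array matrix unit can be sketched in a few lines of Python. This is a simplified, illustrative simulation (not Google's actual hardware design): it models a weight-stationary array where each cell holds one weight, activations stream through, and every cell performs one multiply-accumulate per clock step, so weights never need to be re-fetched from memory. Real systolic arrays also skew the inputs diagonally so data physically flows between neighboring cells, which is omitted here for clarity.

```python
# Illustrative weight-stationary systolic matrix multiply, Y = X @ W.
# Cell (t, j) of the array holds the stationary weight W[t][j];
# at step t, activation X[i][t] reaches that cell and its product
# is accumulated into the partial sum for output Y[i][j].

def systolic_matmul(X, W):
    n, k = len(X), len(X[0])
    m = len(W[0])
    assert len(W) == k, "inner dimensions must match"
    # Partial sums, built up one multiply-accumulate per step.
    Y = [[0] * m for _ in range(n)]
    for t in range(k):               # clock steps (input skewing omitted)
        for i in range(n):           # rows of activations
            for j in range(m):       # columns of the array
                Y[i][j] += X[i][t] * W[t][j]
    return Y

print(systolic_matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
# [[19, 22], [43, 50]]
```

Because each cell does exactly one multiply-accumulate per step with locally held weights, the design spends silicon and energy almost entirely on arithmetic rather than on instruction decode or memory traffic, which is the source of the performance-per-watt advantage the talk describes.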



Kaz Sato


Kaz Sato is a staff developer advocate on the cloud platform team at Google, where he leads the developer advocacy team for machine learning and data analytics products such as TensorFlow, the Vision API, and BigQuery. Kaz has been leading and supporting developer communities for Google Cloud for over seven years. He’s a frequent speaker at conferences, including Google I/O 2016, Hadoop Summit 2016 San Jose, Strata + Hadoop World 2016, and Google Next 2015 NYC and Tel Aviv, and he has hosted FPGA meetups since 2013.