
The tensor processing unit: A processor for neural networks designed by Google

This will be presented in English.

Kaz Sato (Google)
13:10–13:50 Friday, April 13, 2018
Implementing AI, Presented in English
Location: Grand Hall B (紫金大厅B)  Level: Intermediate
Secondary topics: Hardware and software stack for AI applications

Prerequisite Knowledge

A basic understanding of machine learning

What you'll learn

Explore Google's tensor processing unit

Description

This talk will be presented in English with simultaneous interpretation into Chinese.

The tensor processing unit (TPU) is an LSI designed by Google for neural network processing. The TPU features a large-scale systolic array matrix unit that achieves outstanding performance per watt. Kazunori Sato explains how a minimalistic design philosophy and a tight focus on neural network inference use cases enable this high-performance neural network accelerator chip.
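
The systolic-array idea behind the matrix unit can be illustrated with a small functional model. The Python sketch below is not Google's implementation; it assumes a weight-stationary dataflow, where weights stay resident in a grid of multiply-accumulate (MAC) cells, activations stream in from one edge, and partial sums accumulate as they pass through each column. The function name systolic_matmul and the pure-Python loop nest are illustrative only.

# A minimal functional sketch (not Google's design) of a weight-stationary
# systolic matrix multiply: weights[k][n] is held in cell (k, n), activation
# rows stream through the array, and each column accumulates a partial sum.

def systolic_matmul(activations, weights):
    """Multiply an M x K activation matrix by a K x N weight matrix,
    mirroring the dataflow of a K x N grid of MAC cells."""
    M, K = len(activations), len(activations[0])
    K2, N = len(weights), len(weights[0])
    assert K == K2, "inner dimensions must match"

    result = [[0] * N for _ in range(M)]
    for m in range(M):                # one activation row flows through the array
        partial = [0] * N             # partial sums carried down each column
        for k in range(K):            # cell row k holds weights[k][0..N-1]
            a = activations[m][k]     # activation entering row k of cells
            for n in range(N):
                partial[n] += a * weights[k][n]   # MAC performed in cell (k, n)
        result[m] = partial           # accumulated outputs leave the bottom edge
    return result

if __name__ == "__main__":
    A = [[1, 2], [3, 4]]              # activations
    W = [[5, 6], [7, 8]]              # pre-loaded weights
    print(systolic_matmul(A, W))      # [[19, 22], [43, 50]]

In hardware the same loop nest is unrolled in space rather than time: the first-generation TPU's matrix unit is a 256×256 array of 8-bit MAC cells, so operands move one cell per clock and no instruction fetch or register traffic is needed per multiply, which is where the performance-per-watt advantage comes from.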

Kaz Sato

Google

Kaz Sato is a staff developer advocate on the Cloud Platform team at Google, where he leads the developer advocacy team for machine learning and data analytics products such as TensorFlow, the Vision API, and BigQuery. Kaz has been leading and supporting developer communities for Google Cloud for over seven years. He is a frequent speaker at conferences, including Google I/O 2016, Hadoop Summit 2016 San Jose, Strata + Hadoop World 2016, and Google Next 2015 NYC and Tel Aviv, and has hosted FPGA meetups since 2013.