The tensor processing unit (TPU) is an LSI designed by Google for neural network processing. The TPU features a large-scale systolic-array matrix unit that achieves an outstanding performance-per-watt ratio. Kazunori Sato explains how a minimalistic design philosophy and a tight focus on neural network inference use cases enable the design of this high-performance neural network accelerator chip.
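To give a feel for the systolic-array matrix unit mentioned above, here is a toy sketch of the weight-stationary dataflow idea: each cell holds one weight fixed and accumulates products as activations stream past. The array size, data types, and dataflow are simplified assumptions for illustration, not Google's actual hardware design.

```python
import numpy as np

def systolic_matmul(activations, weights):
    """Toy model of a weight-stationary systolic matrix multiply:
    activations (M x K) stream through a grid of multiply-accumulate
    cells, each holding one fixed weight of the K x N weight matrix."""
    M, K = activations.shape
    K2, N = weights.shape
    assert K == K2
    # Accumulators stand in for the partial sums flowing through the array.
    acc = np.zeros((M, N), dtype=np.int32)
    # One "wave" per K step: every cell multiplies its stationary weight
    # by the activation passing through and adds it to the running sum.
    for k in range(K):
        acc += activations[:, k:k+1].astype(np.int32) * weights[k:k+1, :].astype(np.int32)
    return acc

# 8-bit integer operands, as in inference-oriented accelerators.
rng = np.random.default_rng(0)
a = rng.integers(-128, 128, size=(4, 8), dtype=np.int8)
w = rng.integers(-128, 128, size=(8, 3), dtype=np.int8)
assert np.array_equal(systolic_matmul(a, w), a.astype(np.int32) @ w.astype(np.int32))
```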
Kaz Sato is a staff developer advocate on the cloud platform team at Google, where he leads the developer advocacy team for machine learning and data analytics products such as TensorFlow, the Vision API, and BigQuery. Kaz has been leading and supporting developer communities for Google Cloud for over seven years. He’s a frequent speaker at conferences, including Google I/O 2016, Hadoop Summit 2016 San Jose, Strata + Hadoop World 2016, and Google Next 2015 NYC and Tel Aviv, and he has hosted FPGA meetups since 2013.