PUT AI TO WORK
June 18-21, 2019
Beijing, CN

ONNX:开放和互操作平台让AI无处不在(AI everywhere: Open and interoperable platform for AI with ONNX)

此演讲使用中文 (This will be presented in Chinese)

Henry Zeng (Microsoft), Klein Hu (Microsoft), Emma Ning (Microsoft)
13:10-13:50 Thursday, June 20, 2019

必要预备知识 (Prerequisite Knowledge)

  • Familiarity with basic concepts of the machine learning model lifecycle
  • A basic understanding of popular machine learning frameworks such as TensorFlow, PyTorch, and scikit-learn

您将学到什么 (What you'll learn)

  • Understand why there is so much industry support for open neural network exchange (ONNX) and how it helps data scientists and developers
  • Learn how to create ONNX models using many popular machine learning frameworks and tools and deploy ONNX models to cloud or edge with a high-performance runtime

描述 (Description)

ONNX was established in December 2017 as an open source format for machine learning models (deep learning and traditional ML). Backed by over 20 industry-leading companies including Microsoft, Facebook, Amazon, Intel, NVIDIA, and more, ONNX gives data scientists the freedom to select the right tools for their task and offers software and hardware developers a common standard to build optimizations on.

Henry Zeng, Klein Hu, and Emma Ning discuss the scenarios that ONNX enables, along with a technical overview of the format itself. You can obtain an ONNX model in several ways, including selecting popular pretrained models from the ONNX Model Zoo, exporting or converting an existing model trained in another framework (including PyTorch/Caffe2, CNTK, Keras, scikit-learn, TensorFlow, Chainer, and more), or training a new model using services such as Azure Machine Learning or Azure Custom Vision Service. Henry, Klein, and Emma demystify the process and show several examples of how this can be done easily.
 
The ONNX model can then be operationalized using an inference runtime such as ONNX Runtime on a variety of hardware endpoints. Hardware companies are plugging in accelerators to provide maximum efficiency in latency and resource utilization on cloud and edge. Henry, Klein, and Emma discuss how Intel, NVIDIA, and others are participating and the performance gains Microsoft is seeing on its own models.


Henry Zeng

Microsoft

Henry Zeng is a principal program manager on the AI platform team at Microsoft, where he works with the engineering team, partners, and customers to ensure AzureML is the best ML platform in the cloud. He has worked in AI and data for more than 14 years, spanning databases, big data, machine learning, and deep learning. Previously, he was the lead AI solution architect at Microsoft China, where he worked with partners and customers to land AI solutions in manufacturing, retail, finance, education, and public service. Henry holds an MS in computer science from Wuhan University.


Klein Hu

Microsoft

Klein Hu is a senior software engineer on the Microsoft Azure Machine Learning team, focusing on AI model inferencing, especially ONNX model operationalization and acceleration with ONNX Runtime. Klein holds an MS in computer science from Beijing Normal University.


Emma Ning

Microsoft

Emma Ning is a senior program manager on the Microsoft cloud and AI ML platform team, focusing on AI model operationalization and acceleration with ONNX and ONNX Runtime in support of Microsoft's strategic investment in open and interoperable AI. She spent more than five years driving search engine experiences and two years exploring AI adoption across various businesses. Emma holds an MS in computer science from the Institute of Computing Technology, Chinese Academy of Sciences.