Hosted by O'Reilly and Intel AI

AI Conference: Put AI to Work
June 18-21, 2019
Beijing, China

AI everywhere: An open and interoperable platform for AI with ONNX

This will be presented in Chinese.

Henry Zeng (Microsoft), Klein Hu (Microsoft), Emma Ning (Microsoft)
13:10-13:50 Thursday, June 20, 2019

Prerequisite Knowledge

  • Familiarity with basic concepts of the machine learning model lifecycle
  • A basic understanding of popular machine learning frameworks such as TensorFlow, PyTorch, scikit-learn, etc.

What you'll learn

  • Learn why there is so much industry support for the Open Neural Network Exchange (ONNX) and how it helps data scientists and developers
  • Create ONNX models using many popular machine learning frameworks and tools
  • Deploy ONNX models to the cloud or the edge with a high-performance runtime

Description

ONNX was established in December 2017 as an open source format for machine learning models (both deep learning and traditional ML). Backed by support from over 20 industry-leading companies, including Microsoft, Facebook, Amazon, Intel, and NVIDIA, ONNX gives data scientists the freedom to select the right tools for their task and offers software and hardware developers a common standard to build optimizations on.

Henry Zeng, Klein Hu, and Emma Ning discuss the scenarios that ONNX enables, along with a technical overview of the format itself. You can obtain an ONNX model in several ways: selecting a popular pretrained model from the ONNX Model Zoo, exporting or converting an existing model trained in another framework (including PyTorch/Caffe2, CNTK, Keras, scikit-learn, TensorFlow, Chainer, and more), or training a new model with services such as Azure Machine Learning or the Azure Custom Vision Service. Henry, Klein, and Emma demystify the process and show several examples of how this can be done easily.
 
The ONNX model can then be operationalized using an inference runtime such as ONNX Runtime on a variety of hardware endpoints. Hardware companies are plugging in accelerators to maximize efficiency in latency and resource utilization in the cloud and at the edge. Henry, Klein, and Emma discuss how Intel, NVIDIA, and others are participating and the performance gains Microsoft is seeing on its own models.


Henry Zeng

Microsoft

Henry Zeng is a principal program manager on the AI platform team at Microsoft, where he works with the engineering team, partners, and customers to make Azure Machine Learning the best ML platform in the cloud. He has worked in AI and data for more than 14 years, spanning databases, big data, machine learning, and deep learning. Previously, he was the lead AI solution architect at Microsoft China, working with partners and customers to land AI solutions in manufacturing, retail, finance, education, and public service with Microsoft AI offerings. Henry holds an MS in computer science from Wuhan University.

Klein Hu

Microsoft

Klein Hu is a senior software engineer on the Microsoft Azure Machine Learning team, focusing on AI model inferencing, especially ONNX model operationalization and acceleration with ONNX Runtime. Klein holds an MS in computer science from Beijing Normal University.


Emma Ning

Microsoft

Emma Ning is a senior program manager on the Microsoft Cloud and AI ML platform team, focusing on AI model operationalization and acceleration with ONNX and ONNX Runtime in support of Microsoft's strategic investment in open and interoperable AI. She drove search engine experience for more than five years and spent two years exploring AI adoption across various businesses. Emma holds an MS in computer science from the Institute of Computing Technology, Chinese Academy of Sciences.
