Presented by O'Reilly and Intel AI

Put AI to Work
June 18-21, 2019
Beijing, China

AI everywhere: An open and interoperable platform for AI with ONNX

This session will be presented in Chinese.

Prasanth Pulavarthi (Microsoft), Henry Zeng (Microsoft)
13:10-13:50 Thursday, June 20, 2019

Prerequisite Knowledge

- Understand the basic concepts of the machine learning model lifecycle
- Understand popular machine learning frameworks such as TensorFlow, PyTorch, scikit-learn, etc.

What You'll Learn

By attending this session, the audience will learn:

- Why there is so much industry support for ONNX, and how it helps data scientists and developers
- How to create ONNX models using many popular machine learning frameworks and tools
- How to deploy ONNX models to the cloud or edge with a high-performance runtime

Description

ONNX (Open Neural Network Exchange) was established in December 2017 as an open source format for machine learning models (deep learning and traditional ML). Backed by more than 20 industry-leading companies, including Microsoft, Facebook, Amazon, Intel, and NVIDIA, ONNX gives data scientists the freedom to select the right tools for their task and offers software and hardware developers a common standard to build optimizations on. We will discuss the scenarios that ONNX enables, along with a technical overview of the format itself.
 
There are several ways to obtain an ONNX model: selecting a popular pre-trained model from the ONNX Model Zoo, exporting or converting an existing model trained in another framework (including PyTorch/Caffe2, CNTK, Keras, scikit-learn, TensorFlow, Chainer, and more), or training a new model using services such as Azure Machine Learning or Azure Custom Vision Service. We will demystify the process and show several examples of how this can be done easily.
 
The ONNX model can then be operationalized using an inference runtime such as ONNX Runtime on a variety of hardware endpoints. Hardware companies are plugging in their accelerators to provide maximum efficiency in latency and resource utilization on both cloud and edge. We will discuss how Intel, NVIDIA, and others are participating, and the performance gains we are seeing on our own models at Microsoft.


Prasanth Pulavarthi

Microsoft

Prasanth leads the product management team for AI Frameworks at Microsoft. He is one of the founding members of ONNX and is actively involved in the open source community.


Henry Zeng

Microsoft

Henry Zeng is a principal program manager in the Cloud AI Group at Microsoft, where he works with engineering teams, partners, and customers to ensure the success of the ML platform. He has worked in the AI and data space for more than 10 years, spanning databases, NoSQL, the Hadoop ecosystem, machine learning, and deep learning. Prior to this role, he was the lead AI solution architect at Microsoft China, working with partners and customers to deliver AI solutions in manufacturing, retail, education, public services, and other sectors using Microsoft AI offerings. Henry holds an MS in computer science from Wuhan University.
