Hosted by O'Reilly and Intel AI

Put AI to Work
June 18–21, 2019
Beijing, China

Efficient deep learning for the edge

This will be presented in English.

Bichen Wu (UC Berkeley)
13:10–13:50, Thursday, June 20, 2019

Prerequisite Knowledge

  • A basic understanding of deep neural networks (DNNs)

What you'll learn

  • Understand how AutoML can greatly reduce the cost of developing efficient DNNs
  • Understand how software/hardware (SW/HW) codesign can greatly improve the performance of edge AI systems
  • Understand how domain adaptation can greatly reduce the cost of data collection and annotation for AI training

Description

The success of DNNs is attributed to three factors: stronger computing capacity, more complex neural networks, and more data. However, you’re usually unable to access these factors when applying DNNs to edge applications such as autonomous driving, augmented reality (AR) and virtual reality (VR), IoT, and so on. Training DNNs requires a large amount of data, which can be difficult to obtain. Edge devices such as mobile phones or IoT devices have limited computing capacity, which requires specialized and efficient DNNs. Due to the diversity and complexity of hardware devices, the enormous design space of DNNs, and prohibitive training costs, designing efficient DNNs for target devices is a challenging task.

Join Bichen Wu as he details recent works addressing these problems. SqueezeSegV2 is an efficient DNN for lidar point cloud segmentation for autonomous driving. Lidar point cloud data is extremely difficult to annotate, but you can bypass this by leveraging simulated data to train the network and adapting it to achieve a performance comparable to training on real data. Synetgy is a DNN model and hardware codesigned FPGA accelerator that achieves 16.9 times speedup over the previous state of the art. The final work Bichen explains is a differentiable neural architecture search (DNAS) framework for automatic DNN design. With a small computational cost (8 GPUs for one day), DNAS discovers a family of DNNs called FBNet that outperforms previous state-of-the-art models designed manually and automatically. For different target devices, DNAS automatically adapts DNN architectures accordingly to optimize for latency while maintaining accuracy.
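The key idea behind DNAS is to relax the discrete choice of a layer's operator into a softmax-weighted mixture of candidate ops, so that architecture parameters can be trained by gradient descent together with the network weights, with a differentiable latency term added to the loss. A minimal NumPy sketch of that relaxation follows; the candidate ops and latency numbers here are illustrative stand-ins, not details from the talk:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Toy candidate ops for one layer (stand-ins for conv variants of
# different cost in a real DNAS search space).
candidate_ops = [
    lambda x: x,        # skip / identity
    lambda x: 0.5 * x,  # cheap op (stand-in for a small conv)
    lambda x: 2.0 * x,  # expensive op (stand-in for a large conv)
]

# Per-op latency measured on the target device (illustrative values, ms).
latency = np.array([0.1, 1.0, 3.0])

# Architecture parameters: one logit per candidate op,
# trained by gradient descent alongside the weights.
theta = np.zeros(3)

def mixed_op(x, theta):
    """Softmax-weighted sum of candidate ops: the continuous relaxation."""
    w = softmax(theta)
    return sum(wi * op(x) for wi, op in zip(w, candidate_ops))

def expected_latency(theta):
    """Differentiable latency estimate: softmax-weighted sum of op latencies."""
    return softmax(theta) @ latency

x = np.array([1.0, 2.0])
y = mixed_op(x, theta)         # forward pass through the relaxed layer
lat = expected_latency(theta)  # latency term added to the task loss
# With uniform logits, each op gets weight 1/3, so
# y = (1 + 0.5 + 2.0)/3 * x and lat = (0.1 + 1.0 + 3.0)/3.
```

After training, each layer keeps only its highest-weight op, yielding a discrete architecture tuned to the measured latencies of the target device.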


Bichen Wu

UC Berkeley

Bichen Wu is a PhD candidate at UC Berkeley, where he focuses on deep learning, computer vision, and autonomous driving.
