Hosted by O'Reilly and Intel AI

Put AI to Work
June 18-21, 2019
Beijing, China

Efficient Deep Learning for the Edge

This will be presented in English.

Bichen Wu (UC Berkeley)
13:10-13:50 Thursday, June 20, 2019

Prerequisite Knowledge

A basic understanding of deep neural networks.

What you'll learn

- How AutoML can greatly reduce the cost of developing efficient deep learning models
- How software/hardware co-design can greatly improve the performance of edge AI systems
- How domain adaptation can greatly reduce the cost of data collection and annotation for AI training

Description

The success of deep neural networks is attributed to three factors: stronger computing capacity, more complex neural networks, and more data. These factors, however, are usually not available when DNNs are applied to edge applications such as autonomous driving, augmented and virtual reality (AR/VR), and the internet of things (IoT). Training DNNs requires a large amount of data, which can be difficult to obtain. Edge devices such as mobile phones and IoT devices have limited computing capacity, which calls for specialized, efficient DNNs. However, given the diversity and complexity of hardware devices, the enormous design space of DNNs, and prohibitive training costs, designing efficient DNNs for target devices is a challenging task.

This talk introduces our recent work addressing these problems. First, we present SqueezeSegV2, an efficient DNN for LiDAR point cloud segmentation in autonomous driving. LiDAR point clouds are extremely difficult to annotate, so we bypass manual labeling by training the network on simulated data and adapting it to real data, achieving performance comparable to training on real data. Second, we present Synetgy, an FPGA accelerator co-designed together with its DNN model, which achieves a 16.9x speedup over the previous state of the art. Finally, we present DNAS, a differentiable neural architecture search framework for automatic DNN design. At a small computational cost (8 GPUs for 1 day), DNAS discovers a family of DNNs, called FBNet, that outperforms previous state-of-the-art models designed both manually and automatically. For each target device, DNAS adapts the DNN architecture to optimize for latency while maintaining accuracy.
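To make the DNAS idea concrete, here is a minimal sketch of one differentiable search layer in PyTorch, following the general recipe the talk describes: relax the discrete choice among candidate ops with a Gumbel-softmax, and penalize the expected latency of the selection. The class `MixedOp`, the candidate ops, and the latency numbers are illustrative assumptions, not the actual FBNet implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """One searchable layer: a Gumbel-softmax-weighted sum of candidate ops."""
    def __init__(self, candidate_ops, latencies):
        super().__init__()
        self.ops = nn.ModuleList(candidate_ops)
        # Architecture logits (theta), one per candidate op.
        self.theta = nn.Parameter(torch.zeros(len(candidate_ops)))
        # Measured per-op latency on the target device (a lookup table).
        self.register_buffer("lat", torch.tensor(latencies))

    def forward(self, x, tau):
        # Gumbel-softmax keeps the op choice differentiable w.r.t. theta.
        mask = F.gumbel_softmax(self.theta, tau=tau)
        out = sum(m * op(x) for m, op in zip(mask, self.ops))
        # Expected latency of this layer under the current distribution.
        return out, (mask * self.lat).sum()

# Illustrative candidates: two conv blocks with different kernels and a skip.
ops = [nn.Conv2d(16, 16, 3, padding=1),
       nn.Conv2d(16, 16, 5, padding=2),
       nn.Identity()]
layer = MixedOp(ops, latencies=[1.0, 2.3, 0.1])

x = torch.randn(2, 16, 32, 32)
out, lat = layer(x, tau=5.0)
loss = out.mean() + 0.1 * lat  # stand-in for task loss + latency penalty
loss.backward()                # gradients reach both op weights and theta
```

Roughly speaking, the search alternates between training the op weights and the architecture logits `theta`; when the search ends, the highest-scoring op in each layer is kept to form the final network.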


Bichen Wu

UC Berkeley

Bichen Wu is a PhD candidate at UC Berkeley, where he focuses on deep learning, computer vision, and autonomous driving.
