The unreasonable effectiveness of transfer learning on natural language processing

This will be presented in English.

David Low (Pand.ai)
11:15–11:55 Thursday, June 20, 2019
Average rating: ***** (5.00, 1 rating)

Prerequisite Knowledge

  • A basic understanding of machine learning, deep learning, and NLP concepts

What you'll learn

  • Understand how transfer learning allows data scientists to build accurate models without much data
  • Discover how to apply transfer learning onto NLP tasks

Description

In the past few months, NLP has witnessed several breakthroughs in transfer learning, namely ELMo, the OpenAI Transformer, Universal Language Model Fine-tuning (ULMFiT), and BERT. Pretrained models derived from these techniques have achieved state-of-the-art results on a wide range of NLP problems. The use of pretrained models has come a long way since the introduction of Word2vec and GloVe, and those two approaches are now considered shallow in comparison.

David Low introduces you to the history of transfer learning, which has been a tremendous success in the computer vision field thanks to the ImageNet competition. He explains why pretrained models are handy for tackling machine learning problems with limited data, and you'll understand how they can be used as fixed feature extractors for downstream tasks and applications.
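The fixed-feature-extractor idea can be sketched in a few lines of NumPy. Everything here is a stand-in for illustration: the random `pretrained` matrix plays the role of real ELMo/BERT-style embeddings loaded from a checkpoint, and the four-document "corpus" is invented. Only the lightweight classifier head is trained; the extractor is never updated.

```python
import numpy as np

# Hypothetical pretrained word vectors (stand-in for real ELMo/BERT
# embeddings); in practice these are loaded from a published checkpoint.
rng = np.random.default_rng(0)
vocab = {"good": 0, "bad": 1, "great": 2, "awful": 3}
pretrained = rng.normal(size=(len(vocab), 8))  # frozen: never updated

def featurize(tokens):
    """Mean-pool frozen pretrained embeddings into one fixed-size vector."""
    vecs = [pretrained[vocab[t]] for t in tokens if t in vocab]
    return np.mean(vecs, axis=0)

# Toy sentiment data; only this logistic-regression head gets trained.
docs = [["good", "great"], ["bad", "awful"], ["great"], ["awful", "bad"]]
labels = np.array([1, 0, 1, 0])
X = np.stack([featurize(d) for d in docs])

w, b = np.zeros(X.shape[1]), 0.0
for _ in range(500):  # plain gradient-descent logistic regression
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    grad = p - labels
    w -= 0.5 * X.T @ grad / len(labels)
    b -= 0.5 * grad.mean()

preds = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
print(preds)
```

The pretrained matrix carries whatever knowledge came from pretraining; the downstream task only has to fit a small head on top of it, which is why so little labeled data is needed.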

David walks you through the code and steps to fine-tune a transfer learning model so that it achieves state-of-the-art accuracy (92%) on a real-world sentiment classification problem—the Amazon reviews dataset. It takes just 1,000 samples of training data to produce a model that matches the performance of a fastText-based model trained on the full dataset of 3.6 million samples.
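The fine-tuning recipe itself can be sketched with synthetic data: train a task head on top of a frozen pretrained encoder first, then unfreeze the encoder and continue training it with a much smaller learning rate, in the spirit of ULMFiT's gradual unfreezing and discriminative learning rates. None of this is the actual Amazon-reviews pipeline; the data, the tiny tanh encoder, and the learning rates are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 16))                  # toy "review" features
y = (X[:, :8].sum(axis=1) > 0).astype(float)   # synthetic sentiment labels

W_enc = rng.normal(size=(16, 8)) * 0.1         # stand-in pretrained encoder
w = np.zeros(8)                                # task-specific head

def forward(X, W_enc, w):
    h = np.tanh(X @ W_enc)                     # encoder representation
    return 1.0 / (1.0 + np.exp(-(h @ w))), h   # head probability, features

# Stage 1: freeze the encoder, train only the head.
for _ in range(200):
    p, h = forward(X, W_enc, w)
    w -= 0.5 * h.T @ (p - y) / len(y)

# Stage 2: unfreeze and fine-tune end to end, with a much smaller
# learning rate on the pretrained encoder (discriminative LRs).
for _ in range(200):
    p, h = forward(X, W_enc, w)
    g = p - y
    w -= 0.5 * h.T @ g / len(y)
    g_h = np.outer(g, w) * (1 - h**2)          # backprop through tanh
    W_enc -= 0.05 * X.T @ g_h / len(y)

acc = ((forward(X, W_enc, w)[0] > 0.5) == y).mean()
print(f"train accuracy: {acc:.2f}")
```

The two-stage schedule is the key design choice: updating the head first keeps large early gradients from destroying the pretrained weights, and the small encoder learning rate afterward adapts them gently to the target task.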

David Low


David Low is the cofounder and chief data scientist at Pand.ai, a company building an AI-powered chatbot to disrupt and shape the booming conversational commerce space with deep natural language processing. He represented Singapore and the National University of Singapore (NUS) in the 2016 Data Science Game held in France, and clinched the top spot among Asian and American teams. David has been invited as a guest lecturer by NUS to conduct master classes on applied machine learning and deep learning topics. Throughout his career, David has engaged in data science projects across the manufacturing, telco, ecommerce, and insurance industries, including sales forecast modeling and influencer detection, which won him awards in several competitions and was featured on the IDA website and in an NUS publication. Previously, he was a data scientist at the Infocomm Development Authority (IDA) of Singapore and was involved in research collaborations with Carnegie Mellon University (CMU) and the Massachusetts Institute of Technology (MIT) on projects funded by the National Research Foundation and SMART. He competes on Kaggle and holds a top 0.2% worldwide ranking.