Building end-to-end AI applications is challenging, and building the next generation of AI applications, such as those based on online learning and reinforcement learning (RL), is even more so. These applications exhibit a wide variety of computational patterns (e.g., data processing, simulations, model training, model serving), and no existing framework can efficiently support all of these patterns at scale.
In this tutorial, we will illustrate how Ray can seamlessly and efficiently support these computational patterns, making it an ideal platform for building AI applications. The tutorial will be hands-on: participants will take a deep dive into Ray, learn its API, and implement several state-of-the-art AI applications, including an end-to-end application that trains an RL model and serves predictions from it.
Richard Liaw is a PhD student in the BAIR Lab and RISELab at UC Berkeley working with Joseph Gonzalez, Ion Stoica, and Ken Goldberg. He has worked on a variety of different areas, ranging from robotics to reinforcement learning to distributed systems. He is currently actively working on Ray, a distributed execution engine for AI applications; RLlib, a scalable reinforcement learning library; and Tune, a distributed framework for model training.
©2019, O'Reilly Media, Inc.