All systems and applications are composed of basic data structures and algorithms, such as index structures, priority queues, and sorting algorithms. Most of these primitives have been around since the beginnings of computer science (CS) and form the basis of every introductory CS course. Yet we may soon face an inflection point: recent results show that machine learning has the potential to significantly alter the way those primitives are implemented and the performance they can provide.
Tim Kraska outlines different ways to build learned algorithms and data structures to achieve “instance optimality” and unprecedented performance for a wide range of applications.
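To make the idea concrete, here is a minimal sketch of one such learned data structure: a "learned index" that replaces a traditional search tree with a simple linear model predicting a key's position in a sorted array, plus an error-bounded local search. The class name, the linear model, and the error-correction strategy are illustrative assumptions, not the specific designs covered in the talk.

```python
import bisect

class LearnedIndex:
    """Illustrative learned-index sketch (not the talk's implementation):
    a linear model predicts a key's position in a sorted array, and a
    local search within the model's worst-case error corrects it."""

    def __init__(self, keys):
        self.keys = sorted(keys)
        n = len(self.keys)
        # Fit position ~= slope * key + intercept by least squares.
        mean_k = sum(self.keys) / n
        mean_p = (n - 1) / 2
        var = sum((k - mean_k) ** 2 for k in self.keys)
        self.slope = (sum((k - mean_k) * (i - mean_p)
                          for i, k in enumerate(self.keys)) / var) if var else 0.0
        self.intercept = mean_p - self.slope * mean_k
        # Record the worst prediction error to bound the search window.
        self.max_err = max(abs(self._predict(k) - i)
                           for i, k in enumerate(self.keys))

    def _predict(self, key):
        return int(round(self.slope * key + self.intercept))

    def lookup(self, key):
        """Return the array index of key, or None if absent."""
        pos = self._predict(key)
        lo = max(0, pos - self.max_err)
        hi = min(len(self.keys), pos + self.max_err + 1)
        i = bisect.bisect_left(self.keys, key, lo, hi)
        if i < len(self.keys) and self.keys[i] == key:
            return i
        return None
```

On data whose key distribution the model fits well (e.g., roughly uniform keys), the search window stays small, which is the intuition behind "instance optimality": the structure adapts to the data it actually stores rather than assuming a worst case.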
Tim Kraska is an associate professor of electrical engineering and computer science in MIT’s Computer Science and Artificial Intelligence Laboratory and codirector of the Data Systems and AI Lab at MIT (DSAIL@CSAIL). His research focuses on building systems for machine learning and using machine learning for systems. Previously, Tim was an assistant professor at Brown, spent time at Google Brain, and was a postdoc in the AMPLab at UC Berkeley after his PhD at ETH Zurich. Tim is a 2017 Alfred P. Sloan Research Fellow in computer science and has received several awards, including the 2018 VLDB Early Career Research Contribution Award, the 2017 VMware Systems Research Award, an NSF CAREER Award, and several best paper and demo awards at VLDB and ICDE.
©2019, O'Reilly Media, Inc.