Ray: A System for High-performance, Distributed Machine Learning Applications


Ray is an open-source, distributed framework from U.C. Berkeley’s RISELab that easily scales Python applications from a laptop to a cluster. While broadly applicable, it was developed to address the performance challenges unique to ML/AI systems, such as the heterogeneous tasks and distributed state management that reinforcement learning (RL) requires, spanning everything from training neural networks to running simulators. Ray is now used in many production deployments. I'll explain the problems Ray solves for cluster-wide scaling of general Python applications, with specific examples from RLlib, a Ray-based RL library. We’ll see how Ray’s features support RL workloads, including rapid scheduling and execution of “tasks,” management of distributed model state (parameters), and an intuitive API. You’ll also learn how and when to use Ray in your own projects.

Speakers

Dean Wampler

Head of Developer Relations, Domino Data Lab
Principal Software Engineer at Domino Data Lab, where I work on various aspects of the Domino platform for data scientists. Author of O'Reilly's "Programming Scala"; a forthcoming third edition covers Scala 3.


Thursday November 12, 2020 4:15pm - 4:45pm PST