
A 24x Speedup for Reinforcement Learning with RLlib + Ray

Author: Raoul Khouri (Two Sigma)

Presented at: Ray Summit 2021

Abstract: Training a reinforcement learning (RL) agent is compute-intensive. Under classical deep learning assumptions, bigger and better GPUs reduce training time; for RL, however, they do not always. In practice, RL can require millions of samples from a relatively slow, CPU-only environment, creating a training bottleneck that GPUs do not solve. Empirically, we find that training agents with RLlib removes this bottleneck, because its Ray integration allows scaling to many CPUs across a cluster of commodity machines. This talk details how such scaling can cut training wall time by orders of magnitude.
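
For readers unfamiliar with RLlib's scaling knobs, the following is a minimal sketch (not taken from the talk) of the pattern the abstract describes: fanning rollout collection out across a cluster's CPUs while learning stays on a single GPU. It assumes the classic RLlib trainer API of the Ray 1.x era (contemporary with Ray Summit 2021); the environment name, worker count, and iteration count are illustrative placeholders, not the speaker's actual setup.

```python
import ray
from ray.rllib.agents.ppo import PPOTrainer  # classic trainer API, Ray 1.x era

# Connect to an existing Ray cluster; omit address="auto" to run locally.
ray.init(address="auto")

config = {
    "env": "CartPole-v1",       # placeholder for a slow, CPU-only environment
    "num_workers": 128,         # rollout workers spread across the cluster's CPUs
    "num_envs_per_worker": 1,   # each worker steps one environment copy
    "num_gpus": 1,              # a single GPU on the driver for gradient updates
    "framework": "torch",
}

trainer = PPOTrainer(config=config)
for _ in range(100):  # illustrative iteration count
    result = trainer.train()
    print(result["episode_reward_mean"])
```

Because each rollout worker is an ordinary Ray actor, raising num_workers spreads sample collection over as many commodity CPU cores as the cluster provides, which is the kind of scaling behind the wall-time reductions the talk reports.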

Download slides

This article is not an endorsement by Two Sigma of the papers discussed, their viewpoints or the companies discussed. The views expressed above reflect those of the authors and are not necessarily the views of Two Sigma Investments, LP or any of its affiliates (collectively, “Two Sigma”). The information presented above is only for informational and educational purposes and is not an offer to sell or the solicitation of an offer to buy any securities or other instruments. Additionally, the above information is not intended to provide, and should not be relied upon for investment, accounting, legal or tax advice. Two Sigma makes no representations, express or implied, regarding the accuracy or completeness of this information, and the reader accepts all risks in relying on the above information for any purpose whatsoever. Click here for other important disclaimers and disclosures.