Scaling Python for Machine Learning: Beyond Data Parallelism
Data parallelism can be amazing, and it frees us from many fiddly, complicated tasks (like dealing with locks). On the other hand, as training large machine learning models becomes increasingly popular, we're seeing the need to move beyond purely data-parallel techniques. Relying exclusively on recomputation for failure recovery is no longer sufficient, since our operations are not idempotent. In this talk we will look at Spark, Dask, and Ray in the context of scaling machine learning models and how you can take advantage of other types of distributed parallelism (including the actor model for managing model weights during training).
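To give a sense of what the actor pattern looks like in practice, here is a minimal parameter-server sketch using Ray actors. It is illustrative only and not taken from the talk: the ParameterServer class, worker_step task, and toy gradient are assumptions for the example. One actor owns the model weights, while stateless workers pull the current weights, compute an update on their data shard, and push gradients back.

```python
import numpy as np
import ray

ray.init()


@ray.remote
class ParameterServer:
    """Actor that owns the model weights; its state survives across calls."""

    def __init__(self, dim):
        self.weights = np.zeros(dim)

    def get_weights(self):
        return self.weights

    def apply_gradients(self, grads, lr=0.1):
        # Updates are applied in the actor, so they are not lost on task retry
        # the way a purely recompute-based approach would assume.
        self.weights -= lr * grads
        return self.weights


@ray.remote
def worker_step(ps, data_shard):
    # Pull the latest weights from the actor.
    weights = ray.get(ps.get_weights.remote())
    # Toy "gradient" for illustration: distance from the shard mean.
    grads = weights - data_shard.mean(axis=0)
    # Push the update back to the actor and return the new weights.
    return ray.get(ps.apply_gradients.remote(grads))


ps = ParameterServer.remote(dim=4)
shards = [np.random.rand(100, 4) for _ in range(4)]
results = ray.get([worker_step.remote(ps, shard) for shard in shards])
print(results[-1])
```

Because the weights live inside a single actor rather than being recomputed from scratch, the update step can stay non-idempotent while the surrounding data-parallel work remains stateless.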