Simplified distributed training with tf.distribute parameter servers

Learn about a new tf.distribute strategy, ParameterServerStrategy, which enables asynchronous distributed training in TensorFlow, and how to use it with the Keras APIs and custom training loops. If you have models with large embeddings or an environment with preemptible machines, this approach lets you scale your training much more easily with minimal code changes.
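As a rough sketch of what the session covers, parameter server training is configured by describing the cluster (workers, parameter servers, and a coordinator) to each task, typically via the TF_CONFIG environment variable. The host addresses below are hypothetical, and the commented TensorFlow calls show the coordinator-side usage pattern with Keras, assuming TF 2.x with ParameterServerStrategy available:

```python
import json
import os

# Hypothetical addresses for illustration; in a real deployment these come
# from your cluster manager (e.g. Kubernetes).
cluster_spec = {
    "worker": ["worker0.example.com:2222", "worker1.example.com:2222"],
    "ps": ["ps0.example.com:2222"],
    "chief": ["chief0.example.com:2222"],
}

# Each task sets TF_CONFIG before starting; this is the chief's view.
os.environ["TF_CONFIG"] = json.dumps({
    "cluster": cluster_spec,
    "task": {"type": "chief", "index": 0},
})

# On the coordinator, the strategy resolves the cluster from TF_CONFIG and
# is then used like other tf.distribute strategies with Keras:
#
#   import tensorflow as tf
#   resolver = tf.distribute.cluster_resolver.TFConfigClusterResolver()
#   strategy = tf.distribute.experimental.ParameterServerStrategy(resolver)
#   with strategy.scope():
#       model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
#       model.compile(optimizer="adam", loss="mse")
#   model.fit(dataset, epochs=5, steps_per_epoch=100)
```

Because training is asynchronous, workers that are preempted can rejoin without stalling the others, which is what makes this strategy a fit for preemptible machines.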

Distributed training with TensorFlow →
Parameter server training →

Yuefeng Zhou (Software Engineer)

Watch all Google’s Machine Learning Virtual Community Day sessions →

Subscribe to the TensorFlow channel →


Event: ML Community Day 2021
