Parallelism and Acceleration for Large Language Models with Bryan Catanzaro – #507

Today we’re joined by Bryan Catanzaro, vice president of applied deep learning research at NVIDIA.

Most folks know Bryan as one of the founders/creators of cuDNN, the GPU-accelerated library for deep neural networks. In our conversation, we explore his interest in high-performance computing and its recent overlap with AI, his current work on Megatron, a framework for training giant language models, and the basic approach for distributing a large language model on DGX infrastructure.

We also discuss the three kinds of parallelism that Megatron provides when training models (tensor parallelism, pipeline parallelism, and data parallelism), as well as his work on the Deep Learning Super Sampling (DLSS) project and the role it's playing in the present and future of game development via ray tracing.

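To make the distinction between the three kinds of parallelism concrete, here is a minimal Python sketch (illustrative only, not Megatron's actual API) of how the dimensions compose: tensor parallelism splits individual layers across GPUs, pipeline parallelism splits the stack of layers across GPUs, and whatever factor of the cluster remains becomes the number of data-parallel model replicas.

# Hypothetical helper, not part of Megatron: computes the implied
# data-parallel size from a cluster's total GPU count.
def data_parallel_size(world_size: int, tensor_parallel: int, pipeline_parallel: int) -> int:
    # Each full copy of the model occupies tensor_parallel * pipeline_parallel GPUs.
    model_parallel = tensor_parallel * pipeline_parallel
    assert world_size % model_parallel == 0, "GPU count must divide evenly"
    # The remaining factor is how many model replicas train in data parallel.
    return world_size // model_parallel

# Example: 3072 GPUs with 8-way tensor parallelism and 64-way pipeline
# parallelism leaves 6-way data parallelism.
print(data_parallel_size(world_size=3072, tensor_parallel=8, pipeline_parallel=64))  # -> 6

In practice, tensor parallelism is usually kept within a single DGX node to exploit fast NVLink bandwidth, while pipeline and data parallelism span nodes over the slower inter-node network.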
The complete show notes for this episode can be found at twimlai.com/go/507.

Subscribe:

Apple Podcasts:
https://tinyurl.com/twimlapplepodcast
Spotify:
https://tinyurl.com/twimlspotify
Google Podcasts:
https://podcasts.google.com/?feed=aHR0cHM6Ly90d2ltbGFpLmxpYnN5bi5jb20vcnNz
RSS:
https://twimlai.libsyn.com/rss
Full episodes playlist:

Subscribe to our YouTube channel:
https://www.youtube.com/channel/UC7kjWIK1H8tfmFlzZO-wHMw?sub_confirmation=1

Podcast website:
https://twimlai.com

Sign up for our newsletter:



Check out our blog:



Follow us on Twitter:
https://twitter.com/twimlai
Follow us on Facebook:
https://facebook.com/twimlai
Follow us on Instagram:
https://instagram.com/twimlai
