PyTorch Pruning | How it's Made by Michela Paganini

Michela Paganini, Postdoctoral Researcher at Facebook AI, shares her personal experience creating a core PyTorch feature: pruning (torch.nn.utils.prune). In this talk, you will learn what pruning is, why it's important, and how to get started.

State-of-the-art deep learning techniques rely on over-parametrized models that are hard to deploy. In contrast, biological neural networks are known to use efficient sparse connectivity. Identifying optimal techniques to compress models by reducing their number of parameters matters for several reasons: it reduces memory, battery, and hardware consumption without sacrificing accuracy; it enables lightweight models to be deployed on device; and it supports privacy through private on-device computation. On the research front, pruning is used to investigate the differences in learning dynamics between over-parametrized and under-parametrized networks, to study the role of lucky sparse subnetworks and their initializations ("lottery tickets"), to serve as a destructive neural architecture search technique, and more.
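As a starting point, here is a minimal sketch of the `torch.nn.utils.prune` API the talk covers, applied to a small example layer (the module and pruning amount are illustrative choices, not from the talk):

```python
import torch
import torch.nn.utils.prune as prune

# A small example layer to prune.
layer = torch.nn.Linear(4, 2)

# L1 unstructured pruning: zero out the 50% of weights with smallest magnitude.
prune.l1_unstructured(layer, name="weight", amount=0.5)

# Pruning reparametrizes the module: the original values move to `weight_orig`
# (a parameter) and a binary `weight_mask` buffer is registered.
buffer_names = sorted(name for name, _ in layer.named_buffers())

# Roughly half of the effective weights are now zero.
sparsity = float((layer.weight == 0).sum()) / layer.weight.numel()

# Make the pruning permanent: drop the mask and reparametrization,
# leaving `weight` as a plain (sparse-valued) parameter.
prune.remove(layer, "weight")
```

Note that `prune.remove` does not undo pruning; it bakes the pruned values into `weight` so the module no longer carries the mask machinery.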
