OUTGROWING NUMPY | DOUGAL MACLAURIN

PyTorch gives us GPUs and automatic differentiation through a NumPy-like programming model: you call into well-optimized numerical kernels from a dynamic host language, Python. The NumPy model is battle-tested and well-loved, and it’s particularly well suited to automatic differentiation with GPU-style parallelism. But its limitations in expressiveness are well known. How do we move beyond it without losing what makes it so effective? In this talk, Dougal Maclaurin (Senior Research Scientist, Google Research) describes efforts to this end over the past several years: Autograd faithfully implemented the NumPy model, JAX cautiously extended it, and Dex is now radically inverting it. Come for the scalar loops. Stay for the index types.
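
To make the "NumPy model" concrete, here is a minimal sketch (not from the talk; the function and array names are illustrative) of the style JAX extends: ordinary NumPy-like array code, differentiated and compiled by program transformation.

```python
import jax
import jax.numpy as jnp

def loss(w, x):
    # Ordinary NumPy-style array code: matmul, elementwise tanh, reduction.
    return jnp.sum(jnp.tanh(x @ w) ** 2)

w = jnp.ones((3, 2))
x = jnp.ones((4, 3))

# Program transformations: grad differentiates with respect to the first
# argument; jit compiles the whole thing for CPU/GPU/TPU.
grad_loss = jax.jit(jax.grad(loss))
print(grad_loss(w, x))  # gradient array with the same shape as w
```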
