100x Improvements in Deep Learning Performance with Sparsity with Subutai Ahmad – #562

Today we’re joined by Subutai Ahmad, VP of research at Numenta. While we’ve had numerous conversations about the biological inspirations of deep learning models with folks working at the intersection of deep learning and neuroscience, we dig into uncharted territory with Subutai. We set the stage by digging into some of the fundamental ideas behind Numenta’s research and the present landscape of neuroscience, before exploring our first big topic of the podcast: the cortical column. Cortical columns are groups of neurons in the cortex of the brain that have nearly identical receptive fields; we discuss the behavior of these columns, why they’re a structure worth mimicking computationally, how far along we are in understanding the cortical column, and how these columns relate to neurons.

We also discuss what it means for a model to have inherent 3D understanding and for computational models to be inherently sensorimotor, and where we are with these lines of research. Finally, we dig into our other big idea, sparsity. We explore the fundamental ideas behind sparsity, the differences between sparse and dense networks, and how applying sparsity and optimization can drive greater efficiency in current deep learning networks, including transformers and other large language models.
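To make the sparse-versus-dense distinction concrete, here is a minimal NumPy sketch, not Numenta's actual method: it takes a dense weight matrix, zeroes out all but the largest 5% of weights by magnitude (simple magnitude pruning, chosen for illustration), and shows how few multiply-accumulates a sparse kernel would actually need.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dense layer: every one of the 256x256 weights participates in the matmul.
dense_w = rng.standard_normal((256, 256))

# "Sparse" layer (illustrative): keep only the top 5% of weights by
# magnitude and zero out the rest (magnitude pruning).
k = int(0.05 * dense_w.size)
threshold = np.sort(np.abs(dense_w), axis=None)[-k]
sparse_w = np.where(np.abs(dense_w) >= threshold, dense_w, 0.0)

x = rng.standard_normal(256)
dense_out = dense_w @ x    # full cost: 256*256 multiply-accumulates
sparse_out = sparse_w @ x  # same shape; a sparse kernel could skip ~95% of them

nonzero_frac = np.count_nonzero(sparse_w) / sparse_w.size
print(f"nonzero weight fraction: {nonzero_frac:.3f}")
```

The point of the sketch is that sparsity only pays off when the hardware or kernel can skip the zeroed weights; a dense matmul over a mostly-zero matrix costs the same as the original.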

The complete show notes for this episode can be found at https://twimlai.com/go/562

