#032 – Simon Kornblith / GoogleAI – SimCLR and Paper Haul!
This week Dr. Tim Scarfe, Sayak Paul and Yannic Kilcher speak with Dr. Simon Kornblith from Google Brain (Ph.D. from MIT). Simon is trying to understand how neural nets do what they do. He was the second author on the seminal Google AI SimCLR paper. We also cover “Do Wide and Deep Networks Learn the Same Things?”, “What's in a Loss Function for Image Classification?”, and “Big Self-Supervised Models are Strong Semi-Supervised Learners”. Simon trained as a neuroscientist, and he tells us the story of his unique journey into ML.
00:00:00 Show teaser (or “short version”)
00:18:34 Show intro
00:22:11 Relationship between neuroscience and machine learning
00:29:28 Similarity analysis and the evolution of representations in neural networks
00:39:55 Expressivity of NNs
00:42:33 What's in a loss function for image classification?
00:46:52 Loss function implications for transfer learning
00:50:44 SimCLR paper
01:00:19 Contrasting SimCLR with BYOL
01:01:43 Data augmentation
01:06:35 Universality of image representations
01:09:25 Universality of augmentations
01:25:09 GANs for data augmentation?
01:26:50 Julia language
Pod version: https://anchor.fm/machinelearningstreettalk/episodes/032–Simon-Kornblith–GoogleAI—SimCLR-and-Paper-Haul-endpa3
Do Wide and Deep Networks Learn the Same Things? Uncovering How Neural Network Representations Vary with Width and Depth
What’s in a Loss Function for Image Classification?
A Simple Framework for Contrastive Learning of Visual Representations
Big Self-Supervised Models are Strong Semi-Supervised Learners