CURL: Contrastive Unsupervised Representations for Reinforcement Learning

According to Yann LeCun, the next big thing in machine learning is unsupervised learning. Self-supervision has changed the game in deep learning over the last few years, first transforming the language world with word2vec and BERT — and now it's turning computer vision upside down.

This week Yannic, Connor and I spoke with Aravind Srinivas, who recently co-led the hot-off-the-press CURL: Contrastive Unsupervised Representations for Reinforcement Learning alongside Michael (Misha) Laskin. CURL has had an incredible reception in the ML community over the last month or so. Remember the DeepMind paper that solved Atari games from raw pixels? Aravind's approach uses contrastive unsupervised learning to featurise the pixels before applying RL. CURL is the first image-based algorithm to nearly match the sample efficiency and performance of methods that use state-based features! This is a huge step forward in being able to apply RL in the real world.
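To make the contrastive idea concrete, here is a rough NumPy sketch of an InfoNCE-style loss with a bilinear similarity, in the spirit of CURL. This is an illustrative toy, not the authors' implementation; the variable names (`z_q`, `z_k`, `W`) are assumptions, and in the real method the embeddings come from a CNN encoder applied to augmented crops of the same observation.

```python
import numpy as np

def info_nce_loss(z_q, z_k, W):
    """Toy InfoNCE contrastive loss with bilinear similarity.

    z_q: (B, D) "query" embeddings (one augmented view per observation)
    z_k: (B, D) "key" embeddings (a second augmented view)
    W:   (D, D) learned bilinear similarity matrix

    The matching key for query i is key i (the diagonal); every
    other key in the batch acts as a negative.
    """
    logits = z_q @ W @ z_k.T                     # (B, B) pairwise scores
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    exp = np.exp(logits)
    probs = exp / exp.sum(axis=1, keepdims=True)
    idx = np.arange(len(z_q))
    # Cross-entropy against the "correct pair is on the diagonal" labels.
    return -np.mean(np.log(probs[idx, idx]))
```

Minimising this loss pushes the encoder to score two augmented views of the same observation as more similar than views of different observations, which is what produces RL-ready features from pixels.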

We explore RL and self-supervision for computer vision in detail, and find out how Aravind got into machine learning.

Paper:
CURL: Contrastive Unsupervised Representations for Reinforcement Learning
Aravind Srinivas, Michael Laskin, Pieter Abbeel
https://arxiv.org/pdf/2004.04136.pdf

Yannic’s analysis video: https://www.youtube.com/watch?v=hg2Q_O5b9w4

#machinelearning #reinforcementlearning #curl #timscarfe #yannickilcher #connorshorten

Music credit: https://soundcloud.com/errxrmusic/in-my-mind
