Hierarchical and Continual RL with Doina Precup – #567

Today we’re joined by Doina Precup, a research team lead at DeepMind Montreal and a professor at McGill University. In our conversation with Doina, we discuss her recent research interests, including her work in hierarchical reinforcement learning, where the goal is for agents to learn abstract representations, especially temporal abstractions. We also explore her work on reward specification for RL agents, where she hypothesizes that a reward signal in a complex environment could lead an agent to develop attributes of intuitive intelligence. We dig into quite a few of her papers, including On the Expressivity of Markov Reward, which won a NeurIPS 2021 Outstanding Paper Award. Finally, we discuss the analogy between hierarchical RL and CNNs, her work in continual RL, and her thoughts on the evolution of RL in the recent past and present, as well as the biggest challenges facing the field going forward.

The complete show notes for this episode can be found at https://twimlai.com/go/567

Subscribe:

Apple Podcasts:
https://tinyurl.com/twimlapplepodcast
Spotify:
https://tinyurl.com/twimlspotify
Google Podcasts:
https://podcasts.google.com/?feed=aHR0cHM6Ly90d2ltbGFpLmxpYnN5bi5jb20vcnNz
RSS:
https://feeds.megaphone.fm/MLN2155636147
Subscribe to our YouTube channel:
https://www.youtube.com/channel/UC7kjWIK1H8tfmFlzZO-wHMw?sub_confirmation=1

Follow us on Facebook:
https://facebook.com/twimlai
Follow us on Instagram:
https://instagram.com/twimlai
