CoRL 2020, Spotlight Talk 203: Contrastive Variational Reinforcement Learning for Complex Observations

**Contrastive Variational Reinforcement Learning for Complex Observations**
Xiao Ma (National University of Singapore)*; Siwei Chen (National University of Singapore); David Hsu (National University of Singapore); Wee Sun Lee (National University of Singapore)

Deep reinforcement learning (DRL) has achieved significant success in various robot tasks, such as manipulation and navigation. However, complex visual observations in natural environments remain a major challenge. This paper presents Contrastive Variational Reinforcement Learning (CVRL), a model-based method that tackles complex visual observations in DRL. CVRL learns a contrastive variational model by maximizing the mutual information between latent states and observations discriminatively, through contrastive learning. It avoids modeling the complex observation space unnecessarily, as the commonly used generative observation model often does, and is significantly more robust. CVRL achieves performance comparable to state-of-the-art model-based DRL methods on standard MuJoCo tasks, and significantly outperforms them on Natural MuJoCo tasks and a robot box-pushing task with complex observations, e.g., dynamic shadows.
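The abstract's key idea is to score latent states against observations discriminatively rather than reconstructing pixels. A common way to maximize such a mutual-information lower bound is an InfoNCE-style contrastive objective, sketched below in plain NumPy. This is a generic illustration of the technique, not CVRL's exact model: the embedding names, batch setup, and temperature value are assumptions for the sketch.

```python
import numpy as np

def info_nce_loss(latents, obs_embeds, temperature=0.1):
    """InfoNCE-style contrastive objective: each latent state should score
    highest against its own observation embedding (the positive), with the
    other observations in the batch serving as negatives."""
    # Normalize rows so the dot products below are cosine similarities.
    z = latents / np.linalg.norm(latents, axis=1, keepdims=True)
    o = obs_embeds / np.linalg.norm(obs_embeds, axis=1, keepdims=True)
    logits = z @ o.T / temperature  # (B, B) similarity matrix
    # Positives lie on the diagonal; each row is a B-way classification
    # problem, so take cross-entropy against the diagonal entries.
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

# Toy check: matched latent/observation pairs should be easier to
# discriminate (lower loss) than randomly shuffled pairings.
rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
matched = info_nce_loss(z, z + 0.01 * rng.normal(size=z.shape))
shuffled = info_nce_loss(z, rng.permutation(z, axis=0))
print(matched < shuffled)  # True
```

Because the loss only asks the model to tell the correct observation apart from negatives, it never has to generate the observation itself, which is the robustness argument made in the abstract for visually complex backgrounds.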
