How Large of A Replay Buffer Do You Need? A Deeper Look at Experience Replay | Paper Analysis & Code

The size of the experience replay buffer is usually taken for granted. In this paper by Zhang and Sutton, they take a look at the effects of replay buffer size on the performance of deep Q-learning. Better yet, they propose a new type of memory called “Combined Experience Replay” (CER), which simply adds the agent’s most recent transition to every sampled batch (a sketch follows below). Can we replicate their results? Let’s try in this video.
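Here’s a minimal sketch of the CER idea, assuming a simple NumPy-backed circular buffer. The class and method names are my own for illustration, not taken from the paper or the linked repository:

```python
import numpy as np

class CombinedReplayBuffer:
    """Uniform replay buffer that always includes the newest transition
    in each sampled batch (Combined Experience Replay)."""

    def __init__(self, max_size, input_shape):
        self.mem_size = max_size
        self.mem_cntr = 0  # total transitions stored so far
        self.states = np.zeros((max_size, *input_shape), dtype=np.float32)
        self.actions = np.zeros(max_size, dtype=np.int64)
        self.rewards = np.zeros(max_size, dtype=np.float32)
        self.states_ = np.zeros((max_size, *input_shape), dtype=np.float32)
        self.dones = np.zeros(max_size, dtype=bool)

    def store(self, state, action, reward, state_, done):
        idx = self.mem_cntr % self.mem_size  # overwrite oldest when full
        self.states[idx] = state
        self.actions[idx] = action
        self.rewards[idx] = reward
        self.states_[idx] = state_
        self.dones[idx] = done
        self.mem_cntr += 1

    def sample(self, batch_size):
        # assumes at least batch_size - 1 transitions are already stored
        max_mem = min(self.mem_cntr, self.mem_size)
        # CER: draw batch_size - 1 transitions uniformly, then append
        # the most recently stored transition to the batch
        batch = np.random.choice(max_mem, batch_size - 1, replace=False)
        newest = (self.mem_cntr - 1) % self.mem_size
        batch = np.append(batch, newest)
        return (self.states[batch], self.actions[batch],
                self.rewards[batch], self.states_[batch],
                self.dones[batch])
```

The appeal of CER is that it’s a one-line change to uniform sampling, yet it guarantees every new transition is used for learning at least once, which the paper argues reduces the sensitivity to buffer size.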

The paper I’m talking about is here:
https://arxiv.org/abs/1712.01275

The code for this is here:
https://github.com/philtabor/Youtube-Code-Repository/tree/master/ReinforcementLearning/CombinedExperienceReplay

Learn how to turn deep reinforcement learning papers into code:

Deep Q Learning:
https://www.udemy.com/course/deep-q-learning-from-paper-to-code/?couponCode=DQN-OCT-21

Actor Critic Methods:
https://www.udemy.com/course/actor-critic-methods-from-paper-to-code-with-pytorch/?couponCode=AC-OCT-21

Curiosity Driven Deep Reinforcement Learning:
https://www.udemy.com/course/curiosity-driven-deep-reinforcement-learning/?couponCode=ICM-OCTOBER-21

Natural Language Processing from First Principles:
https://www.udemy.com/course/natural-language-processing-from-first-principles/?couponCode=NLP1-OCT-21

Reinforcement Learning Fundamentals:
https://www.manning.com/livevideo/reinforcement-learning-in-motion

Here are some books / courses I recommend (affiliate links):
Grokking Deep Learning in Motion: https://bit.ly/3fXHy8W
Grokking Deep Learning: https://bit.ly/3yJ14gT
Grokking Deep Reinforcement Learning: https://bit.ly/2VNAXql

Come hang out on Discord here:
https://discord.gg/Zr4VCdv

Website: https://www.neuralnet.ai
Github: https://github.com/philtabor
Twitter: https://twitter.com/MLWithPhil
