# The Tea Time Talks with Sina Ghiassian (Aug 23, 2018)

In the latest installment of the Tea Time Talks, Amii student Sina Ghiassian asks: can coarse coding solve the catastrophic interference problem?

Reinforcement learning systems must use function approximation to solve complicated problems. Neural networks provide an effective architecture for nonlinear function approximation and have been used since the early days of reinforcement learning. Neural networks, however, suffer from the catastrophic interference problem and cannot learn online in a fully incremental fashion. Experience replay buffers have been used to work around the interference problem, but the search for methods that can learn in a fully incremental manner continues. This talk introduces a new method, a simple combination of coarse coding and neural networks, that might be useful in solving the interference problem. Our method is capable of learning fast, in a fully incremental fashion.
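To give a feel for the idea behind the talk, here is a minimal sketch of coarse coding: an input is mapped to a sparse binary feature vector using overlapping receptive fields, and that vector can then be fed to a network or linear learner. The field centers, width, and input range below are illustrative assumptions, not the speaker's actual method.

```python
import numpy as np

def coarse_code(x, centers, width):
    """Binary feature vector over overlapping 1-D receptive fields.

    Feature i is active (1.0) when x lies within `width` of centers[i].
    Because fields overlap, nearby inputs share some active features,
    while distant inputs share none.
    """
    return (np.abs(centers - x) <= width).astype(float)

# Illustrative setup: 8 overlapping intervals covering [0, 1].
centers = np.linspace(0.0, 1.0, 8)
width = 0.2  # wide enough that neighbouring fields overlap

features = coarse_code(0.5, centers, width)

# Only the fields near x = 0.5 are active; the representation is sparse
# and localized. The intuition for interference is that an update driven
# by one region of the input space then touches only the weights attached
# to that region's features, leaving weights for distant inputs untouched.
print(features)
```

The sparsity and locality of the representation are the point: distant inputs activate disjoint feature sets, so learning about one need not overwrite what was learned about the other, which is the hoped-for remedy to catastrophic interference when such features feed a downstream network.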

—

The Tea Time Talks are a series of talks primarily given by the students and faculty studying Artificial Intelligence at the University of Alberta, and provide a comfortable, informal space in which to listen and learn about topics pertaining to machine intelligence and machine learning.

