Fairness and Robustness in Federated Learning with Virginia Smith – #504

Today we kick off our ICML coverage joined by Virginia Smith, an assistant professor in the Machine Learning Department at Carnegie Mellon University.

In our conversation with Virginia, we explore her work on cross-device federated learning applications, including how the distributed learning aspects of FL relate to its privacy techniques. We dig into her ICML paper, Ditto: Fair and Robust Federated Learning Through Personalization, covering what fairness means here in contrast to its use in AI ethics, the particular failure modes the method addresses, the relationship between the models and objectives being optimized across devices, and the tradeoffs between fairness and robustness.
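For listeners who want the gist of the personalization idea discussed above: the Ditto paper frames it as a bi-level objective in which each device learns a personal model regularized toward the global one. A rough sketch (notation simplified from the paper; the aggregate on the second line is shown here as a plain average for illustration):

```latex
\min_{v_k}\; h_k(v_k; w^*) \;=\; F_k(v_k) \;+\; \frac{\lambda}{2}\,\lVert v_k - w^*\rVert^2,
\quad\text{where}\quad
w^* \in \arg\min_{w}\; \frac{1}{N}\sum_{k=1}^{N} F_k(w).
```

Here $F_k$ is device $k$'s local loss, $v_k$ its personalized model, and $\lambda$ interpolates between purely local training ($\lambda = 0$) and the shared global model ($\lambda \to \infty$), which is where the fairness/robustness tradeoff the episode discusses comes in.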

We also discuss a second paper, Heterogeneity for the Win: One-Shot Federated Clustering, including how the proposed method turns data heterogeneity from a challenge into a benefit, how that heterogeneity is characterized, and some applications of FL in unsupervised settings.
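As a rough illustration of the one-shot federated clustering idea mentioned above (a sketch of the general scheme, not the paper's exact algorithm or guarantees; function names here are made up): each device runs local k-means on its own data, sends only its cluster centers to the server in a single communication round, and the server clusters the pooled centers.

```python
import numpy as np

def local_kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's algorithm: returns k cluster centers for data X."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center.
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each center as the mean of its assigned points.
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    return centers

def one_shot_federated_clustering(client_data, k_local, k_global):
    """One communication round: clients share only local centers,
    never raw data; the server clusters the pooled centers."""
    local_centers = np.vstack([
        local_kmeans(X, k_local, seed=i) for i, X in enumerate(client_data)
    ])
    return local_kmeans(local_centers, k_global, seed=42)
```

The point of the sketch is the communication pattern: heterogeneity helps because each device's data covers only a few of the global clusters, so its local centers are informative summaries.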

Subscribe:

Apple Podcasts:
https://tinyurl.com/twimlapplepodcast
Spotify:
https://tinyurl.com/twimlspotify
Google Podcasts:
https://podcasts.google.com/?feed=aHR0cHM6Ly90d2ltbGFpLmxpYnN5bi5jb20vcnNz
RSS:
https://twimlai.libsyn.com/rss
Subscribe to our YouTube channel:
https://www.youtube.com/channel/UC7kjWIK1H8tfmFlzZO-wHMw?sub_confirmation=1

Follow us on Facebook:
https://facebook.com/twimlai
Follow us on Instagram:
https://instagram.com/twimlai
