AI’s Legal and Ethical Implications with Sandra Wachter – 521

Today we’re joined by Sandra Wachter, an associate professor and senior research fellow at the University of Oxford.

Sandra’s work lies at the intersection of law and AI, focused on what she likes to call “algorithmic accountability.” In our conversation, we explore algorithmic accountability in three segments: explainability/transparency; data protection; and bias, fairness, and discrimination. We discuss how thinking about black boxes changes when regulation and law are applied, and break down counterfactual explanations and how they’re created. We also explore why factors like a lack of oversight lead to poor self-regulation, as well as the conditional demographic disparity test she helped develop for testing bias in models, which was recently adopted by Amazon.

The complete show notes for this episode can be found at https://twimlai.com/go/521.

Subscribe:

Apple Podcasts:
https://tinyurl.com/twimlapplepodcast
Spotify:
https://tinyurl.com/twimlspotify
Google Podcasts:
https://podcasts.google.com/?feed=aHR0cHM6Ly90d2ltbGFpLmxpYnN5bi5jb20vcnNz
RSS:
https://twimlai.libsyn.com/rss
Subscribe to our Youtube Channel:
https://www.youtube.com/channel/UC7kjWIK1H8tfmFlzZO-wHMw?sub_confirmation=1


Follow us on Facebook:
https://facebook.com/twimlai
Follow us on Instagram:
https://instagram.com/twimlai

#Algorithms #counterfactual explanations #discrimination #fairness #data protection #alan turing institute #conditional demographic disparity
