Rich Caruana (MSR): Friends Don’t Let Friends Deploy Black-Box Models — The Importance of Intelligibility/Transparency in Machine Learning
In machine learning, a tradeoff must often be made between accuracy and intelligibility: the most accurate models (deep nets, boosted trees, and random forests) are usually not very intelligible, while the most intelligible models (logistic regression, small trees, and decision lists) are usually less accurate. This tradeoff limits the accuracy of models that can be safely deployed in mission-critical applications such as healthcare, where being able to understand, validate, edit, and ultimately trust a learned model is important. We have developed a learning method that is often as accurate as full-complexity models, yet even more intelligible than linear models. This makes it easy to understand what a model has learned, and also makes it easier to edit the model when it learns inappropriate things. In this talk I’ll present a healthcare case study where these high-accuracy models discover surprising patterns in the data that would have made deploying a black-box model risky. I’ll also show how we’re using these models to detect bias in social domains where fairness and transparency are paramount.