Model Interpretability with Captum – Narine Kokhlikyan

As models become ever more complex, it is increasingly important to develop new methods for model interpretability. Learn about Captum, a new tool that helps developers working in PyTorch understand why their model generates a specific output. Captum's algorithms include Integrated Gradients, Conductance, SmoothGrad, VarGrad, and DeepLIFT.
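Below is a minimal sketch of attributing a model's prediction to its input features with Captum's Integrated Gradients. The toy model, tensor shapes, and target class are illustrative assumptions, not taken from the video; only the `IntegratedGradients` API usage follows Captum's documented interface.

```python
# Minimal Integrated Gradients example with Captum.
# The model and input below are placeholders standing in for any PyTorch model.
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

# Small feed-forward classifier (assumption: 4 input features, 3 classes).
model = nn.Sequential(
    nn.Linear(4, 8),
    nn.ReLU(),
    nn.Linear(8, 3),
)
model.eval()

# One input example with 4 features.
inputs = torch.rand(1, 4, requires_grad=True)

# Attribute the score of class 0 to each input feature.
ig = IntegratedGradients(model)
attributions, delta = ig.attribute(
    inputs,
    target=0,                       # class index to explain
    return_convergence_delta=True,  # approximation error of the path integral
)

print("Attributions:", attributions)
print("Convergence delta:", delta)
```

Other attribution methods mentioned in the talk, such as DeepLIFT, follow the same pattern: construct the attribution object around the model, then call `attribute` on the inputs.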
