MODEL SERVING IN PYTORCH | GEETA CHAUHAN

Deploying ML models in production and scaling ML services remain big challenges. TorchServe, the model serving solution for PyTorch, addresses this problem and has evolved into a multi-platform solution that can run on-prem or on any cloud, with integrations for major OSS platforms such as Kubernetes, MLflow, Kubeflow Pipelines, and KServe. This talk covers new TorchServe features, including model interpretability with Captum and best practices for responsible production deployments, along with examples of how companies like Amazon Ads, Meta AI, and the broader PyTorch community are using TorchServe.
