CoRL 2020, Spotlight Talk 324: Multi-Modal Anomaly Detection for Unstructured and Uncertain Environments

**Multi-Modal Anomaly Detection for Unstructured and Uncertain Environments**
Tianchen Ji (University of Illinois at Urbana-Champaign)*; Sri Theja Vuppala (University of Illinois at Urbana-Champaign); Girish Chowdhary (University of Illinois at Urbana-Champaign); Katherine Driggs-Campbell (University of Illinois at Urbana-Champaign)

To achieve high levels of autonomy, modern robots require the ability to detect and recover from anomalies and failures with minimal human supervision. Multi-modal sensor signals can provide more information for such anomaly detection tasks; however, fusing high-dimensional and heterogeneous sensor modalities remains a challenging problem. We propose a deep neural network, the supervised variational autoencoder (SVAE), for failure identification in unstructured and uncertain environments. Our model leverages the representational power of the VAE to extract robust features from high-dimensional inputs for supervised learning tasks. The training objective unifies the generative and discriminative models, making learning a one-stage procedure. Our experiments on real field robot data demonstrate superior failure identification performance compared to baseline methods, and show that our model learns interpretable representations.
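The one-stage objective described above can be sketched as a VAE whose latent code also feeds a classification head, with the reconstruction, KL, and classification losses summed into a single training objective. The sketch below is a minimal illustration assuming PyTorch; the layer sizes, class count, and equal loss weighting are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SVAE(nn.Module):
    """Toy supervised VAE: encoder -> latent z -> (decoder, classifier)."""

    def __init__(self, input_dim=64, latent_dim=8, n_classes=4):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(input_dim, 32), nn.ReLU())
        self.fc_mu = nn.Linear(32, latent_dim)
        self.fc_logvar = nn.Linear(32, latent_dim)
        self.dec = nn.Sequential(
            nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, input_dim)
        )
        # Discriminative head on the latent code (hypothetical placement).
        self.clf = nn.Linear(latent_dim, n_classes)

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        # Reparameterization trick: z = mu + sigma * eps.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(z), self.clf(mu), mu, logvar

def svae_loss(x, y, x_hat, logits, mu, logvar):
    recon = F.mse_loss(x_hat, x)                                    # generative term
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())   # KL to N(0, I)
    ce = F.cross_entropy(logits, y)                                 # discriminative term
    return recon + kl + ce                                          # single joint objective

model = SVAE()
x = torch.randn(16, 64)           # batch of fused sensor feature vectors
y = torch.randint(0, 4, (16,))    # failure-class labels
x_hat, logits, mu, logvar = model(x)
loss = svae_loss(x, y, x_hat, logits, mu, logvar)
```

Because the three terms share one backward pass, the encoder is trained jointly for reconstruction and classification, which is what makes the procedure one-stage rather than pretrain-then-finetune.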

