Hedvig Kjellström: Learning compositional, structured, and interpretable models of the world

Hedvig Kjellström is a professor in the Division of Robotics, Perception and Learning, Department of Intelligent Systems at KTH Royal Institute of Technology, Sweden, and Principal AI Scientist at Silo AI, Sweden. This talk was part of the colloquium of the Cluster of Excellence “Machine Learning: New Perspectives for Science”.

Abstract: Despite their impressive achievements in fields such as computer vision and natural language processing, state-of-the-art deep learning approaches differ from human cognition in fundamental ways. While humans can learn new concepts from just one or a few examples, and effortlessly extrapolate new knowledge from concepts learned in other contexts, deep learning methods generally rely on large amounts of data for their learning. Moreover, while humans can make use of contextual knowledge, e.g., the laws of nature and insights into how others reason, such information is generally hard to exploit in deep learning methods.

Current deep learning approaches are indeed well suited to a wide range of applications with large volumes of training data and/or well-defined problem settings. However, models that learn in a more human-like manner have the potential to be more adaptable to new situations, more data-efficient, and more interpretable to humans – a desirable property, for instance, in intelligence augmentation applications with a human in the loop, such as medical decision support systems or social robots.

In this talk, Kjellström describes a number of projects in her group that explore disentanglement, temporality, multimodality, and cause-effect representations to build compositional, structured, and interpretable models of the world.
