Structured Dynamic Models of Meaning for Understanding Language Change and Representing Book Plots
In this talk, Dr. Lea Frermann will present two statistical models of structured meaning development. The models are designed to yield a deeper understanding of the structure and development of meaning from raw textual data at scale, and cater to two areas of interest in the social sciences and humanities: language change over time and the automatic analysis of stories.
First, she will present an unsupervised Bayesian model of diachronic change in word meaning over centuries. For example, the word 'mouse' was traditionally used to refer to a rodent; in recent decades, however, a second sense of the word, referring to the computer pointing device, has become increasingly dominant. The Bayesian model of word meaning change infers time-specific word representations, as a set of senses and their prevalence, from large collections of unstructured text. Unlike previous work, the model explicitly treats word meaning change as a smooth, gradual process, and the benefit of this modeling decision is demonstrated in a series of qualitative and quantitative experiments.
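To make the "smooth, gradual change" assumption concrete, here is a minimal illustrative sketch (not Dr. Frermann's actual model): a word's per-era sense prevalence is derived from latent logits that drift via a small Gaussian random walk, so the distribution over senses changes a little between neighbouring time steps rather than jumping. All parameters (two senses, ten eras, drift scale) are assumed for illustration.

```python
import math
import random

random.seed(0)
K, T, SIGMA = 2, 10, 0.3  # senses, time steps, drift scale (all assumed)

# Latent logits drift smoothly: each era's logits are the previous era's
# plus small Gaussian noise -- the "gradual process" assumption.
logits = [[2.0, -2.0]]  # early era: sense 0 (e.g. "rodent") dominates
for _ in range(1, T):
    logits.append([x + random.gauss(0, SIGMA) for x in logits[-1]])

def softmax(row):
    """Map a row of logits to a probability distribution over senses."""
    m = max(row)
    exps = [math.exp(x - m) for x in row]
    total = sum(exps)
    return [e / total for e in exps]

# Per-era sense prevalence: one distribution over K senses per time step.
prevalence = [softmax(row) for row in logits]
```

Because consecutive logit vectors differ only by small noise, consecutive prevalence rows stay close, which is the qualitative behaviour the abstract describes; the actual model infers such trajectories from text rather than simulating them.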
In the second part of the talk, she will focus on a model for inferring interpretable, structured, multi-view representations of the plots of books. Automatically understanding the plot of novels is important both for literary scholarship and for applications such as book summarization and recommendation. Humans select and recommend novels based on a variety of preferences (such as mood, the types of featured characters, or their relations). She will present a deep recurrent autoencoder model that learns richly structured multi-view plot representations from raw book text, approximating such preferences. The learned multi-view representations yield more coherent book clusters than less structured representations, and they are interpretable, making them useful for further literary analysis or for labeling the emerging clusters.
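The "multi-view" idea can be sketched in a few lines. The following is a hypothetical, untrained illustration (not the talk's actual architecture): a simple recurrent encoder reads a sequence of chapter vectors and splits its final hidden state into named views (the view names "mood" and "characters", the vector sizes, and the random weights are all assumed).

```python
import math
import random

random.seed(1)
D, H = 4, 6  # chapter-vector size and hidden size (assumed for illustration)
VIEWS = {"mood": (0, 3), "characters": (3, 6)}  # named slices of the state

# Random, untrained weights; a real autoencoder would learn these by
# reconstructing the book text from the encoding.
W_in = [[random.gauss(0, 0.5) for _ in range(D)] for _ in range(H)]
W_h = [[random.gauss(0, 0.5) for _ in range(H)] for _ in range(H)]

def encode(chapters):
    """Run a tanh RNN over chapter vectors; return named view slices."""
    h = [0.0] * H
    for x in chapters:
        h = [math.tanh(sum(W_in[i][j] * x[j] for j in range(D)) +
                       sum(W_h[i][j] * h[j] for j in range(H)))
             for i in range(H)]
    return {name: h[a:b] for name, (a, b) in VIEWS.items()}

# A toy "book" of five random chapter vectors.
book = [[random.random() for _ in range(D)] for _ in range(5)]
views = encode(book)
```

Each named slice of the encoding can then be inspected or clustered separately, which is what makes the representation interpretable: a cluster of books can be explained by the view (e.g. mood) along which they are similar.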
Lea is a postdoc at the University of Edinburgh, working with Mirella Lapata. She spent summer 2017 as a visiting scholar at the Language and Cognition Lab at Stanford University. Previously, she obtained a Ph.D. from the University of Edinburgh and interned at Amazon Machine Learning in Berlin. In her research, she develops machine learning methods and computational models to gain a deeper understanding of the structure and dynamics of meaning representations, both in language and in humans.