Titles and abstracts from our external speaker seminars.
Title: An Alternative Conception of Compositional Distributional Semantics
Abstract: In this talk I will present a new framework for compositional distributional semantics in which the distributional contexts of lexemes are expressed in terms of anchored packed dependency trees (APTs). For example, the phrase “swept the wooden floor” provides not only evidence that floors can be wooden and that floors can be swept, as in traditional dependency based approaches to distributional semantics, but also that things which are wooden can be swept. By maintaining higher order grammatical structure within our distributional representations and aligning structures before composition, we can define composition operations which are sensitive to syntax. For example, “the floor swept” has a very different representation to “swept the floor”, due to the different grammatical relation, and might even be deemed implausible if there is little evidence in the corpus of floors performing actions. Within our framework, composition mutually disambiguates constituents whilst providing a structured representation which can be further composed with other elements. The uniform nature of the representation means that it is possible to compare uncontextualised lexemes, contextualised lexemes, phrases and sentences using the same similarity measure. Finally, I will present some results on a compositionality detection task for noun compounds where we achieve state-of-the-art results using the APT framework.
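The idea of aligning dependency structure before composition can be sketched in highly simplified form. In the toy Python sketch below (the path representation, the `!` inverse marker, and the merge-by-addition step are simplifications assumed for illustration, not the framework's actual definitions), each lexeme's distributional context is a count over dependency paths, and a dependent's contexts are re-anchored onto its head before merging:

```python
from collections import Counter

def inv(rel):
    """Invert a dependency relation; '!' marks the inverse direction."""
    return rel[1:] if rel.startswith('!') else '!' + rel

def reduce_path(path):
    """Cancel an edge immediately followed by its inverse, so that
    re-anchored paths collapse to their shortest form."""
    out = []
    for rel in path:
        if out and out[-1] == inv(rel):
            out.pop()
        else:
            out.append(rel)
    return tuple(out)

def offset(apt, prefix):
    """Re-anchor an APT: prepend the path from the new anchor to the
    old anchor to every dependency path, then reduce."""
    out = Counter()
    for path, count in apt.items():
        out[reduce_path(prefix + path)] += count
    return out

def compose(head, dependent, relation):
    """Compose a head with a dependent attached via `relation`: align
    the dependent's contexts to the head's anchor, then merge (simple
    addition here, standing in for the framework's merge operations)."""
    return head + offset(dependent, (relation,))

# Invented toy counts over dependency paths, anchored at each lexeme.
floor  = Counter({('amod',): 3,              # floors take adjectival modifiers
                  ('!dobj',): 5})            # floors occur as direct objects
wooden = Counter({('!amod', '!dobj'): 2})    # things that are wooden get acted on

wooden_floor = compose(floor, wooden, 'amod')
```

After alignment, the evidence from "wooden" that wooden things get acted on collapses onto the same `('!dobj',)` path as "floor"'s own direct-object evidence, which is the sense in which higher-order grammatical structure is maintained through composition.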
Title: Robust Semantics for Natural Language Processing
Abstract: The most significant obstacle to question-answering using high-performance wide-coverage parsers over massive amounts of open-domain text is that, while the answers to questions like “Who wrote ‘What Makes Sammy Run?’” are out there on the web, they are not stated in the form suggested by the question, “Budd Schulberg wrote ‘What Makes Sammy Run?’”, but in some other form that paraphrases or entails the answer, such as “Budd Schulberg’s ‘What Makes Sammy Run?’”. Despite the best efforts of linguists and computational linguists, standard semantics as we know it is not provided in a form that supports practical inference over the variety of expression we see in real text. This impasse has given rise to a number of proposals for a “distributional” semantics induced from text data.
The talk discusses recent work with Mike Lewis which seeks to define a novel form of semantics for content words using semi-supervised machine learning methods over unlabeled text. True paraphrases are represented by the same semantic constant. Common-sense entailment is represented directly in the lexicon, rather than delegated to meaning postulates and theorem-proving. The method can be applied cross-linguistically, in support of machine translation. Ongoing work extends the method to extract an aspect-based semantics for temporal entailment. This representation of content has interesting implications concerning the nature of the hidden language-independent language of mind that must underlie all natural languages, but which has so far proved resistant to discovery.
About the speaker: https://en.wikipedia.org/wiki/Mark_Steedman
Personal website: http://homepages.inf.ed.ac.uk/steedman/
Title: Customizing an Automated Story Generator for Constructing Plots for Musicals
Abstract: Our research efforts in Computational Creativity over the years have led to the development of an automatic generator of Russian folk tales based on Vladimir Propp’s Morphology of the Folk Tale. While we were presenting a demo of this software at Queen Mary’s Octagon last April, at the Computational Extravaganza organised by the PROSECCO project, a TV production company, Wingspan, put a challenge to us: could this software be adapted to generate plots for musicals? The musical “Beyond the Fence” opens at the Arts Theatre in the West End on Monday 22nd February. The narrative structure of its plot was generated by our PropperWryter software. My talk will cover the basics of how the original software worked, what modifications were required to adapt it to generate plots for musicals, and the efforts at knowledge resource generation that were required.
About the speaker: Pablo Gervás is an Associate Professor (Profesor Titular de Universidad) at the Departamento de Ingeniería del Software e Inteligencia Artificial, Facultad de Informática, Universidad Complutense de Madrid. His research interests are focused on studying the role of narrative in human communication, with a view to applying it in human-computer interaction. He is currently active in the following lines of research: natural language generation; natural language analysis; NLP for accessibility; NLP and literary artifacts.
Personal website: http://nil.fdi.ucm.es/index.php?q=node/92
Title: Evaluating Distributional Models of Compositional Semantics
Abstract: The most commonly used measure of a distributional semantic model’s performance to date has been the degree to which it agrees with human-provided phrase similarity scores. In this talk I argue that existing intrinsic evaluations are unreliable as they make use of small and subjective gold-standard data sets and assume a notion of similarity that is independent of a particular application. Therefore, they do not necessarily measure how well a model performs in practice. I study four commonly used intrinsic datasets and demonstrate that all of them exhibit undesirable properties. Second, I propose a novel framework within which to compare word- or phrase-level DMs in terms of their ability to support document classification. My approach couples a classifier to a DM and provides a setting where classification performance is sensitive to the quality of the DM. Third, I present an empirical evaluation of several methods for building word representations and composing them within my framework. I find that the determining factor in building word representations is data quality rather than quantity; in some cases only a small amount of unlabelled data is required to reach peak performance. Neural algorithms for building single-word representations perform better than counting-based ones regardless of what composition is used, but simple composition algorithms can outperform more sophisticated competitors.
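The proposed evaluation setting can be illustrated with a minimal sketch, under invented toy word vectors and labels: word representations are composed into document vectors and coupled to a simple nearest-centroid classifier, so that classification accuracy is sensitive to the quality of the underlying distributional model (the talk's actual framework and classifier are more elaborate):

```python
import numpy as np

def doc_vector(doc, word_vectors, dim):
    """Additive composition: a document is the sum of its word vectors."""
    vecs = [word_vectors[w] for w in doc.split() if w in word_vectors]
    return np.sum(vecs, axis=0) if vecs else np.zeros(dim)

def evaluate_dm(word_vectors, train_docs, test_docs, dim):
    """Couple a nearest-centroid classifier to a distributional model:
    better word representations should yield better-separated label
    centroids and hence higher classification accuracy."""
    labels = sorted({y for _, y in train_docs})
    centroids = {y: np.mean([doc_vector(d, word_vectors, dim)
                             for d, dy in train_docs if dy == y], axis=0)
                 for y in labels}
    correct = sum(
        min(labels, key=lambda c: np.linalg.norm(
            doc_vector(d, word_vectors, dim) - centroids[c])) == y
        for d, y in test_docs)
    return correct / len(test_docs)

# Invented toy vectors and labels, purely for illustration.
wv = {'goal': np.array([1.0, 0.0]), 'match': np.array([0.9, 0.1]),
      'vote': np.array([0.0, 1.0]), 'law':   np.array([0.1, 0.9])}
train_docs = [('goal match', 'sport'), ('vote law', 'politics')]
test_docs  = [('match goal', 'sport'), ('law vote', 'politics')]
accuracy = evaluate_dm(wv, train_docs, test_docs, dim=2)
```

The point of the coupling is that the gold standard is now a downstream label rather than a subjective similarity score, so different word representations or composition operators can be compared on the same footing.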
Title: From quantum foundations to natural language meaning via diagrams.
Abstract: Earlier work on an entirely diagrammatic formulation of quantum theory, which is soon to appear in the form of a textbook, has somewhat surprisingly guided us towards an answer to the following question: how do we produce the meaning of a sentence given that we understand the meaning of its words? This work has practical applications in the area of natural language processing, and the resulting tools have meanwhile outperformed existing methods.
This talk requires no background in quantum theory, nor in linguistics, nor in category theory.
B. Coecke & A. Kissinger (2016). Picturing Quantum Processes: A First Course on Quantum Theory and Diagrammatic Reasoning. Cambridge University Press. 900 pages.
B. Coecke, M. Sadrzadeh & S. Clark (2010). Mathematical Foundations for a Compositional Distributional Model of Meaning. arXiv:1003.4394.
About the speaker: Bob Coecke is Professor of Quantum Foundations, Logics and Structures at the Department of Computer Science of Oxford University, where he leads the 50 member interdisciplinary Quantum Group. His research stretches from the foundations and the mathematical formulation of quantum theory to computational linguistics and cognition, using tools from computer science such as logic and category theory.
Title: Grammaticality, Acceptability and Probability: Some Modeling Experiments
Abstract: A central theoretical problem in linguistics and cognitive science is whether knowledge of language is best viewed probabilistically or categorically: as a probability distribution over structures, or as a set of well-formed structures generated by a formal grammar.
Acceptability judgment data present a problem for a purely probabilistic view, as they cannot be reduced to probabilities directly; here we discuss some methods showing how they can be naturally modelled using various scoring functions that correct for sentence length and word frequency.
I present recent experimental work using a wide variety of unsupervised language models. For test data we use corpus sentences that have had errors introduced through round-trip translation, and are then judged for acceptability using Amazon Mechanical Turk. Some of the models and scoring functions produce encouraging correlations with the human judgements. Our results provide experimental support for the view that syntactic knowledge is represented as a probabilistic system, rather than as a classical formal grammar.
(Joint work with Jey Han Lau and Shalom Lappin)
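The length- and frequency-correcting scoring functions mentioned above can be sketched concretely. The two functions below follow the general recipe used in this line of work (mean log probability, and SLOR, the syntactic log-odds ratio); the per-word log probabilities are assumed to come from some language model and are invented here for illustration:

```python
def mean_lp(word_logprobs):
    """Mean log probability: normalise the total log probability by
    sentence length, so longer sentences are not penalised simply for
    containing more words."""
    return sum(word_logprobs) / len(word_logprobs)

def slor(word_logprobs, unigram_logprobs):
    """SLOR: additionally subtract the unigram log probability of the
    same words, so sentences containing rare words are not penalised
    for lexical frequency alone."""
    return (sum(word_logprobs) - sum(unigram_logprobs)) / len(word_logprobs)

# Hypothetical per-word log probabilities from a language model, and
# unigram log probabilities for the same three words.
lm_lps      = [-2.0, -3.0, -1.0]
unigram_lps = [-4.0, -5.0, -2.0]
score = slor(lm_lps, unigram_lps)
```

A raw sentence log probability conflates grammaticality with length and word rarity; normalisations of this kind are what let an unsupervised language model's scores be correlated meaningfully with human acceptability judgements.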
About the speaker: Alexander Clark is a lecturer in the Department of Philosophy, King’s College London. His research interests relate to unsupervised learning of natural language, and its relevance to first language acquisition. He approaches this both theoretically and practically: trying to define what a good definition of learnability is, trying to prove that you can learn languages according to various models of learnability, designing algorithms, and writing computer programs that can learn models of language both from synthetic and natural examples.
Title: A Proof-Theoretic Approach to Composition in Distributional Models of Meaning
Abstract: The Distributional Compositional Categorical model of meaning (DisCoCat) of Coecke et al. (2010) employs heavy machinery from mathematics to enable compositionality to be inserted into a standard distributional setting. Compositionality here is a homomorphism/functor sending the operations/natural transformations of a syntactic algebra to the operations/linear maps of a semantic algebra. Work is currently focused on explaining several semantic phenomena in a satisfying way using such models, e.g. word/sentence ambiguity and quantifier scope ambiguity.
During this talk, I will elaborate on the syntactic side of the models. I will give the general architecture of a DisCoCat model, and show how we can use different techniques coming from categorial grammar to account for generative issues, as well as quantifier scope ambiguity. If time permits, I will show some examples of the diagrammatic language that has been developed to reason categorically about categorial grammar.
Suitable for all backgrounds.
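As a concrete, much-simplified instance of the kind of composition DisCoCat licenses, a transitive verb can be modelled as a rank-3 tensor contracted with its subject and object vectors to yield a sentence vector. The toy noun and sentence spaces and the `chase` tensor below are invented for illustration:

```python
import numpy as np

def transitive_sentence(subj, verb, obj):
    """'subj verb obj': the verb is a rank-3 tensor mapping the subject
    and object vectors (index i and k) into the sentence space (index s)."""
    return np.einsum('i,isk,k->s', subj, verb, obj)

# Hypothetical toy spaces: nouns and sentences both live in R^2.
dogs, cats = np.array([1.0, 0.0]), np.array([0.0, 1.0])
chase = np.zeros((2, 2, 2))
chase[0, 0, 1] = 1.0   # the dogs-chase-cats configuration activates axis 0
sentence = transitive_sentence(dogs, chase, cats)
```

Because the verb sees subject and object through different tensor indices, "dogs chase cats" and "cats chase dogs" receive different sentence vectors, which is the syntax-sensitivity the categorical construction guarantees.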
Title: BABBLE: Automatically inducing incremental dialogue systems from minimal data
Abstract: We present a method for inducing incremental dialogue systems from very small amounts of dialogue data, avoiding the use of dialogue acts. This is achieved by combining an incremental, semantic grammar formalism – Dynamic Syntax and Type Theory with Records (DS-TTR) – with Reinforcement Learning for word (action) selection, where language generation and dialogue management are treated as a joint decision/optimisation problem, and where the MDP model is constructed automatically. We show, using an implemented system, that this method enables a wide range of dialogue variations to be automatically captured, even when the system is trained from only a single dialogue. The variants include question-answer pairs, over- and under-answering, self- and other-corrections, clarification interaction, split-utterances, and ellipsis. For example, we show that a single training dialogue supports over 8000 new dialogues in the same domain. This generalisation property results from the structural knowledge and constraints present within the grammar, and highlights in-principle limitations of recent state-of-the-art systems that are built using machine learning techniques only.
Dimitri Kartsaklis, Matthew Purver, and Mehrnoosh Sadrzadeh (Queen Mary University of London), 15 November 2016
Title: Verb Phrase Ellipsis using Frobenius Algebras in Categorical Compositional Distributional Semantics
Abstract: We sketch the basis for a categorical compositional distributional semantic approach to the analysis of verb phrase ellipsis. Based on previous work on compositional reasoning with Frobenius Algebras, we show how the handling of the ellipsis can be hard-wired in the structure of a coordinator, providing a linear-algebraic equivalent of the fact that the sentence “John sleeps, and Bill does too” entails that “John and Bill sleep”.
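A minimal linear-algebraic sketch of the copying idea follows, with an invented verb matrix and noun vectors; the element-wise merge is one of the operations Frobenius algebras make available, chosen here for simplicity rather than as the talk's actual coordinator:

```python
import numpy as np

def intrans(subj, verb_matrix):
    """'subj verb' with an intransitive verb modelled as a matrix."""
    return verb_matrix @ subj

def vp_ellipsis(subj1, subj2, verb_matrix):
    """Frobenius-style coordinator for 'subj1 verb and subj2 does too':
    the elided VP is copied onto both subjects and the two resulting
    sentence vectors are merged element-wise."""
    return intrans(subj1, verb_matrix) * intrans(subj2, verb_matrix)

sleep = np.eye(2)                       # hypothetical verb matrix
john, bill = np.array([1.0, 2.0]), np.array([3.0, 4.0])
both = vp_ellipsis(john, bill, sleep)   # "John sleeps, and Bill does too"
```

The point of hard-wiring the copying into the coordinator is that the ellipsis needs no separate resolution step: the same verb meaning is applied to both subjects by the structure of the composition itself.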
Title: Stretching the Meaning of Words: Context-Sensitive Lexical Semantics and Compositionality
Abstract: In this talk I will first present linguistic evidence for why we need a context-sensitive model of lexical semantics to account for how lexical information, cognitive knowledge, pragmatic inference and compositionality interact in language use. This linguistic evidence focuses on different polysemy types and meaning modulations we can detect in language, particularly in verb-argument composition. Based on empirical data, I will argue that an account which stresses the interplay between lexical meaning and concepts, while preserving the distinction between the two, is more likely to help us understand how the dimensions above interact. Finally, I will address the question of how vector-based analyses of language can be useful for gaining evidence-based insight into the relational structure of the mental lexicon, its interplay with conceptual knowledge, and the way meaning is built compositionally using words and their associated information as basic blocks. I will address the puzzle of the representation of verbs and verb classes in vector space models, claiming that tensor representations are not sufficient to represent the semantic contribution of verbs in predicative use.
Elisabetta Jezek (2016). The Lexicon: An Introduction. Oxford: Oxford University Press.
Title: Exploring the nesting of Dynamic Syntax within the Predictive Processing Perspective
Abstract: In this talk, I will take as my point of departure the parallelism between the general dynamics of Clark’s Predictive Processing perspective and Dynamic Syntax. I will then go back from there to the justification for Dynamic Syntax (DS) as a grammar formalism, illustrating the underspecification-plus-update characteristic of the DS account of word order variation, bringing out how the concept of growth grounded in the underpinning tree logic imposes limits on such variation, and showing how the potential for distribution of dependencies across more than one speaker is an immediate consequence of the DS framework. If we get through all that lot, we might reflect on the significance of nesting DS within the general PP perspective; the title of the talk from which this abstract is derived was “Language: the tool for interaction — Surfing Uncertainty together”.
Title: A Compositional Distributional Inclusion Hypothesis
Abstract: The distributional inclusion hypothesis provides a pragmatic way of evaluating entailment between word vectors as represented in a distributional model of meaning. In this work, we extend this hypothesis to the realm of compositional distributional semantics, where the meanings of phrases and sentences are computed by composing their word vectors. We present a theoretical analysis of how feature inclusion is interpreted under each composition operator, and propose a measure for evaluating entailment at the phrase/sentence level. We perform experiments on four entailment datasets, showing that intersective composition in conjunction with our proposed measure achieves the highest performance.
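The two ingredients can be sketched as follows, with invented toy feature counts: intersective composition by pointwise minimum, and a simple feature-inclusion ratio as the entailment measure (a simplification of the measures studied in this line of work):

```python
def compose_min(u, v):
    """Intersective composition: pointwise minimum, keeping only the
    distributional features the two words share."""
    return {f: min(u[f], v[f]) for f in set(u) & set(v)}

def feature_inclusion(u, v):
    """Proportion of u's feature mass also found with v: 1.0 means every
    feature of u occurs with v, the strongest evidence that u entails v
    under the distributional inclusion hypothesis."""
    total = sum(u.values())
    return sum(c for f, c in u.items() if f in v) / total if total else 0.0

# Invented toy feature counts, purely for illustration.
black  = {'dark': 2, 'object': 1}
cat    = {'dark': 1, 'object': 3, 'pet': 4}
animal = {'object': 5, 'pet': 2, 'wild': 1}

black_cat = compose_min(black, cat)
```

Note that a pointwise-minimum phrase vector is automatically included in each constituent's features, so under this sketch "black cat" entails "cat" by construction; distinguishing such trivial inclusions from genuine entailments is part of what a well-designed measure has to do.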
Title: Sensing well-being using heterogeneous smartphone data and stance identification in social media conversations
Abstract: In the first part of my talk I will describe a new problem of predicting affect and well-being scales in a real-world setting of heterogeneous, longitudinal and non-synchronous textual as well as non-linguistic data that can be harvested from on-line media and mobile phones. We describe the method for collecting the heterogeneous longitudinal data, how features are extracted to address missing information and differences in temporal alignment, and how the latter are combined using multi-kernel learning to yield promising predictions of affect and well-being. In the second part of my talk I will discuss rumour stance classification as a sequential task. Rumour stance classification, the task that determines if each tweet in a collection discussing a rumour is supporting, denying, questioning or simply commenting on the rumour, has been attracting substantial interest. We introduce a novel approach that makes use of the sequence of transitions observed in tree-structured conversation threads in Twitter. The conversation threads are formed by harvesting users’ replies to one another, which results in a nested tree-like structure. Previous work addressing the stance classification task has treated each tweet as a separate unit. Here we analyse tweets by virtue of their position in a sequence and test two sequential classifiers, Linear-Chain CRF and Tree CRF, each of which makes different assumptions about the conversational structure. We experiment with eight Twitter datasets, collected during breaking news, and show that exploiting the sequential structure of Twitter conversations achieves significant improvements over the non-sequential methods.
Title: Modalities and Polarities in Categorial Grammar Logics
Abstract: Syntactic derivations, in categorial grammar, are proofs in some source logic (Lambek calculus, pregroup grammar, …); these proofs are sent to their target interpretation (set-theoretic, vector-based, …) by means of a compositional translation: a mapping that respects the types and the operations (inference rules) of the source in the transition to the target.
To find the proper division of labour between syntactic source and semantic target, one has to deal with issues of expressivity and control: how can one make sure that there are enough source derivations to obtain the intended interpretations in a compositional way, and how can one avoid that there are too many? I will discuss the interplay between two complementary strategies that have been proposed to tackle these issues: *modalities* to control word order and constituent structure, and *polarities* for procedural control over the derivation process.
Title: Coreference Resolution for Summarization, and Discussion of Work in Progress
Abstract: When reading a text, humans can usually identify the character that a mention is referring to without any trouble. There are many freely available systems which attempt to do this automatically, some of which claim to have high degrees of accuracy. With the original aim of using such systems for work on summarization, my results of comparing two of the leading systems suggest that the evaluation metrics typically used to measure their success do not give a true indication of their usefulness for practical tasks such as question answering and summarization, nor do they predict the better of the two systems for these tasks.
I would like to spend the time discussing the work above (hopefully getting some feedback for turning it into a paper), as well as my current work on detecting the intentions of characters in stories. There is a significant amount of work stating the importance of characters’ goals for the summarization and retelling of stories, but I have been unable to find previous work actually doing this and establishing a method of evaluation.
Title: Looking at Similarity in Context Using a Dynamically Contextualised Distributional Model
Abstract: My main interest is “meaning fluidity”: how words’ meanings change dynamically in human linguistic exchanges, and how surprisingly well humans are able to adapt to this and understand each other. My plan is to present my current project, in which I try to look at similarity in context using Stephen McGregor’s dynamically contextualised model, as well as future plans for an empirical study with people assessing the similarity of words within the contexts of sentences or paragraphs.
Marcus Pearce (Queen Mary), 28 February, 2017
Title: Predictive Processing of Music: Expectation, Uncertainty and Aesthetics
Abstract: Expectation is a general-purpose cognitive mechanism with strong implications for survival. With respect to music cognition, expectation has long been thought to be important in both structural perception and aesthetic response. I will present an approach that accounts for expectation in terms of statistical learning and probabilistic prediction, which is being investigated with a combination of computational, psychological and neuroscientific methods.
Massimo Poesio (University of Essex), 7 March, 2017
Title: What Crowdsourcing Tells Us About Cognition: The Case of Anaphora
Abstract: Crowdsourcing is usually seen primarily as an inexpensive and quick way of creating large resources for a variety of AI tasks. However, our work with Phrase Detectives, a game-with-a-purpose designed to collect data about anaphora, suggests that collecting large numbers of judgments about very large amounts of data also tells us a lot about the extent to which human subjects agree or disagree about the interpretation of such data. In the talk I will introduce Phrase Detectives and discuss our results and their implications.
Ekaterina Shutova (University of Cambridge), 14 March, 2017
Title: Modelling Metaphor with Linguistic and Visual Features
Abstract: Besides making our thoughts more vivid and filling our communication with richer imagery, metaphor plays a fundamental structural role in our cognition, helping us organise and project knowledge. For example, when we say “a well-oiled political machine”, we view the concept of political system in terms of a mechanism and transfer inferences from the domain of mechanisms onto our reasoning about political processes. Highly frequent in text, metaphorical language represents a significant challenge for natural language processing (NLP) systems; and large-scale, robust and accurate metaphor processing tools are needed to improve the overall quality of semantic interpretation in today’s language technology. In this talk I will introduce statistical models of metaphor processing and discuss how statistical techniques can be applied to identify patterns of the use of metaphor in linguistic data and to generalise its higher-level mechanisms from text. I will then present a metaphor processing method that simultaneously draws knowledge from linguistic and visual data and discuss the ways in which it informs the study of cognition.
Aurelie Herbelot (University of Trento), 21 March, 2017
Title: High-Risk Learning: Acquiring Concepts and Things from Tiny Data
Abstract: Humans are able to grasp the meaning of a new word extremely rapidly: often, a single sentence suffices for an educated guess. In a similar fashion, they can build a complex picture of a particular person or object from very reduced information. This extraordinary ability is still out of reach for state-of-the-art computational systems. Whilst the field of distributional semantics has made much progress in modelling the meaning of words and their composition, current systems still require exposure to huge corpora to simulate basic human semantic judgments.
Clyde Ancarno, King’s College London
Rebecca Jones, University of Birmingham
Title: Corpus Linguistics, Anthropology and Inter-Religious Relations in Multi-Religious Contexts
Abstract: The aim of our talk is both to introduce corpus linguistics and to explore some of its interdisciplinary applications. In the first half, we will introduce the field of corpus linguistics and its tools and techniques. In doing so, we will focus in particular on research where corpus linguistics informs research carried out by non-linguists or intersects with other disciplines. In the second half, our case study will report on a corpus-assisted discourse analysis of anthropological survey data (2,819 respondents in total) gathered as part of a project focussing on inter-religious relations in Yoruba-speaking parts of southwest Nigeria: ‘Knowing each other: Everyday religious encounters, social identities and tolerance in southwest Nigeria’. Using a range of corpus outputs, we will explore the differences and/or similarities in what our Nigerian survey participants say about Christianity and Islam and their practitioners. Our findings will reveal a region-specific, albeit powerful and timely, understanding of inter-religious relations based on mutual and equal engagement, reciprocation and co-operation with people of different religions.
Stephen Pulman, University of Oxford
Title: Sentiment Analysis for Fun and Profit
Abstract: A non-technical overview of work in our group over the last ten years in sentiment analysis and related techniques. I’ll also describe various practical applications of these technologies, some successful, some less so, in a variety of different areas: sports gambling, politics, conversational agents, health care monitoring, and financial market prediction.
Bio: Stephen Pulman is Professor of Computational Linguistics at Oxford University’s Department of Computer Science and a founder of TheySay. He is a Professorial Fellow of Somerville College, Oxford, and a Fellow of the British Academy. He was formerly Reader in Computational Linguistics at the University of Cambridge Computer Laboratory, and for nine years Director of an SRI International research centre in Cambridge.
Elizabeth Black, King’s College London
Steffen Zschaler, King’s College London
Title: Can we use Agent Dialogue as a Tool for Capturing Software Design Discussions?
Abstract: Software design is an important creative step in the engineering of software systems, yet we know surprisingly little about how humans actually do it. While it has been argued before that there is a need for formal frameworks to help capture design dialogues in a format amenable to analysis, there is almost no work that actually attempts to do so. In this talk, we will report on our initial attempts in this direction, exploring the application of concepts from agent dialogues to the description of actual design dialogues between human software designers. We have found that this can be done in principle and will present a set of dialogue moves that we have found useful in the coding of an example dialogue. Through this formulation of the dialogue, we were able to identify some interesting initial patterns of moves and dialogue structures. More importantly, we believe that such a representation of design dialogues can provide a useful basis for a better understanding of how designers interact. However, lots of questions remain, not least how to square the rigidity inherent in formal frameworks with the almost infinite flexibility of human dialogue. We hope that our talk will trigger interesting discussions with colleagues in your group to help us move this research forward.
Julian Hough, QMUL
Title: Deep Learning Approaches to Incremental Disfluency Detection
Abstract: Despite being marginalized by mainstream linguistic research, we claim that disfluency is a core part of dialogue content. We support this claim from a computational modelling perspective, by showing how repairs and edit terms are amenable to modelling by statistical sequence models, in line with current automatic approaches to other linguistic phenomena. From the work on the DUEL project, we present the joint task of incremental disfluency detection and utterance segmentation on dialogue data, and a simple deep learning system which performs it on transcripts and speech recognition results. We show how the constraints of the two tasks interact. Our joint-task system outperforms the equivalent individual task systems, provides competitive results and is suitable for future use in conversation agents in the psychiatric domain.
Most relevant paper: Hough, J., & Schlangen, D. (2017). Joint, Incremental Disfluency Detection and Utterance Segmentation from Speech. In Proceedings of the Annual Meeting of the European Chapter of the Association for Computational Linguistics (EACL).
Silviu Paun, QMUL
Title: Comparing Bayesian Models of Annotation
Abstract: Computational Linguistics practice for crowdsourced data analysis is moving away from the more traditional methods based on majority vote and coefficients of agreement towards the use of models of annotation. But although there has been substantial effort to develop new models, there has been much less work comparing such models on the same datasets. The aim of this work is to fill that gap. We analyse six of the best-known models of annotation, with distinct structures (pooled, unpooled and partially pooled) and a diverse set of assumptions (annotator abilities, item difficulty, or both). We carry out this evaluation using four datasets with different degrees of spamming and annotator quality, and provide guidelines for both model selection and implementation.
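As an illustration of the unpooled family of annotation models, the sketch below implements EM estimation in the style of Dawid and Skene (1979), a classic model of this kind: each annotator has its own confusion matrix, estimated jointly with the posterior over true item labels, and initialised from the majority vote that such models are replacing:

```python
import numpy as np

def dawid_skene(ann, n_classes, n_iter=50):
    """Unpooled model of annotation: `ann[i]` maps annotator ids to the
    label they gave item i (annotators may skip items). Returns the
    most probable true label for each item after EM."""
    n_items = len(ann)
    annotators = sorted({a for labels in ann for a in labels})
    # Initialise label posteriors T from the majority vote.
    T = np.zeros((n_items, n_classes))
    for i, labels in enumerate(ann):
        for l in labels.values():
            T[i, l] += 1
    T /= T.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        # M-step: class prior and per-annotator confusion matrices
        # (rows: true class, columns: label produced).
        prior = T.mean(axis=0)
        conf = {a: np.full((n_classes, n_classes), 1e-6) for a in annotators}
        for i, labels in enumerate(ann):
            for a, l in labels.items():
                conf[a][:, l] += T[i]
        for a in annotators:
            conf[a] /= conf[a].sum(axis=1, keepdims=True)
        # E-step: recompute the label posteriors from the current model.
        logT = np.tile(np.log(prior), (n_items, 1))
        for i, labels in enumerate(ann):
            for a, l in labels.items():
                logT[i] += np.log(conf[a][:, l])
        T = np.exp(logT - logT.max(axis=1, keepdims=True))
        T /= T.sum(axis=1, keepdims=True)
    return T.argmax(axis=1)
```

Unlike a majority vote, the estimated confusion matrices let the model discount spamming or systematically biased annotators, which is precisely the property that makes model choice matter on datasets with varying annotator quality.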