ISI Natural Language Seminar

Universal Linguistic Inductive Biases Via Meta-Learning

Event Details

Despite their impressive scores on NLP leaderboards, current neural models fall short of humans in two major ways: They require massive amounts of training data, and they generalize poorly to novel types of examples. To address these problems, we propose an approach for giving targeted linguistic inductive biases to a model, where inductive biases are factors that affect how a learner generalizes. Our approach imparts inductive biases using meta-learning, a procedure through which the model discovers how to acquire new languages more quickly via exposure to many possible languages. By controlling the properties of the languages used during meta-learning, we can control the inductive biases that meta-learning imparts. Using a case study from phonology, we show how this approach enables faster learning and more robust generalization.
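The core idea above can be sketched in a few lines. The following is a minimal, hypothetical illustration (not the speaker's actual setup): each "language" is reduced to a one-parameter regression task, the task distribution is deliberately biased (slopes clustered near 2.0), and a Reptile-style outer loop nudges the model's initialization toward whatever the inner loop learns on each task. After meta-training, the initialization itself encodes the bias, so adapting to a new language from this family is fast.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    # Hypothetical "language": y = w * x, with w drawn near 2.0.
    # The task distribution encodes the inductive bias we want to impart.
    w_true = rng.normal(loc=2.0, scale=0.1)
    x = rng.uniform(-1, 1, size=20)
    return x, w_true * x

def adapt(w, x, y, lr=0.1, steps=5):
    # Inner loop: a few gradient steps of squared-error regression,
    # i.e. "acquiring" one sampled language from limited data.
    for _ in range(steps):
        grad = 2 * np.mean((w * x - y) * x)
        w = w - lr * grad
    return w

# Outer loop (Reptile-style meta-learning): move the shared
# initialization toward the adapted weights for each task.
w_meta = 0.0
for _ in range(500):
    x, y = sample_task()
    w_adapted = adapt(w_meta, x, y)
    w_meta += 0.1 * (w_adapted - w_meta)

# w_meta ends up near 2.0: the bias of the task distribution
# is now baked into the initialization.
```

In a real instantiation the inner learner would be a neural network and the tasks would be artificial languages with controlled phonological properties, but the logic is the same: whatever is common across the meta-training languages becomes the learner's prior.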

Speaker Bio

Tom McCoy is a PhD student in the Johns Hopkins Cognitive Science department, advised by Tal Linzen and Paul Smolensky. He studies the linguistic abilities of neural networks, focusing on inductive biases (the topic of this talk) as well as compositional structure: How can neural networks use their continuous vector representations to encode phrases and sentences?