Universal Linguistic Inductive Biases Via Meta-Learning

Thursday, April 15, 2021, 11:00 am - 12:00 pm PDT
VIRTUAL: https://usc.zoom.us/j/99182081995
This event is open to the public.
NL Seminar
R. Thomas McCoy

Reminder: Meeting hosts only admit guests they know to the Zoom meeting. Hence, you're highly encouraged to use your USC account to sign into Zoom. If you're an outside visitor, please inform Mozhdeh Gheini (gheini at isi dot edu) beforehand so we'll be aware of your attendance and can let you in.

Abstract: Despite their impressive scores on NLP leaderboards, current neural models fall short of humans in two major ways: They require massive amounts of training data, and they generalize poorly to novel types of examples. To address these problems, we propose an approach for giving targeted linguistic inductive biases to a model, where inductive biases are factors that affect how a learner generalizes. Our approach imparts inductive biases using meta-learning, a procedure through which the model discovers how to acquire new languages more quickly via exposure to many possible languages. By controlling the properties of the languages used during meta-learning, we can control the inductive biases that meta-learning imparts. Using a case study from phonology, we show how this approach enables faster learning and more robust generalization.
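The core idea in the abstract, meta-learning an initialization that adapts quickly to any language drawn from a controlled distribution, can be sketched very loosely as a first-order MAML-style loop. The toy tasks, parameter names, and learning rates below are illustrative assumptions for a one-parameter problem, not the talk's actual setup, which uses phonological languages and neural models.

```python
import random

def inner_update(theta, target, alpha=0.1, steps=3):
    """A few gradient steps adapting to one sampled 'language'
    (toy task: minimize the squared distance to a target value)."""
    for _ in range(steps):
        theta = theta - alpha * 2 * (theta - target)
    return theta

def meta_train(task_targets, theta=0.0, beta=0.05, epochs=200):
    """First-order meta-learning: nudge the initialization toward
    parameters that adapt quickly to any task in the distribution.
    Controlling `task_targets` controls the inductive bias imparted."""
    rng = random.Random(0)
    for _ in range(epochs):
        t = rng.choice(task_targets)          # sample a possible language
        adapted = inner_update(theta, t)      # fast adaptation (inner loop)
        # first-order meta-gradient: task-loss gradient at the adapted params
        theta = theta - beta * 2 * (adapted - t)
    return theta
```

After meta-training on a family of tasks, the learned initialization yields lower post-adaptation loss on tasks from that family than an arbitrary starting point, which is the sense in which meta-learning enables faster learning.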

Tom McCoy is a PhD student in the Johns Hopkins Cognitive Science department, advised by Tal Linzen and Paul Smolensky. He studies the linguistic abilities of neural networks, focusing on inductive biases (the topic of this talk) as well as compositional structure: How can neural networks use their continuous vector representations to encode phrases and sentences?
