
BlackBox NLP: What are we looking for, and where do we stand?

When:
Thursday, January 30, 2020, 11:00am - 12:00pm PST
Where:
10th floor conference room: CR#1014
This event is open to the public.
Type:
NL Seminar
Speaker:
Sarah Wiegreffe (Georgia Tech)
Video:
https://www.youtube.com/watch?v=MDohcEYKSbA
Description:

Abstract: The widespread adoption of deep learning in NLP has led to a new state of the art on many tasks. However, neural networks are complex systems that are hard to interpret, leaving researchers with little ability to say *why* their models perform so well. As a consequence, interpretability and explainability hold new relevance. In this talk, I will present case studies from the subfield of interpretability for NLP, along with the research goals of the subtopics that fall under this umbrella. In particular, I will present a case study of the conditions necessary for attention modules to serve as explanations of classification model predictions, as well as a clinical application of attention mechanisms to physician decision support. I will conclude by discussing future directions, including natural language explanations for reinforcement learning systems.
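For context on the attention case study, the sketch below (not taken from the talk or the speaker's papers; all class and parameter names are illustrative) shows the kind of attention-based text classifier whose per-token attention weights are often read out alongside a prediction. Whether such weights constitute faithful explanations is the question the case study examines.

```python
# Minimal illustrative sketch, assuming PyTorch. Names and hyperparameters
# are hypothetical, not from the talk.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionClassifier(nn.Module):
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=64, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                               bidirectional=True)
        # Additive attention: one scalar score per token position.
        self.attn_score = nn.Linear(2 * hidden_dim, 1)
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len)
        states, _ = self.encoder(self.embed(token_ids))  # (batch, seq_len, 2*hidden)
        scores = self.attn_score(states).squeeze(-1)     # (batch, seq_len)
        weights = F.softmax(scores, dim=-1)              # attention distribution
        context = torch.bmm(weights.unsqueeze(1), states).squeeze(1)
        logits = self.classifier(context)
        # Returning the weights lets one inspect which tokens the model
        # attended to for a given prediction -- the object whose status as
        # an "explanation" is under debate.
        return logits, weights

# Toy usage: inspect the attention distribution for one input.
model = AttentionClassifier(vocab_size=100)
tokens = torch.randint(0, 100, (1, 7))
logits, weights = model(tokens)
print(weights)  # one weight per token; sums to 1 across the sequence
```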

Bio: Sarah Wiegreffe is a Computer Science PhD student in the School of Interactive Computing at Georgia Tech. Her research lies at the intersection of machine learning and NLP, with a particular interest in interpretability, explainability, and model robustness. In the past, she has worked on clinical applications of NLP and ML. During her PhD, she has held research internships at Google AI and Sutter Health. She obtained her B.S. in Data Science from the College of Charleston. In her free time, Sarah enjoys rock climbing, traveling, and rock music.
