Seminars and Events
How We Achieved Human Parity in CommonsenseQA — Fusing Knowledge into Language Models
Large-scale language models (LMs) have achieved great results in many NLP applications. However, a non-negligible gap remains compared with human capability. One of the key reasons is the lack of external knowledge integration. We argue that language models should be equipped with knowledge to better understand commonsense and relations in the world. In this talk, I will introduce how to represent and fuse knowledge into language models, which involves three steps: 1) ground language into related knowledge, 2) represent that knowledge, and 3) fuse the knowledge representation into the language model. We demonstrate our proposed knowledge-boosted LMs in the following works: i) achieving human parity in CommonsenseQA, ii) a dictionary-boosted language model, and iii) knowledge-text co-pretraining.
Dr. Chenguang Zhu is a Principal Research Manager in the Microsoft Cognitive Services Research Group, where he leads the Knowledge & Language Team. His research covers knowledge-enhanced language models, text summarization, and few-shot learning. Dr. Zhu has led teams to achieve human parity on CommonsenseQA, HellaSwag, and CoQA, as well as first place on CommonGen, FEVER, ARC, and SQuAD v1.0. He holds a Ph.D. in Computer Science from Stanford University.
YOU ONLY NEED TO REGISTER ONCE TO ATTEND THE ENTIRE SERIES – We will send you email announcements with details about upcoming speakers.
Register in advance for this webinar: https://usc.zoom.us/webinar/register/WN__0VhakI6Q6i3JsasdmNWcA
After registering, you will receive an email confirmation containing information about joining the Zoom webinar.
The recording for this AI Seminar talk will be posted on our USC/ISI YouTube page within 1-2 business days: https://www.youtube.com/user/USCISI.
Host: Muhao Chen, POC: Alma Nava