Seminars and Events
Modeling American Sign Language via Linguistic Knowledge Infusion
Event Details
Abstract: As language technologies rapidly gain popularity and utility, many of the 70 million deaf and hard-of-hearing people who prefer a sign language are left behind. While NLP research into American Sign Language (ASL) is growing, it continues to face serious challenges such as data scarcity and low engagement with ASL users and experts. This presentation will cover how ASL models strongly benefit from neuro-symbolically learning the linguistic structure of signs, yielding gains in data efficiency, explainability, and generalizability. Concretely, we show that phonological, morphological, and semantic knowledge "infusion" can increase sign recognition accuracy by 30%, enable few- and zero-shot sign understanding, reduce sensitivity to signer demographics, and address longstanding research questions in sign language phonology and language acquisition.
Speaker Bio
Lee Kezar (he/they) is a fifth-year Ph.D. candidate in the USC Viterbi School of Engineering, advised by Jesse Thomason in the Grounding Language in Actions, Multimodal Observations, and Robotics (GLAMOR) Lab. Their research blends computational, linguistic, and psychological models of ASL to increase access to language technologies and advance theoretical perspectives on signing and co-speech gesture. Read more at https://leekezar.github.io