Publications
Improving Language Models Through Context
Abstract
Contextual cues play an important role in enhancing the reasoning capabilities and adaptability of language models (LMs) on complex tasks. Effective integration of context enables LMs to interpret human requests more accurately and generate more precise responses. This thesis investigates the strategic integration and dynamic utilization of diverse forms of context (e.g., explanations as context, illustrative task examples, dialogue history, data-driven context, and model-generated context) to systematically improve LM performance. The thesis addresses three core questions:
- Date
- November 30, 2025
- Authors
- Dong-Ho Lee
- Institution
- University of Southern California