Seminars and Events
Domain-Adaptive Programming: Expanding the Boundaries of What LLMs Can Solve
Event Details
Abstract: Large Language Models (LLMs) exhibit impressive general capabilities, yet they remain brittle in specialized, high-stakes domains, where subtle reasoning errors, poor grounding, and unverified plans can lead to serious failures. In this talk, I introduce Domain-Adaptive Programming, a paradigm that optimizes LLMs to generate or leverage formal, symbolic, and programmatic structures to solve domain-specific problems. By grounding neural reasoning in formal structures, Domain-Adaptive Programming combines the flexibility of LLMs with the robustness and verifiability of symbolic systems. I will present a series of methods that operationalize this paradigm across text, vision-language, and embodied domains, including structured program generation, latent optimization for multi-step reasoning and planning, and iterative feedback loops with symbolic verification.
Meeting ID: 925 5859 4126
Passcode: 2025
Host: Dhananjay Ashok
POC: Maura Covaci
Speaker Bio
Wang Bill Zhu is a final-year Ph.D. candidate in Computer Science at the University of Southern California (USC), advised by Jesse Thomason and Robin Jia. His research lies at the intersection of natural language processing, machine learning, and vision-language reasoning, with a focus on neuro-symbolic methods. He has previously conducted research at Meta Reality Labs and Google DeepMind.
Bill has published extensively at top-tier venues, including NeurIPS, ICLR, NAACL, EMNLP, CVPR, and ACL. He actively serves as a reviewer and area chair for NLP, machine learning, computer vision, and robotics venues. He is a recipient of the NSF ACCESS Computing Grant.