Abstract: Language models excel at producing fluent text, but in domains like code and math, fluency isn’t enough --- outputs must obey strict syntactic and semantic rules. A new wave of research is rethinking decoding itself: not as a process of sampling words, but as a negotiation between probability, structure, and meaning. In this talk, I’ll explore how grammar and semantics can be embedded into the decoding loop, how we can sample from the model’s true conditional distribution under constraints, and how programmable abstractions make it possible to enforce properties like type safety or program invariants. The result is a vision of decoding that is faithful to the model yet governed by rules, pointing toward a future where LLMs generate not just plausible text, but reliably correct output.
Bio: Loris D’Antoni is a Jacobs Faculty Scholar and Associate Professor in the Department of Computer Science and Engineering at the University of California San Diego. His research helps people build trustworthy software. His work has introduced new frameworks for verifying and synthesizing programs—ranging from resilient network configurations to robust decision-making systems—and, more recently, methods for aligning language models with user intent.
He is the recipient of an NSF CAREER Award and a Microsoft Research Faculty Fellowship, and was selected as a Vilas Associate at the University of Wisconsin–Madison. He has also received Google, Amazon, and Meta Faculty Awards, and the Morris and Dorothy Rubinoff Dissertation Award. His papers have earned several best paper awards and nominations, including at TACAS, ESOP, ICDCN, and SBES.
Loris received his B.S. and M.S. in Computer Science from the University of Torino, and his Ph.D. in Computer Science from the University of Pennsylvania. Before joining UC San Diego, he was a faculty member at the University of Wisconsin–Madison.