COLLOQUIUM - Saadia Gabriel: "Human in the Machine: Towards Community-grounded AI Reasoning"

Friday, March 6, 2026, 11:00–11:50 AM (Pacific Time)

Abstract: Large language models (LLMs) like ChatGPT are increasingly determining the course of our everyday lives. They can decide what content we are likely to see on social media. They are already being deployed in high-stakes settings, e.g., as mental health support tools. So what happens when LLMs break? What are the risks posed by LLM behavior that is misaligned with human expectations and norms? How do we mitigate the negative effects of these failures on society and prevent discriminatory decision-making?
In this talk, I discuss the growing disconnect between scalability and safety in LLMs. I walk through three recent studies that highlight the need for a community-grounded approach that bridges the gap between AI systems and the users who interact with them. First, I will describe work from the UCLA Misinformation, AI & Responsible Society (MARS) lab exploring how AI agents can change the beliefs of cognitively biased users. Next, I will discuss limitations in replicating human evaluation with AI agents. Lastly, I discuss a multi-objective preference learning framework for ensuring that LLM-based mental health chatbots are aligned with both clinical guidance and patient perspectives.

Bio: Saadia Gabriel is an Assistant Professor of Computer Science at UCLA. Her work has received several best paper nominations and awards, and has been covered by a wide range of media outlets, including Forbes and TechCrunch. She was named to Forbes' 30 Under 30 2024 list and received a 2025 Google Research Scholar award. She was previously an NYU Data Science Faculty Fellow and an MIT CSAIL Postdoctoral Fellow. She received her PhD from the University of Washington.

Location: Student Success Center (SSC) 329

Type
Colloquium
Target Audience
Students
Admission
Free
Registration Required
No