I want to build AI systems with coherent, updateable, and interpretable models of internal and external phenomena. Specifically, my research focuses on the following three types of models:

  • World models: models of the external environment that update in the presence of new information and support coherent downstream prediction and reasoning.
  • User models: models of the user's preferences, goals, beliefs, values, learning styles, and workflows.
  • Self models: models of the AI system's own internal computations, external behaviors, and limitations.

Together, these models enable AI systems to behave more reliably and predictably, in ways that are transparent and safe for humans. Ultimately, my goal is to pave the way for AI systems that we can collaborate with and learn from—systems that empower rather than replace people.

About Me

I am a PhD candidate at MIT CSAIL, affiliated with the language & intelligence (LINGO) lab @ MIT. My advisor is Jacob Andreas. I am funded by a Clare Boothe Luce Graduate Fellowship [Press] and was a 2024 Rising Star in EECS. Previously, I spent a year at Facebook AI Applied Research, and before that, I obtained my B.S. in Computer Science at the University of Washington, where I worked with Luke Zettlemoyer. You can view more in my CV.

Representative Papers

All Papers

This list is updated intermittently. For the most up-to-date list, please check my Google Scholar.