Hi! I am a third-year PhD student at MIT CSAIL, in the Language & Intelligence (LINGO) Lab, advised by Jacob Andreas. I have broad interests within natural language processing; some high-level themes of my recent work include:
- Probing for representations of meaning in language models
- Using language models for natural-language and grounded tasks. (What characterizes the space of tasks humans may be interested in? How do we transfer linguistic knowledge to grounded settings? How do we best specify human intent to a language model?)
- Improving the coherence and faithfulness of machine generations
I am funded by an NDSEG Fellowship and a Clare Boothe Luce Graduate Fellowship [Press]. Previously, I spent a year at Facebook AI Applied Research, and before that, I obtained my B.S. in Computer Science at the University of Washington, where I worked with Luke Zettlemoyer. More details are in my CV.
Recent Papers
This list is updated intermittently. For the latest papers, please check my Google Scholar.
- Measuring and Manipulating Knowledge Representations in Language Models
  Evan Hernandez, Belinda Z. Li, Jacob Andreas
  arXiv preprint
- LaMPP: Language Models as Probabilistic Priors for Perception and Action
  Belinda Z. Li, William Chen, Pratyusha Sharma, Jacob Andreas
  arXiv preprint
- Language Modeling with Latent Situations
  Belinda Z. Li, Maxwell Nye, Jacob Andreas
  arXiv preprint
- Quantifying Adaptability in Pre-trained Language Models with 500 Tasks
  Belinda Z. Li, Jane Yu, Madian Khabsa, Luke Zettlemoyer, Alon Halevy, Jacob Andreas
  NAACL, 2022
- Implicit Representations of Meaning in Neural Language Models
  Belinda Z. Li, Maxwell Nye, Jacob Andreas
  ACL, 2021
- On Unifying Misinformation Detection
  Nayeon Lee, Belinda Z. Li, Sinong Wang, Pascale Fung, Hao Ma, Wen-tau Yih, and Madian Khabsa
  NAACL, 2021
- On the Influence of Masking Policies in Intermediate Pre-training
  Qinyuan Ye, Belinda Z. Li, Sinong Wang, Benjamin Bolte, Hao Ma, Wen-tau Yih, Xiang Ren, and Madian Khabsa
  EMNLP, 2021
- Studying Strategically: Learning to Mask for Closed-book QA
  Qinyuan Ye, Belinda Z. Li, Sinong Wang, Benjamin Bolte, Hao Ma, Wen-tau Yih, Xiang Ren, and Madian Khabsa
  arXiv preprint
- Efficient One-Pass End-to-End Entity Linking for Questions
  Belinda Z. Li, Sewon Min, Srinivasan Iyer, Yashar Mehdad, and Wen-tau Yih
  EMNLP, 2020
- Linformer: Self-Attention with Linear Complexity
  Sinong Wang, Belinda Z. Li, Madian Khabsa, Han Fang, and Hao Ma
  arXiv preprint
- Language Models as Fact Checkers?
  Nayeon Lee, Belinda Z. Li, Sinong Wang, Wen-tau Yih, Hao Ma, and Madian Khabsa
  FEVER (Fact Extraction and VERification) Workshop @ ACL, 2020
- Active Learning for Coreference Resolution using Discrete Annotation
  Belinda Z. Li, Gabriel Stanovsky, and Luke Zettlemoyer
  ACL, 2020
Misc
Outside of work…
- I organize with the MIT Graduate Student Union. We're fighting for a contract with decent wages, benefits, real recourse, equity for international workers, and more. Please join us!
- I also dance ballet and have recently started (indoor) bouldering.
Fun facts:
- Noam Chomsky is my great(x3)-grand-advisor.
- Under very generous definitions, my Erdős–Bacon number is 7. I eagerly welcome any collaborators – academic or entertainment – willing to lower that number.