Jacqueline He

I am a first-year Ph.D. student in the Natural Language Processing group at the University of Washington. My advisors are Pang Wei Koh and Luke Zettlemoyer. I am fortunate to be supported by the NSF Graduate Research Fellowship.

In 2022, I graduated summa cum laude from Princeton University with a B.S.E. in Computer Science and minors in Finance and Statistics & Machine Learning. I was affiliated with the Princeton Natural Language Processing group, where my primary advisor was Danqi Chen. Between undergrad and grad school, I worked as a software engineer at Meta.

I was born in Tucson, Arizona, and grew up in San Jose, California.

Email  /  Google Scholar  /  LinkedIn  /  GitHub  /  Twitter

Research

I am broadly interested in deep learning and natural language processing, with a focus on the emergent capabilities, applications, and risks of language models.

Thanks to my amazing mentors and collaborators! ☺

* denotes equal contribution

Challenges in Context-Aware Neural Machine Translation
Linghao Jin*, Jacqueline He*, Jonathan May, Xuezhe Ma
EMNLP 2023
abstract / paper / code

MABEL: Attenuating Gender Bias using Textual Entailment Data
Jacqueline He, Mengzhou Xia, Christiane Fellbaum, Danqi Chen
EMNLP 2022
abstract / paper / code

Can Rationalization Improve Robustness?
Howard Chen, Jacqueline He, Karthik Narasimhan, Danqi Chen
NAACL 2022
abstract / paper / code

This page was adapted from Jon Barron's template.