Zhenting Qi
漆振霆 | [ch'i chen t'ing]

Welcome! I am an incoming Computer Science Ph.D. student at Harvard University, where I am honored to be co-advised by Prof. Yilun Du and Prof. Hima Lakkaraju. My research focuses on advancing Foundation Models and Generative AI, with the ultimate vision of developing intelligent and reliable AI systems that benefit human society. Motivated by this, I am generally interested in the following topics (in no particular order):
- Reasoning
  - Understanding and enhancing reasoning capabilities in foundation models
  - Developing AI systems that generalize effectively to out-of-distribution (OOD) scenarios
  - Training (multi-)agents for compositional reasoning tasks
- Reliability
  - Improving understanding of foundation models and AI systems
  - Enhancing AI controllability and robustness
  - Designing scalable methods to ensure reliability while advancing capabilities
More specifically, I am currently investigating several exciting directions:
- 🤖 Training agents for coding assistance and scientific discovery
- 🧠 Developing advanced memory mechanisms for agents
- 🔄 Training-time and test-time self-evolution
- 📊 Dynamic evaluation for reasoning/generalization of foundation models and agents
Our research group actively welcomes collaborations, and I am always excited to chat about research ideas! Please feel free to reach out at zhentingqi [at] g [dot] harvard [dot] edu.
For more information about my research, please see Google Scholar, Semantic Scholar, or DBLP.
About Me
I hold a master’s degree in Computational Science and Engineering from Harvard, and dual bachelor’s degrees in Computer Engineering from UIUC and ZJU (with highest honors). I am also a recipient of the Harvard SEAS Prize Fellowship.
I’ve had the privilege of working closely with many distinguished researchers, including (the late) Prof. Dragomir R. Radev at Yale, Prof. Volodymyr Kindratenko at UIUC, Dr. Li Lyna Zhang at Microsoft Research Asia, Prof. Chuang Gan at MIT-IBM Watson AI Lab, and Prof. James Glass at MIT.
News
May 30, 2025 | Our paper Satori: Reinforcement Learning with Chain-of-Action-Thought Enhances LLM Reasoning via Autoregressive Search 【新智元】 has been accepted.
May 01, 2025 | I will be joining Google DeepMind (Mountain View office) as a Student Researcher!
Apr 15, 2025 | I will continue my research journey at Harvard as a PhD student!
Jan 30, 2025 | Our papers Mutual Reasoning Makes Smaller LLMs Stronger Problem-Solvers 【机器之心】, Quantifying Generalization Complexity for Large Language Models, and Follow My Instruction and Spill the Beans: Scalable Data Extraction from Retrieval-Augmented Generation Systems have been accepted.