Hi there. I’m Qi Li, a CS PhD student at Tsinghua University. I also serve as a research intern at HKUST-GZ. Previously, I obtained my master’s degree in engineering from Tsinghua University in 2022. Before that, I received my bachelor’s degree in engineering from Lanzhou University.

My current research interests are efficient LLMs, machine learning systems for LLMs, and understanding deep learning (especially LLMs) from both theoretical and empirical perspectives. Collaborations are always welcome; feel free to drop me an email.

🔥 News

  • 2025.09:  🎉🎉 One paper is accepted by NeurIPS 2025.
  • 2025.05:  🎉🎉 One paper is accepted by ACL 2025.
  • 2025.04:  🎉🎉 One paper is accepted by ISIT 2025.
  • 2024.09:  🎉🎉 One paper is accepted by NeurIPS 2024.
  • 2024.05:  🎉🎉 One paper is accepted by ACL 2024.

📖 Education

  • 2025.08 - current, Tsinghua University, PhD Student.
  • 2019.08 - 2022.08, Tsinghua University, Master of Engineering.
  • 2014.09 - 2018.06, Lanzhou University, Bachelor of Engineering.

📝 Selected Publications

  • Li, Q., et al. AdaEdit: Advancing Continuous Knowledge Editing for Large Language Models. Long paper at the 63rd Annual Meeting of the Association for Computational Linguistics (ACL 2025, CCF-A).
  • Qingyue Zhang, Haohao Fu, Guanbo Huang, Yaoyuan Liang, Chang Chu, Tianren Peng, Yanru Wu, Qi Li, Yang Li, Shao-Lun Huang. A High-Dimensional Statistical Method for Optimizing Transfer Quantities in Multi-Source Transfer Learning. Main track paper at the Thirty-ninth Annual Conference on Neural Information Processing Systems (NeurIPS 2025, CCF-A).
  • Peng, T., Li, Q., Huang, S.-L. On the Optimal Second-Order Convergence Rate of Minimax Estimation Under Weighted MSE. IEEE International Symposium on Information Theory, June 2025 (ISIT 2025 Oral, Tsinghua B).
  • Li, Q., et al. Should We Really Edit Language Models? On the Comprehensive Evaluation of Edited Language Models. Main track paper at the Thirty-eighth Annual Conference on Neural Information Processing Systems (NeurIPS 2024, CCF-A).
  • Li, Q., et al. Can We Continually Edit Language Models? On the Knowledge Attenuation in Sequential Model Editing. Long paper at the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024, CCF-A).
  • Li, Q., et al. Harnessing the Power of Pre-trained Vision-Language Models for Efficient Medical Report Generation. Long paper at the 32nd ACM International Conference on Information and Knowledge Management (CIKM 2023 Oral, CCF-B).

🎖 Honors and Awards

  • 2015.12, First Class Scholarship for Outstanding Students, Lanzhou University.
  • 2015.12, Outstanding Student, Lanzhou University.
  • 2016.12, Outstanding Student, Lanzhou University.
  • 2017.12, First Class Scholarship for Outstanding Students, Lanzhou University.
  • 2017.12, Outstanding Student, Lanzhou University.
  • 2020.12, Scholarship for Excellent Students, Tsinghua Shenzhen International Graduate School.

👔 Academic Services

  • Reviewer: ACL (2025), ICML (2025), ICLR (2025, 2026), NeurIPS (2024, 2025), CVPR (2026), EMNLP (2023–2025), ICASSP (2023–2025), ECAI (2023, 2024), ICME (2024, 2025)