Tingfeng Lan

Computer science researcher at the DS2 Lab, University of Virginia.


I am a PhD student in computer science at the \(DS^2\) Lab of the University of Virginia, advised by Prof. Yue Cheng.

Broadly, I aim to develop high-performance, scalable systems for emerging ML applications. I am currently building better computing and storage systems for distributed ML workloads, for example by rethinking multi-tier parallel processing in heterogeneous environments such as CPU-GPU collaborative computing.

Before joining UVA, I earned my bachelor’s degree from Sichuan University, where I had the privilege of being advised by Prof. Mingjie Tang and Prof. Hui Lu (UTA), focusing on optimizing large-scale recommendation-model training systems. I was also fortunate to be advised by Prof. Jianguo Wang of Purdue University, working on applying machine learning techniques to improve the performance of graph algorithms.

I am open to new opportunities and research collaborations, so please feel free to reach me at erc8gx _AT_ virginia.edu.

news

Sep 09, 2024  🎉🎉 My new homepage is now live!
Jun 20, 2024  🎉🎉 Our work “DLRover-RM: Resource Optimization for Deep Recommendation Models Training in the Cloud” was accepted to VLDB’24!
Nov 01, 2023  🎉🎉 Check out our work on “Efficient LLM Model Fine-Tune via Multi-LoRA Optimization”.
Sep 01, 2023  🎉🎉 I was invited to a research internship with the AI Infra team at Ant Group.
Aug 01, 2023  🎉🎉 We shared DLRover’s technical practices for ensuring stability in thousand-card-scale large-model training on Kubernetes (K8s). Technical Report

selected publications

  1. VLDB’24
    DLRover-RM: Resource Optimization for Deep Recommendation Models Training in the Cloud
    Qinlong Wang*, Tingfeng Lan*, Yinghao Tang, and 8 more authors
    Proceedings of the VLDB Endowment, 2024
  2. preprint
    ASPEN: High-Throughput LoRA Fine-Tuning of Large Language Models with a Single GPU
    Zhengmao Ye, Dengchun Li, Jingqi Tian, and 8 more authors
    arXiv preprint arXiv:2312.02515, 2023