Kaiyan Zhang (张开颜)
PhD Candidate, Tsinghua University
I am a final-year PhD candidate at the Department of Electronic Engineering, Tsinghua University, under the guidance of Professor Bowen Zhou. I earned B.S. (2020) and M.S. (2022) degrees in Computer Science and Technology from the Harbin Institute of Technology (HIT), where I was supervised by Weinan Zhang and Ting Liu in the HIT-SCIR lab.
My current research focuses on pushing the boundaries of domain-specific superintelligence (ExpertAGI), enabling AI systems to achieve expert-level reasoning and collaboration in high-value, practical scenarios. My research directions include:
- Scalable Learning (e.g., RL): Developing novel frameworks for scalable reinforcement learning, such as TTRL (test-time RL with unlabeled data; see the sketch after this list), SSRL (self-search RL leveraging intrinsic model capabilities), MARTI (multi-agent RL coordination), and OpenPRM (scalable process reward modeling), all aiming to reduce supervision costs and unlock self-improving LLMs.
- Collaborative Intelligence: Designing mechanisms for model cooperation and synergy, including CRaSh (efficient fine-tuning via clustering and sharing), CoGenesis (secure collaboration between large and small models), FS-Gen (unified laws of collaborative decoding), and MARTI, to empower collective intelligence among agents.
- Scientific Intelligence: Applying LLMs to scientific discovery, with projects such as UltraMedical (generalist biomedical models), a hypothesis proposer (autonomous scientific hypothesis generation), and ReviewRL (reinforcement learning for automated scientific review), advancing AI's role in research and innovation.
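To make the first direction concrete, here is a minimal sketch of the label-free reward idea behind TTRL: sample several rollouts for an unlabeled question, treat the majority-voted answer as a pseudo-label, and reward agreement with it. The function name and answer strings below are illustrative, not the released TTRL API; in practice the resulting rewards feed a standard policy-gradient update (e.g., GRPO).

```python
from collections import Counter

def majority_vote_rewards(answers: list[str]) -> list[float]:
    """Label-free rewards for one unlabeled question (TTRL-style sketch).

    The majority answer across sampled rollouts serves as a pseudo-label;
    rollouts that agree with it receive reward 1.0, the rest 0.0.
    """
    majority, _count = Counter(answers).most_common(1)[0]
    return [1.0 if ans == majority else 0.0 for ans in answers]

# Illustrative: final answers extracted from 8 sampled rollouts.
rollout_answers = ["42", "42", "7", "42", "42", "13", "42", "7"]
print(majority_vote_rewards(rollout_answers))
# [1.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 1.0]
```

The point of the sketch is that no ground-truth label appears anywhere: the model's own consensus stands in for supervision, which is what lets RL run at test time on unlabeled data.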
I expect to graduate in June 2026. My CV is available here.
news
| Date | News |
| --- | --- |
| Sep 19, 2025 | TTRL |
| Sep 11, 2025 | Excited to share our new survey paper on RL for Large Reasoning Models. |
| Aug 21, 2025 | One paper is accepted to EMNLP 2025 (see ReviewRL). |
| Aug 15, 2025 | We investigate agentic search RL without relying on external search engines, while maintaining strong sim-to-real generalization (see SSRL). |
| Jun 26, 2025 | Two papers are accepted to ICCV 2025; congrats to the collaborators. |
| May 27, 2025 | We are very excited to release MARTI, a framework for LLM-based Multi-Agent Reinforced Training and Inference (see MARTI). |
| May 16, 2025 | Two papers are accepted to ACL 2025 Main; congrats to the collaborators. |
| May 14, 2025 | Shared our latest work on TTS, RL, and TTRL at QingkeTalk. |
| May 02, 2025 | Four papers are accepted to ICML 2025; congrats to the collaborators. |
| Apr 23, 2025 | We release Test-time Reinforcement Learning (TTRL), which investigates reinforcement learning on data without explicit labels for reasoning tasks in LLMs (see TTRL). |
| Mar 31, 2025 | We release a collection of RL recipes (see Awesome-RL-Reasoning-Recipes). |
| Mar 24, 2025 | Video-T1 is released, the first work to evaluate TTS on video generation (see Video-T1). |
| Feb 10, 2025 | We explore compute-optimal test-time scaling (see compute-optimal-tts). |
| Jan 23, 2025 | One first-author paper is accepted to ICLR 2025 (see OpenPRM). |
| Dec 24, 2024 | One paper is accepted to AAAI 2025 (congrats to Xinwei). |
| Sep 27, 2024 | One first-author paper is accepted to NeurIPS 2024 D&B Track (see UltraMedical). |
| Sep 20, 2024 | One paper is accepted to EMNLP 2024 (see LPA). |
| Jul 10, 2024 | One co-first-author paper is accepted to COLM 2024 (see LLM4BioHypoGen). |
| May 16, 2024 | Two papers are accepted to ACL 2024 (one first-author; see CoGenesis). |
| Mar 13, 2024 | One paper is accepted to NAACL 2024 (see PAD). |
| Oct 06, 2023 | One first-author paper is accepted to EMNLP 2023 (see CRaSh). |
selected publications
- OpenPRM: Building Open-domain Process-based Reward Models with Preference Trees. In The Thirteenth International Conference on Learning Representations (ICLR), 2025.