Zhilin Yang

Kimi Linear: An Expressive, Efficient Attention Architecture
Oct 30, 2025

OpenCUA: Open Foundations for Computer-Use Agents
Aug 12, 2025

Kimi K2: Open Agentic Intelligence
Jul 28, 2025

Learning to Plan Before Answering: Self-Teaching LLMs to Learn Abstract Plans for Problem Solving
Apr 28, 2025

Kimi-Audio Technical Report
Apr 25, 2025

Kimina-Prover Preview: Towards Large Formal Reasoning Models with Reinforcement Learning
Apr 15, 2025

Kimi-VL Technical Report
Apr 10, 2025

Muon is Scalable for LLM Training
Feb 24, 2025

MoBA: Mixture of Block Attention for Long-Context LLMs
Feb 18, 2025

CodeGeeX: A Pre-Trained Model for Code Generation with Multilingual Evaluations on HumanEval-X
Mar 30, 2023