Mingxuan Yuan

ELF: Efficient Logic Synthesis by Pruning Redundancy in Refactoring

Aug 11, 2025

Discovering Interpretable Programmatic Policies via Multimodal LLM-assisted Evolutionary Search

Aug 07, 2025

PreMoe: Lightening MoEs on Constrained Memory by Expert Pruning and Retrieval

May 23, 2025

TrimR: Verifier-based Training-Free Thinking Compression for Efficient Test-Time Scaling

May 22, 2025

Harnessing On-Device Large Language Model: Empirical Results and Implications for AI PC

May 22, 2025

Harnessing Large Language Models Locally: Empirical Results and Implications for AI PC

May 21, 2025

HyperTree Planning: Enhancing LLM Reasoning via Hierarchical Thinking

May 05, 2025

Accelerating Large Language Model Reasoning via Speculative Search

May 03, 2025

Fitness Landscape of Large Language Model-Assisted Automated Algorithm Search

May 01, 2025

Beyond Standard MoE: Mixture of Latent Experts for Resource-Efficient Language Models

Mar 29, 2025