
Naiqiang Tan

R1-Compress: Long Chain-of-Thought Compression via Chunk Compression and Search

May 22, 2025

Not All Thoughts are Generated Equal: Efficient LLM Reasoning via Multi-Turn Reinforcement Learning

May 17, 2025

AdaR1: From Long-CoT to Hybrid-CoT via Bi-Level Adaptive Reasoning Optimization

Apr 30, 2025

Bag of Tricks for Inference-time Computation of LLM Reasoning

Feb 12, 2025

Panacea: Mitigating Harmful Fine-tuning for Large Language Models via Post-fine-tuning Perturbation

Jan 30, 2025

Interpretable Cascading Mixture-of-Experts for Urban Traffic Congestion Prediction

Jun 14, 2024