Xianxuan Long

Demystifying Hybrid Thinking: Can LLMs Truly Switch Between Think and No-Think?

Oct 14, 2025

Quantized but Deceptive? A Multi-Dimensional Truthfulness Evaluation of Quantized LLMs

Aug 26, 2025

When Truthful Representations Flip Under Deceptive Instructions?

Jul 29, 2025

FAEDKV: Infinite-Window Fourier Transform for Unbiased KV Cache Compression

Jul 26, 2025

Dynamic Self-Distillation via Previous Mini-batches for Fine-tuning Small Language Models

Nov 25, 2024