Xiaolin Hu

Department of Computer Science and Technology, Tsinghua University, Beijing, China

A Fast and Lightweight Model for Causal Audio-Visual Speech Separation (Jun 07, 2025)

Surrogate Signals from Format and Length: Reinforcement Learning for Solving Mathematical Problems without Ground Truth Answers (May 26, 2025)

AudioTrust: Benchmarking the Multifaceted Trustworthiness of Audio Large Language Models (May 22, 2025)

Time-Frequency-Based Attention Cache Memory Model for Real-Time Speech Separation (May 19, 2025)

GlyphMastero: A Glyph Encoder for High-Fidelity Scene Text Editing (May 08, 2025)

LLaVA-CMoE: Towards Continual Mixture of Experts for Large Vision-Language Models (Mar 27, 2025)

Towards Auto-Regressive Next-Token Prediction: In-Context Learning Emerges from Generalization (Feb 24, 2025)

ADePT: Adaptive Decomposed Prompt Tuning for Parameter-Efficient Fine-tuning (Jan 06, 2025)

DoTA: Weight-Decomposed Tensor Adaptation for Large Language Models (Dec 30, 2024)

Faster-GCG: Efficient Discrete Optimization Jailbreak Attacks against Aligned Large Language Models (Oct 20, 2024)