
Liang Ding

Learning from Imperfect Data: Towards Efficient Knowledge Distillation of Autoregressive Language Models for Text-to-SQL

Oct 15, 2024

Simultaneous Computation and Memory Efficient Zeroth-Order Optimizer for Fine-Tuning Large Language Models

Oct 13, 2024

Self-Powered LLM Modality Expansion for Large Speech-Text Models

Oct 04, 2024

MQM-APE: Toward High-Quality Error Annotation Predictors with Automatic Post-Editing in LLM Translation Evaluators

Sep 22, 2024

USCD: Improving Code Generation of LLMs by Uncertainty-Aware Selective Contrastive Decoding

Sep 09, 2024

Divide, Conquer and Combine: A Training-Free Framework for High-Resolution Image Perception in Multimodal Large Language Models

Aug 28, 2024

Aligning Large Language Models from Self-Reference AI Feedback with one General Principle

Jun 17, 2024

Revisiting Catastrophic Forgetting in Large Language Model Tuning

Jun 07, 2024

Uncertainty Aware Learning for Language Model Alignment

Jun 07, 2024

Demystifying the Compression of Mixture-of-Experts Through a Unified Framework

Jun 04, 2024