Liang Ding

Kernel Multigrid: Accelerate Back-fitting via Sparse Gaussian Process Regression

Mar 30, 2024

Mitigating Hallucinations in Large Vision-Language Models with Instruction Contrastive Decoding

Mar 27, 2024

Take Care of Your Prompt Bias! Investigating and Mitigating Prompt Bias in Factual Knowledge Extraction

Mar 26, 2024

Building Accurate Translation-Tailored LLMs with Language Aware Instruction Tuning

Mar 21, 2024

Towards Training A Chinese Large Language Model for Anesthesiology

Mar 05, 2024

Healthcare Copilot: Eliciting the Power of General LLMs for Medical Consultation

Feb 20, 2024

Revisiting Knowledge Distillation for Autoregressive Language Models

Feb 19, 2024

DB-LLM: Accurate Dual-Binarization for Efficient LLMs

Feb 19, 2024

ROSE Doesn't Do That: Boosting the Safety of Instruction-Tuned Large Language Models with Reverse Prompt Contrastive Decoding

Feb 19, 2024

Mitigating Reward Hacking via Information-Theoretic Reward Modeling

Feb 16, 2024