Huaizhi Qu

Multi-Agent Debate for LLM Judges with Adaptive Stability Detection

Oct 14, 2025

DOGe: Defensive Output Generation for LLM Protection Against Knowledge Distillation

May 26, 2025

VLM-3R: Vision-Language Models Augmented with Instruction-Aligned 3D Reconstruction

May 26, 2025

Efficient MAP Estimation of LLM Judgment Performance with Prior Transfer

Apr 17, 2025

$\texttt{LucidAtlas}$: Learning Uncertainty-Aware, Covariate-Disentangled, Individualized Atlas Representations

Feb 12, 2025

Harnessing Your DRAM and SSD for Sustainable and Accessible LLM Inference with Mixed-Precision and Multi-level Caching

Oct 23, 2024

Omni-Recon: Towards General-Purpose Neural Radiance Fields for Versatile 3D Applications

Mar 17, 2024