Huaizhi Qu

VLM-3R: Vision-Language Models Augmented with Instruction-Aligned 3D Reconstruction

May 26, 2025

DOGe: Defensive Output Generation for LLM Protection Against Knowledge Distillation

May 26, 2025

Efficient MAP Estimation of LLM Judgment Performance with Prior Transfer

Apr 17, 2025

LucidAtlas: Learning Uncertainty-Aware, Covariate-Disentangled, Individualized Atlas Representations

Feb 12, 2025

Harnessing Your DRAM and SSD for Sustainable and Accessible LLM Inference with Mixed-Precision and Multi-level Caching

Oct 23, 2024

Omni-Recon: Towards General-Purpose Neural Radiance Fields for Versatile 3D Applications

Mar 17, 2024