Bo Li

Beijing Key Laboratory of Digital Media, School of Computer Science and Engineering, Beihang University, Beijing, China

AutoRedTeamer: Autonomous Red Teaming with Lifelong Attack Integration
Mar 20, 2025

MMDT: Decoding the Trustworthiness and Safety of Multimodal Foundation Models
Mar 19, 2025

Reliable and Efficient Amortized Model-based Evaluation
Mar 17, 2025

VisualWebInstruct: Scaling up Multimodal Instruction Data through Web Search
Mar 13, 2025

GaussHDR: High Dynamic Range Gaussian Splatting via Learning Unified 3D and 2D Local Tone Mapping
Mar 13, 2025

LLMs Know What to Drop: Self-Attention Guided KV Cache Eviction for Efficient Long-Context Inference
Mar 11, 2025

Training Domain Draft Models for Speculative Decoding: Best Practices and Insights
Mar 10, 2025

ULTHO: Ultra-Lightweight yet Efficient Hyperparameter Optimization in Deep Reinforcement Learning
Mar 08, 2025

EgoLife: Towards Egocentric Life Assistant
Mar 05, 2025

Towards Statistical Factuality Guarantee for Large Vision-Language Models
Feb 27, 2025