
Bo Li

Beijing Key Laboratory of Digital Media, School of Computer Science and Engineering, Beihang University, Beijing, China

EAIRA: Establishing a Methodology for Evaluating AI Models as Scientific Research Assistants

Feb 27, 2025

PhenoProfiler: Advancing Phenotypic Learning for Image-based Drug Discovery

Feb 26, 2025

The Lottery LLM Hypothesis, Rethinking What Abilities Should LLM Compression Preserve?

Feb 24, 2025

IPAD: Inverse Prompt for AI Detection -- A Robust and Explainable LLM-Generated Text Detector

Feb 21, 2025

Step-Audio: Unified Understanding and Generation in Intelligent Speech Interaction

Feb 18, 2025

SafeChain: Safety of Language Models with Long Chain-of-Thought Reasoning Capabilities

Feb 17, 2025

SphereFusion: Efficient Panorama Depth Estimation via Gated Fusion

Feb 09, 2025

DuoGuard: A Two-Player RL-Driven Framework for Multilingual LLM Guardrails

Feb 07, 2025

Mediator: Memory-efficient LLM Merging with Less Parameter Conflicts and Uncertainty Based Routing

Feb 06, 2025

Can LLMs Maintain Fundamental Abilities under KV Cache Compression?

Feb 04, 2025