Lijun Li

SafeWork-R1: Coevolving Safety and Intelligence under the AI-45° Law

Jul 24, 2025

Visual Contextual Attack: Jailbreaking MLLMs with Image-Driven Context Injection

Jul 03, 2025

Stop Summation: Min-Form Credit Assignment Is All Process Reward Model Needs for Reasoning

Apr 21, 2025

Iterative Value Function Optimization for Guided Decoding

Mar 05, 2025

Rethinking Bottlenecks in Safety Fine-Tuning of Vision Language Models

Jan 30, 2025

WorldSimBench: Towards Video Generation Models as World Simulators

Oct 23, 2024

A Spatiotemporal Hand-Eye Calibration for Trajectory Alignment in Visual(-Inertial) Odometry Evaluation

Apr 23, 2024

Assessment of Multimodal Large Language Models in Alignment with Human Values

Mar 26, 2024

EasyJailbreak: A Unified Framework for Jailbreaking Large Language Models

Mar 18, 2024

SALAD-Bench: A Hierarchical and Comprehensive Safety Benchmark for Large Language Models

Feb 08, 2024