OutSafe-Bench: A Benchmark for Multimodal Offensive Content Detection in Large Language Models

Nov 13, 2025
