Xianglong Liu

Compromising Embodied Agents with Contextual Backdoor Attacks

Aug 06, 2024

Temporal Feature Matters: A Framework for Diffusion Model Quantization

Jul 28, 2024

QVD: Post-training Quantization for Video Diffusion Models

Jul 16, 2024

GenderBias-VL: Benchmarking Gender Bias in Vision Language Models via Counterfactual Probing

Jun 30, 2024

Unveiling the Safety of GPT-4o: An Empirical Study using Jailbreak Attacks

Jun 10, 2024

Jailbreak Vision Language Models via Bi-Modal Adversarial Prompt

Jun 06, 2024

LanEvil: Benchmarking the Robustness of Lane Detection to Environmental Illusions

Jun 04, 2024

SliM-LLM: Salience-Driven Mixed-Precision Quantization for Large Language Models

May 23, 2024

Selective Focus: Investigating Semantics Sensitivity in Post-training Quantization for Lane Detection

May 10, 2024

Towards Robust Physical-world Backdoor Attacks on Lane Detection

May 09, 2024