
Guangjing Wang

University of South Florida

Optical Lens Attack on Monocular Depth Estimation for Autonomous Driving

Oct 31, 2024

Optical Lens Attack on Deep Learning Based Monocular Depth Estimation

Sep 25, 2024

Protecting Activity Sensing Data Privacy Using Hierarchical Information Dissociation

Sep 04, 2024

The Dark Side of Human Feedback: Poisoning Large Language Models via User Inputs

Sep 01, 2024

Beyond Boundaries: A Comprehensive Survey of Transferable Attacks on AI Systems

Nov 20, 2023

PhantomSound: Black-Box, Query-Efficient Audio Adversarial Attack via Split-Second Phoneme Injection

Sep 13, 2023

Understanding Multi-Turn Toxic Behaviors in Open-Domain Chatbots

Jul 14, 2023

VSMask: Defending Against Voice Synthesis Attack via Real-Time Predictive Perturbation

May 09, 2023

A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT

Feb 18, 2023

TALCS: An Open-Source Mandarin-English Code-Switching Corpus and a Speech Recognition Baseline

Jun 27, 2022