
Earlence Fernandes

Misusing Tools in Large Language Models With Visual Adversarial Examples
Oct 04, 2023
Xiaohan Fu, Zihan Wang, Shuheng Li, Rajesh K. Gupta, Niloofar Mireshghallah, Taylor Berg-Kirkpatrick, Earlence Fernandes

SkillFence: A Systems Approach to Practically Mitigating Voice-Based Confusion Attacks
Dec 16, 2022
Ashish Hooda, Matthew Wallace, Kushal Jhunjhunwalla, Earlence Fernandes, Kassem Fawaz

Re-purposing Perceptual Hashing based Client Side Scanning for Physical Surveillance
Dec 08, 2022
Ashish Hooda, Andrey Labunets, Tadayoshi Kohno, Earlence Fernandes

Exploring Adversarial Robustness of Deep Metric Learning
Feb 14, 2021
Thomas Kobber Panum, Zi Wang, Pengyu Kan, Earlence Fernandes, Somesh Jha

Sequential Attacks on Kalman Filter-based Forward Collision Warning Systems
Dec 16, 2020
Yuzhe Ma, Jon Sharp, Ruizhe Wang, Earlence Fernandes, Xiaojin Zhu

Invisible Perturbations: Physical Adversarial Examples Exploiting the Rolling Shutter Effect
Nov 30, 2020
Athena Sayles, Ashish Hooda, Mohit Gupta, Rahul Chatterjee, Earlence Fernandes

Query-Efficient Physical Hard-Label Attacks on Deep Learning Visual Classification
Feb 17, 2020
Ryan Feng, Jiefeng Chen, Nelson Manohar, Earlence Fernandes, Somesh Jha, Atul Prakash

Analyzing the Interpretability Robustness of Self-Explaining Models
May 27, 2019
Haizhong Zheng, Earlence Fernandes, Atul Prakash

Physical Adversarial Examples for Object Detectors
Oct 05, 2018
Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Florian Tramer, Atul Prakash, Tadayoshi Kohno, Dawn Song

Note on Attacking Object Detectors with Adversarial Stickers
Jul 23, 2018
Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Dawn Song, Tadayoshi Kohno, Amir Rahmati, Atul Prakash, Florian Tramer
