
Briland Hitaj

Do You Trust Your Model? Emerging Malware Threats in the Deep Learning Ecosystem
Mar 06, 2024

PassGPT: Password Modeling and (Guided) Generation with Large Language Models
Jun 14, 2023

Automatic Measures for Evaluating Generative Design Methods for Architects
Mar 20, 2023

Revisiting Variable Ordering for Real Quantifier Elimination using Machine Learning
Feb 27, 2023

Trust in Motion: Capturing Trust Ascendancy in Open-Source Projects using Hybrid AI
Oct 10, 2022

Adversarial Scratches: Deployable Attacks to CNN Classifiers
Apr 20, 2022

TATTOOED: A Robust Deep Neural Network Watermarking Scheme based on Spread-Spectrum Channel Coding
Feb 22, 2022

FedComm: Federated Learning as a Medium for Covert Communication
Jan 21, 2022

Capture the Bot: Using Adversarial Examples to Improve CAPTCHA Robustness to Bot Attacks
Nov 04, 2020

Scratch that! An Evolution-based Adversarial Attack against Neural Networks
Dec 05, 2019