Mohammad Javad Shafiee

OutlierNets: Highly Compact Deep Autoencoder Network Architectures for On-Device Acoustic Anomaly Detection

Mar 31, 2021
Saad Abbasi, Mahmoud Famouri, Mohammad Javad Shafiee, Alexander Wong

A Simple Fine-tuning Is All You Need: Towards Robust Deep Learning Via Adversarial Fine-tuning

Dec 25, 2020
Ahmadreza Jeddi, Mohammad Javad Shafiee, Alexander Wong

AttendNets: Tiny Deep Image Recognition Neural Networks for the Edge via Visual Attention Condensers

Sep 30, 2020
Alexander Wong, Mahmoud Famouri, Mohammad Javad Shafiee

Vulnerability Under Adversarial Machine Learning: Bias or Variance?

Aug 01, 2020
Hossein Aboutalebi, Mohammad Javad Shafiee, Michelle Karg, Christian Scharfenberger, Alexander Wong

Deep Neural Network Perception Models and Robust Autonomous Driving Systems

Mar 04, 2020
Mohammad Javad Shafiee, Ahmadreza Jeddi, Amir Nazemi, Paul Fieguth, Alexander Wong

Learn2Perturb: an End-to-end Feature Perturbation Learning to Improve Adversarial Robustness

Mar 03, 2020
Ahmadreza Jeddi, Mohammad Javad Shafiee, Michelle Karg, Christian Scharfenberger, Alexander Wong

Do Explanations Reflect Decisions? A Machine-centric Strategy to Quantify the Performance of Explainability Algorithms

Oct 29, 2019
Zhong Qiu Lin, Mohammad Javad Shafiee, Stanislav Bochkarev, Michael St. Jules, Xiao Yu Wang, Alexander Wong

Explaining with Impact: A Machine-centric Strategy to Quantify the Performance of Explainability Algorithms

Oct 16, 2019
Zhong Qiu Lin, Mohammad Javad Shafiee, Stanislav Bochkarev, Michael St. Jules, Xiao Yu Wang, Alexander Wong

State of Compact Architecture Search For Deep Neural Networks

Oct 15, 2019
Mohammad Javad Shafiee, Andrew Hryniowski, Francis Li, Zhong Qiu Lin, Alexander Wong

YOLO Nano: a Highly Compact You Only Look Once Convolutional Neural Network for Object Detection

Oct 03, 2019
Alexander Wong, Mahmoud Famouri, Mohammad Javad Shafiee, Francis Li, Brendan Chwyl, Jonathan Chung