Karthik Pattabiraman

Systematically Assessing the Security Risks of AI/ML-enabled Connected Healthcare Systems

Jan 30, 2024
Mohammed Elnawawy, Mohammadreza Hallajiyan, Gargi Mitra, Shahrear Iqbal, Karthik Pattabiraman

A Low-cost Strategic Monitoring Approach for Scalable and Interpretable Error Detection in Deep Neural Networks

Oct 31, 2023
Florian Geissler, Syed Qutub, Michael Paulitsch, Karthik Pattabiraman

Overconfidence is a Dangerous Thing: Mitigating Membership Inference Attacks by Enforcing Less Confident Prediction

Jul 04, 2023
Zitao Chen, Karthik Pattabiraman

Replay-based Recovery for Autonomous Robotic Vehicles from Sensor Deception Attacks

Sep 17, 2022
Pritam Dash, Guanpeng Li, Mehdi Karimibiuki, Karthik Pattabiraman

Characterizing and Improving the Resilience of Accelerators in Autonomous Robots

Oct 17, 2021
Deval Shah, Zi Yu Xue, Karthik Pattabiraman, Tor M. Aamodt

Towards a Safety Case for Hardware Fault Tolerance in Convolutional Neural Networks Using Activation Range Supervision

Aug 16, 2021
Florian Geissler, Syed Qutub, Sayanta Roychowdhury, Ali Asgari, Yang Peng, Akash Dhamasia, Ralf Graefe, Karthik Pattabiraman, Michael Paulitsch

Turning Your Strength against You: Detecting and Mitigating Robust and Universal Adversarial Patch Attack

Aug 11, 2021
Zitao Chen, Pritam Dash, Karthik Pattabiraman

TensorFI: A Flexible Fault Injection Framework for TensorFlow Applications

Apr 03, 2020
Zitao Chen, Niranjhana Narayanan, Bo Fang, Guanpeng Li, Karthik Pattabiraman, Nathan DeBardeleben

Ranger: Boosting Error Resilience of Deep Neural Networks through Range Restriction

Mar 30, 2020
Zitao Chen, Guanpeng Li, Karthik Pattabiraman
