Aws Albarghouthi
Introduction to Neural Network Verification

Sep 21, 2021
Aws Albarghouthi

Certified Robustness to Programmable Transformations in LSTMs

Feb 15, 2021
Yuhao Zhang, Aws Albarghouthi, Loris D'Antoni

Learning Differentially Private Mechanisms

Jan 04, 2021
Subhajit Roy, Justin Hsu, Aws Albarghouthi

Abstract Universal Approximation for Neural Networks

Jul 14, 2020
Zi Wang, Aws Albarghouthi, Somesh Jha

Backdoors in Neural Models of Source Code

Jun 11, 2020
Goutham Ramakrishnan, Aws Albarghouthi

Robustness to Programmable String Transformations via Augmented Abstract Training

Feb 22, 2020
Yuhao Zhang, Aws Albarghouthi, Loris D'Antoni

Semantic Robustness of Models of Source Code

Feb 07, 2020
Goutham Ramakrishnan, Jordan Henkel, Zi Wang, Aws Albarghouthi, Somesh Jha, Thomas Reps

Proving Data-Poisoning Robustness in Decision Trees

Dec 02, 2019
Samuel Drews, Aws Albarghouthi, Loris D'Antoni

Synthesizing Action Sequences for Modifying Model Decisions

Oct 09, 2019
Goutham Ramakrishnan, Yun Chan Lee, Aws Albarghouthi
