Ji Gao

Deletion Inference, Reconstruction, and Compliance in Machine (Un)Learning
Feb 07, 2022
Ji Gao, Sanjam Garg, Mohammad Mahmoody, Prashant Nalini Vasudevan

Spotting adversarial samples for speaker verification by neural vocoders
Jul 02, 2021
Haibin Wu, Po-chun Hsu, Ji Gao, Shanshan Zhang, Shen Huang, Jian Kang, Zhiyong Wu, Helen Meng, Hung-yi Lee

Learning and Certification under Instance-targeted Poisoning
May 18, 2021
Ji Gao, Amin Karbasi, Mohammad Mahmoody

Black-box Generation of Adversarial Text Sequences to Evade Deep Learning Classifiers
May 23, 2018
Ji Gao, Jack Lanchantin, Mary Lou Soffa, Yanjun Qi

Exploring the Naturalness of Buggy Code with Recurrent Neural Networks
Mar 21, 2018
Jack Lanchantin, Ji Gao

A Fast and Scalable Joint Estimator for Learning Multiple Related Sparse Gaussian Graphical Models
Mar 20, 2018
Beilun Wang, Ji Gao, Yanjun Qi

A Theoretical Framework for Robustness of (Deep) Classifiers against Adversarial Examples
Sep 27, 2017
Beilun Wang, Ji Gao, Yanjun Qi

DeepCloak: Masking Deep Neural Network Models for Robustness Against Adversarial Samples
Apr 17, 2017
Ji Gao, Beilun Wang, Zeming Lin, Weilin Xu, Yanjun Qi