Jirong Yi

Outlier Detection Using Generative Models with Theoretical Performance Guarantees

Oct 16, 2023
Jirong Yi, Jingchao Gao, Tianming Wang, Xiaodong Wu, Weiyu Xu

Mutual Information Learned Regressor: an Information-theoretic Viewpoint of Training Regression Systems

Nov 23, 2022
Jirong Yi, Qiaosheng Zhang, Zhen Chen, Qiao Liu, Wei Shao, Yusen He, Yaohua Wang

Mutual Information Learned Classifiers: an Information-theoretic Viewpoint of Training Deep Learning Classification Systems

Oct 03, 2022
Jirong Yi, Qiaosheng Zhang, Zhen Chen, Qiao Liu, Wei Shao

Solving Large Scale Quadratic Constrained Basis Pursuit

Apr 02, 2021
Jirong Yi

Derivation of Information-Theoretically Optimal Adversarial Attacks with Applications to Robust Machine Learning

Jul 28, 2020
Jirong Yi, Raghu Mudumbai, Weiyu Xu

Do Deep Minds Think Alike? Selective Adversarial Attacks for Fine-Grained Manipulation of Multiple Deep Neural Networks

Mar 26, 2020
Zain Khan, Jirong Yi, Raghu Mudumbai, Xiaodong Wu, Weiyu Xu

Trust but Verify: An Information-Theoretic Explanation for the Adversarial Fragility of Machine Learning Systems, and a General Defense against Adversarial Attacks

May 25, 2019
Jirong Yi, Hui Xie, Leixin Zhou, Xiaodong Wu, Weiyu Xu, Raghuraman Mudumbai

An Information-Theoretic Explanation for the Adversarial Fragility of AI Classifiers

Jan 27, 2019
Hui Xie, Jirong Yi, Weiyu Xu, Raghu Mudumbai

Outlier Detection using Generative Models with Theoretical Performance Guarantees

Oct 26, 2018
Jirong Yi, Anh Duc Le, Tianming Wang, Xiaodong Wu, Weiyu Xu
