Zhanxing Zhu

Doubly Stochastic Models: Learning with Unbiased Label Noises and Inference Stability

Apr 01, 2023
Haoyi Xiong, Xuhong Li, Boyang Yu, Zhanxing Zhu, Dongrui Wu, Dejing Dou

MonoFlow: Rethinking Divergence GANs via the Perspective of Differential Equations

Feb 03, 2023
Mingxuan Yi, Zhanxing Zhu, Song Liu

Fine-grained differentiable physics: a yarn-level model for fabrics

Feb 01, 2022
Deshan Gong, Zhanxing Zhu, Andrew J. Bulpitt, He Wang

Proceedings of ICML 2021 Workshop on Theoretic Foundation, Criticism, and Application Trend of Explainable AI

Jul 26, 2021
Quanshi Zhang, Tian Han, Lixin Fan, Zhanxing Zhu, Hang Su, Ying Nian Wu, Jie Ren, Hao Zhang

Positive-Negative Momentum: Manipulating Stochastic Gradient Noise to Improve Generalization

Mar 31, 2021
Zeke Xie, Li Yuan, Zhanxing Zhu, Masashi Sugiyama

Amata: An Annealing Mechanism for Adversarial Training Acceleration

Dec 15, 2020
Nanyang Ye, Qianxiao Li, Xiao-Yun Zhou, Zhanxing Zhu

Knowledge Distillation in Wide Neural Networks: Risk Bound, Data Efficiency and Imperfect Teacher

Oct 20, 2020
Guangda Ji, Zhanxing Zhu

Neural Approximate Sufficient Statistics for Implicit Models

Oct 20, 2020
Yanzhi Chen, Dinghuai Zhang, Michael Gutmann, Aaron Courville, Zhanxing Zhu
