Chongli Qin

On a continuous time model of gradient descent dynamics and instability in deep learning

Feb 03, 2023

Training Generative Adversarial Networks by Solving Ordinary Differential Equations

Oct 28, 2020

Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples

Oct 27, 2020

Achieving Robustness in the Wild via Adversarial Mixing with Disentangled Representations

Dec 06, 2019

An Alternative Surrogate Loss for PGD-based Adversarial Testing

Oct 21, 2019

Adversarial Robustness through Local Linearization

Jul 04, 2019

Verification of Non-Linear Specifications for Neural Networks

Feb 25, 2019

On the Effectiveness of Interval Bound Propagation for Training Verifiably Robust Models

Nov 05, 2018