
Di He

Rethinking the Expressive Power of GNNs via Graph Biconnectivity

Jan 23, 2023

DSVT: Dynamic Sparse Voxel Transformer with Rotated Sets

Jan 15, 2023

Matching entropy based disparity estimation from light field

Oct 28, 2022

Online Training Through Time for Spiking Neural Networks

Oct 09, 2022

One Transformer Can Understand Both 2D & 3D Molecular Data

Oct 04, 2022

Rethinking Lipschitz Neural Networks for Certified L-infinity Robustness

Oct 04, 2022

Adversarial Noises Are Linearly Separable for (Nearly) Random Neural Networks

Jun 09, 2022

Is $L^2$ Physics-Informed Loss Always Suitable for Training Physics-Informed Neural Network?

Jun 04, 2022

Your Transformer May Not be as Powerful as You Expect

May 26, 2022

METRO: Efficient Denoising Pretraining of Large Scale Autoencoding Language Models with Model Generated Signals

Apr 16, 2022