Di He

DSVT: Dynamic Sparse Voxel Transformer with Rotated Sets

Jan 15, 2023
Haiyang Wang, Chen Shi, Shaoshuai Shi, Meng Lei, Sen Wang, Di He, Bernt Schiele, Liwei Wang

Matching entropy based disparity estimation from light field

Oct 28, 2022
Ligen Shi, Chang Liu, Di He, Xing Zhao, Jun Qiu

Online Training Through Time for Spiking Neural Networks

Oct 09, 2022
Mingqing Xiao, Qingyan Meng, Zongpeng Zhang, Di He, Zhouchen Lin

Rethinking Lipschitz Neural Networks for Certified L-infinity Robustness

Oct 04, 2022
Bohang Zhang, Du Jiang, Di He, Liwei Wang

One Transformer Can Understand Both 2D & 3D Molecular Data

Oct 04, 2022
Shengjie Luo, Tianlang Chen, Yixian Xu, Shuxin Zheng, Tie-Yan Liu, Liwei Wang, Di He

Adversarial Noises Are Linearly Separable for (Nearly) Random Neural Networks

Jun 09, 2022
Huishuai Zhang, Da Yu, Yiping Lu, Di He

Is $L^2$ Physics-Informed Loss Always Suitable for Training Physics-Informed Neural Network?

Jun 04, 2022
Chuwei Wang, Shanda Li, Di He, Liwei Wang

Your Transformer May Not be as Powerful as You Expect

May 26, 2022
Shengjie Luo, Shanda Li, Shuxin Zheng, Tie-Yan Liu, Liwei Wang, Di He

METRO: Efficient Denoising Pretraining of Large Scale Autoencoding Language Models with Model Generated Signals

Apr 16, 2022
Payal Bajaj, Chenyan Xiong, Guolin Ke, Xiaodong Liu, Di He, Saurabh Tiwary, Tie-Yan Liu, Paul Bennett, Xia Song, Jianfeng Gao
