Shijun Zhang
Deep Network Approximation: Beyond ReLU to Diverse Activation Functions

Jul 13, 2023
Shijun Zhang, Jianfeng Lu, Hongkai Zhao

Figures 1–4.

Why Shallow Networks Struggle with Approximating and Learning High Frequency: A Numerical Study

Jun 29, 2023
Shijun Zhang, Hongkai Zhao, Yimin Zhong, Haomin Zhou


On Enhancing Expressive Power via Compositions of Single Fixed-Size ReLU Network

Jan 29, 2023
Shijun Zhang, Jianfeng Lu, Hongkai Zhao

Figures 1–4.

Neural Network Architecture Beyond Width and Depth

May 19, 2022
Zuowei Shen, Haizhao Yang, Shijun Zhang

Figures 1–4.

ReLU Network Approximation in Terms of Intrinsic Parameters

Nov 15, 2021
Zuowei Shen, Haizhao Yang, Shijun Zhang

Figures 1–4.

Deep Network Approximation: Achieving Arbitrary Accuracy with Fixed Number of Neurons

Jul 07, 2021
Zuowei Shen, Haizhao Yang, Shijun Zhang

Figures 1–4.

Deep Network Approximation With Accuracy Independent of Number of Neurons

Jul 06, 2021
Zuowei Shen, Haizhao Yang, Shijun Zhang

Figures 1–4.

Optimal Approximation Rate of ReLU Networks in terms of Width and Depth

Feb 28, 2021
Zuowei Shen, Haizhao Yang, Shijun Zhang

Figures 1–4.

Neural Network Approximation: Three Hidden Layers Are Enough

Oct 25, 2020
Zuowei Shen, Haizhao Yang, Shijun Zhang

Figures 1–3.

Deep Network Approximation with Discrepancy Being Reciprocal of Width to Power of Depth

Jun 22, 2020
Zuowei Shen, Haizhao Yang, Shijun Zhang

Figures 1–4.