Hanxu Zhou

Understanding Time Series Anomaly State Detection through One-Class Classification
Feb 03, 2024
Hanxu Zhou, Yuan Zhang, Guangjie Leng, Ruofan Wang, Zhi-Qin John Xu

Understanding the Initial Condensation of Convolutional Neural Networks
May 17, 2023
Zhangchen Zhou, Hanxu Zhou, Yuqing Li, Zhi-Qin John Xu

Empirical Phase Diagram for Three-layer Neural Networks with Infinite Width
May 24, 2022
Hanxu Zhou, Qixuan Zhou, Zhenyuan Jin, Tao Luo, Yaoyu Zhang, Zhi-Qin John Xu

A variance principle explains why dropout finds flatter minima
Nov 01, 2021
Zhongwang Zhang, Hanxu Zhou, Zhi-Qin John Xu

Towards Understanding the Condensation of Two-layer Neural Networks at Initial Training
May 29, 2021
Zhi-Qin John Xu, Hanxu Zhou, Tao Luo, Yaoyu Zhang

Deep frequency principle towards understanding why deeper learning is faster
Jul 28, 2020
Zhi-Qin John Xu, Hanxu Zhou