Kaiqi Zhang

Nonparametric Classification on Low Dimensional Manifolds using Overparameterized Convolutional Residual Networks
Jul 04, 2023
Kaiqi Zhang, Zixuan Zhang, Minshuo Chen, Mengdi Wang, Tuo Zhao, Yu-Xiang Wang

Why Quantization Improves Generalization: NTK of Binary Weight Neural Networks
Jun 13, 2022
Kaiqi Zhang, Ming Yin, Yu-Xiang Wang

Deep Learning meets Nonparametric Regression: Are Weight-Decayed DNNs Locally Adaptive?
Apr 21, 2022
Kaiqi Zhang, Yu-Xiang Wang

3U-EdgeAI: Ultra-Low Memory Training, Ultra-Low Bitwidth Quantization, and Ultra-Low Latency Acceleration
May 11, 2021
Yao Chen, Cole Hawkins, Kaiqi Zhang, Zheng Zhang, Cong Hao

Active Subspace of Neural Networks: Structural Analysis and Universal Attacks
Oct 29, 2019
Chunfeng Cui, Kaiqi Zhang, Talgat Daulbaev, Julia Gusak, Ivan Oseledets, Zheng Zhang

A Unified Framework of DNN Weight Pruning and Weight Clustering/Quantization Using ADMM
Nov 05, 2018
Shaokai Ye, Tianyun Zhang, Kaiqi Zhang, Jiayu Li, Jiaming Xie, Yun Liang, Sijia Liu, Xue Lin, Yanzhi Wang

Progressive Weight Pruning of Deep Neural Networks using ADMM
Nov 04, 2018
Shaokai Ye, Tianyun Zhang, Kaiqi Zhang, Jiayu Li, Kaidi Xu, Yunfei Yang, Fuxun Yu, Jian Tang, Makan Fardad, Sijia Liu, Xiang Chen, Xue Lin, Yanzhi Wang

ADAM-ADMM: A Unified, Systematic Framework of Structured Weight Pruning for DNNs
Jul 29, 2018
Tianyun Zhang, Kaiqi Zhang, Shaokai Ye, Jiayu Li, Jian Tang, Wujie Wen, Xue Lin, Makan Fardad, Yanzhi Wang

A Systematic DNN Weight Pruning Framework using Alternating Direction Method of Multipliers
Jul 25, 2018
Tianyun Zhang, Shaokai Ye, Kaiqi Zhang, Jian Tang, Wujie Wen, Makan Fardad, Yanzhi Wang