Zuowen Wang

Exploiting Symmetric Temporally Sparse BPTT for Efficient RNN Training

Dec 14, 2023
Xi Chen, Chang Gao, Zuowen Wang, Longbiao Cheng, Sheng Zhou, Shih-Chii Liu, Tobi Delbruck

3ET: Efficient Event-based Eye Tracking using a Change-Based ConvLSTM Network

Aug 22, 2023
Qinyu Chen, Zuowen Wang, Shih-Chii Liu, Chang Gao

Exploiting Spatial Sparsity for Event Cameras with Visual Transformers

Feb 10, 2022
Zuowen Wang, Yuhuang Hu, Shih-Chii Liu

Understanding (Non-)Robust Feature Disentanglement and the Relationship Between Low- and High-Dimensional Adversarial Attacks

Apr 04, 2020
Zuowen Wang, Leo Horne

Invariance-inducing regularization using worst-case transformations suffices to boost accuracy and spatial robustness

Jun 26, 2019
Fanny Yang, Zuowen Wang, Christina Heinze-Deml
