Tao Wang

FedCom: A Byzantine-Robust Local Model Aggregation Rule Using Data Commitment for Federated Learning

Apr 16, 2021
Bo Zhao, Peng Sun, Liming Fang, Tao Wang, Keyu Jiang

Half-Truth: A Partially Fake Audio Detection Dataset

Apr 08, 2021
Jiangyan Yi, Ye Bai, Jianhua Tao, Zhengkun Tian, Chenglong Wang, Tao Wang, Ruibo Fu

IDOL-Net: An Interactive Dual-Domain Parallel Network for CT Metal Artifact Reduction

Apr 03, 2021
Tao Wang, Wenjun Xia, Zexin Lu, Huaiqiang Sun, Yan Liu, Hu Chen, Jiliu Zhou, Yi Zhang

Auto Correcting in the Process of Translation -- Multi-task Learning Improves Dialogue Machine Translation

Mar 30, 2021
Tao Wang, Chengqi Zhao, Mingxuan Wang, Lei Li, Deyi Xiong

A Universal Model for Cross Modality Mapping by Relational Reasoning

Feb 26, 2021
Zun Li, Congyan Lang, Liqian Liang, Tao Wang, Songhe Feng, Jun Wu, Yidong Li

Attention Models for Point Clouds in Deep Learning: A Survey

Feb 22, 2021
Xu Wang, Yi Jin, Yigang Cen, Tao Wang, Yidong Li

DAN-Net: Dual-Domain Adaptive-Scaling Non-local Network for CT Metal Artifact Reduction

Feb 16, 2021
Tao Wang, Wenjun Xia, Yongqiang Huang, Huaiqiang Sun, Yan Liu, Hu Chen, Jiliu Zhou, Yi Zhang

Tokens-to-Token ViT: Training Vision Transformers from Scratch on ImageNet

Jan 28, 2021
Li Yuan, Yunpeng Chen, Tao Wang, Weihao Yu, Yujun Shi, Francis EH Tay, Jiashi Feng, Shuicheng Yan

An Investigation of Potential Function Designs for Neural CRF

Nov 11, 2020
Zechuan Hu, Yong Jiang, Nguyen Bach, Tao Wang, Zhongqiang Huang, Fei Huang, Kewei Tu

Exploring the limits of Concurrency in ML Training on Google TPUs

Nov 07, 2020
Sameer Kumar, James Bradbury, Cliff Young, Yu Emma Wang, Anselm Levskaya, Blake Hechtman, Dehao Chen, HyoukJoong Lee, Mehmet Deveci, Naveen Kumar, Pankaj Kanwar, Shibo Wang, Skye Wanderman-Milne, Steve Lacy, Tao Wang, Tayo Oguntebi, Yazhou Zu, Yuanzhong Xu, Andy Swing
