Yu Cheng

SemAttack: Natural Textual Attacks via Different Semantic Spaces
May 16, 2022
Boxin Wang, Chejian Xu, Xiangyu Liu, Yu Cheng, Bo Li

Dual networks based 3D Multi-Person Pose Estimation from Monocular Video
May 06, 2022
Yu Cheng, Bo Wang, Robby T. Tan

ZOOMER: Boosting Retrieval on Web-scale Graphs by Regions of Interest
Mar 20, 2022
Yuezihan Jiang, Yu Cheng, Hanyu Zhao, Wentao Zhang, Xupeng Miao, Yu He, Liang Wang, Zhi Yang, Bin Cui

The Principle of Diversity: Training Stronger Vision Transformers Calls for Reducing All Levels of Redundancy
Mar 12, 2022
Tianlong Chen, Zhenyu Zhang, Yu Cheng, Ahmed Awadallah, Zhangyang Wang

A Deep Reinforcement Learning based Approach for NOMA-based Random Access Network with Truncated Channel Inversion Power Control
Feb 22, 2022
Ziru Chen, Ran Zhang, Lin X. Cai, Yu Cheng, Yong Liu

Unsupervised Temporal Video Grounding with Deep Semantic Clustering
Jan 14, 2022
Daizong Liu, Xiaoye Qu, Yinzhen Wang, Xing Di, Kai Zou, Yu Cheng, Zichuan Xu, Pan Zhou

Memory-Guided Semantic Learning Network for Temporal Sentence Grounding
Jan 03, 2022
Daizong Liu, Xiaoye Qu, Xing Di, Yu Cheng, Zichuan Xu, Pan Zhou

Adversarial GLUE: A Multi-Task Benchmark for Robustness Evaluation of Language Models
Nov 04, 2021
Boxin Wang, Chejian Xu, Shuohang Wang, Zhe Gan, Yu Cheng, Jianfeng Gao, Ahmed Hassan Awadallah, Bo Li

DSEE: Dually Sparsity-embedded Efficient Tuning of Pre-trained Language Models
Oct 30, 2021
Xuxi Chen, Tianlong Chen, Yu Cheng, Weizhu Chen, Zhangyang Wang, Ahmed Hassan Awadallah