Yao Wan


Compilable Neural Code Generation with Compiler Feedback

Mar 10, 2022
Xin Wang, Yasheng Wang, Yao Wan, Fei Mi, Yitong Li, Pingyi Zhou, Jin Liu, Hao Wu, Xin Jiang, Qun Liu

(4 figures)

Attend, Memorize and Generate: Towards Faithful Table-to-Text Generation in Few Shots

Mar 01, 2022
Wenting Zhao, Ye Liu, Yao Wan, Philip S. Yu

(4 figures)

What Do They Capture? -- A Structural Analysis of Pre-Trained Language Models for Source Code

Feb 14, 2022
Yao Wan, Wei Zhao, Hongyu Zhang, Yulei Sui, Guandong Xu, Hai Jin

(4 figures)

Cross-Language Binary-Source Code Matching with Intermediate Representations

Jan 19, 2022
Yi Gui, Yao Wan, Hongyu Zhang, Huifang Huang, Yulei Sui, Guandong Xu, Zhiyuan Shao, Hai Jin

(4 figures)

DANets: Deep Abstract Networks for Tabular Data Classification and Regression

Dec 24, 2021
Jintai Chen, Kuanlun Liao, Yao Wan, Danny Z. Chen, Jian Wu

(4 figures)

FedHM: Efficient Federated Learning for Heterogeneous Models via Low-rank Factorization

Nov 29, 2021
Dezhong Yao, Wanning Pan, Yao Wan, Hai Jin, Lichao Sun

(4 figures)

HETFORMER: Heterogeneous Transformer with Sparse Attention for Long-Text Extractive Summarization

Oct 19, 2021
Ye Liu, Jian-Guo Zhang, Yao Wan, Congying Xia, Lifang He, Philip S. Yu

(4 figures)

SynCoBERT: Syntax-Guided Multi-Modal Contrastive Pre-Training for Code Representation

Sep 09, 2021
Xin Wang, Yasheng Wang, Fei Mi, Pingyi Zhou, Yao Wan, Xiao Liu, Li Li, Hao Wu, Jin Liu, Xin Jiang

(4 figures)