Yaming Yang

Attentive Knowledge-aware Graph Convolutional Networks with Collaborative Guidance for Recommendation

Sep 05, 2021
Yankai Chen, Yaming Yang, Yujing Wang, Jing Bai, Xiangchen Song, Irwin King

AceNAS: Learning to Rank Ace Neural Architectures with Weak Supervision of Weight Sharing

Aug 06, 2021
Yuge Zhang, Chenqian Yan, Quanlu Zhang, Li Lyna Zhang, Yaming Yang, Xiaotian Gao, Yuqing Yang

Syntax-BERT: Improving Pre-trained Transformers with Syntax Trees

Mar 07, 2021
Jiangang Bai, Yujing Wang, Yiren Chen, Yaming Yang, Jing Bai, Jing Yu, Yunhai Tong

Evolving Attention with Residual Convolutions

Feb 20, 2021
Yujing Wang, Yaming Yang, Jiangang Bai, Mingliang Zhang, Jing Bai, Jing Yu, Ce Zhang, Gao Huang, Yunhai Tong

How Does Supernet Help in Neural Architecture Search?

Oct 16, 2020
Yuge Zhang, Quanlu Zhang, Yaming Yang

AutoADR: Automatic Model Design for Ad Relevance

Oct 14, 2020
Yiren Chen, Yaming Yang, Hong Sun, Yujing Wang, Yu Xu, Wei Shen, Rong Zhou, Yunhai Tong, Jing Bai, Ruofei Zhang

Interpretable and Efficient Heterogeneous Graph Convolutional Network

Jun 23, 2020
Yaming Yang, Ziyu Guan, Jianxin Li, Wei Zhao, Jiangtao Cui, Quan Wang

Improving BERT with Self-Supervised Attention

Apr 29, 2020
Xiaoyu Kou, Yaming Yang, Yujing Wang, Ce Zhang, Yiren Chen, Yunhai Tong, Yan Zhang, Jing Bai

LadaBERT: Lightweight Adaptation of BERT through Hybrid Model Compression

Apr 08, 2020
Yihuan Mao, Yujing Wang, Chufan Wu, Chen Zhang, Yang Wang, Yaming Yang, Quanlu Zhang, Yunhai Tong, Jing Bai
