Yanan Zheng

A Universal Discriminator for Zero-Shot Generalization

Nov 15, 2022
Haike Xu, Zongyu Lin, Jing Zhou, Yanan Zheng, Zhilin Yang

Zero-Label Prompt Selection

Nov 09, 2022
Chonghua Liao, Yanan Zheng, Zhilin Yang

Prompt-Based Metric Learning for Few-Shot NER

Nov 08, 2022
Yanru Chen, Yanan Zheng, Zhilin Yang

On the Performance of Data Compression in Clustered Fog Radio Access Networks

Jul 01, 2022
Haonan Hu, Yan Jiang, Jiliang Zhang, Yanan Zheng, Qianbin Chen, Jie Zhang

NLP From Scratch Without Large-Scale Pretraining: A Simple and Efficient Framework

Nov 07, 2021
Xingcheng Yao, Yanan Zheng, Xiaocong Yang, Zhilin Yang

FewNLU: Benchmarking State-of-the-Art Methods for Few-Shot Natural Language Understanding

Sep 27, 2021
Yanan Zheng, Jing Zhou, Yujie Qian, Ming Ding, Jian Li, Ruslan Salakhutdinov, Jie Tang, Sebastian Ruder, Zhilin Yang

FlipDA: Effective and Robust Data Augmentation for Few-Shot Learning

Aug 13, 2021
Jing Zhou, Yanan Zheng, Jie Tang, Jian Li, Zhilin Yang

GPT Understands, Too

Mar 18, 2021
Xiao Liu, Yanan Zheng, Zhengxiao Du, Ming Ding, Yujie Qian, Zhilin Yang, Jie Tang

CPM: A Large-scale Generative Chinese Pre-trained Language Model

Dec 01, 2020
Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun
