Zhilin Yang
CodeGeeX: A Pre-Trained Model for Code Generation with Multilingual Evaluations on HumanEval-X
Mar 30, 2023
Qinkai Zheng, Xiao Xia, Xu Zou, Yuxiao Dong, Shan Wang, Yufei Xue, Zihan Wang, Lei Shen, Andi Wang, Yang Li, Teng Su, Zhilin Yang, Jie Tang

Learning to Detect Noisy Labels Using Model-Based Features
Dec 28, 2022
Zhihao Wang, Zongyu Lin, Peiqi Liu, Guidong Zheng, Junjie Wen, Xianxin Chen, Yujun Chen, Zhilin Yang

A Universal Discriminator for Zero-Shot Generalization
Nov 15, 2022
Haike Xu, Zongyu Lin, Jing Zhou, Yanan Zheng, Zhilin Yang

Zero-Label Prompt Selection
Nov 09, 2022
Chonghua Liao, Yanan Zheng, Zhilin Yang

Prompt-Based Metric Learning for Few-Shot NER
Nov 08, 2022
Yanru Chen, Yanan Zheng, Zhilin Yang

GPS: Genetic Prompt Search for Efficient Few-shot Learning
Oct 31, 2022
Hanwei Xu, Yujun Chen, Yulun Du, Nan Shao, Yanggang Wang, Haiyu Li, Zhilin Yang

ZeroPrompt: Scaling Prompt-Based Pretraining to 1,000 Tasks Improves Zero-Shot Generalization
Jan 18, 2022
Hanwei Xu, Yujun Chen, Yulun Du, Nan Shao, Yanggang Wang, Haiyu Li, Zhilin Yang

NLP From Scratch Without Large-Scale Pretraining: A Simple and Efficient Framework
Nov 07, 2021
Xingcheng Yao, Yanan Zheng, Xiaocong Yang, Zhilin Yang

P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks
Oct 18, 2021
Xiao Liu, Kaixuan Ji, Yicheng Fu, Zhengxiao Du, Zhilin Yang, Jie Tang

FewNLU: Benchmarking State-of-the-Art Methods for Few-Shot Natural Language Understanding
Sep 27, 2021
Yanan Zheng, Jing Zhou, Yujie Qian, Ming Ding, Jian Li, Ruslan Salakhutdinov, Jie Tang, Sebastian Ruder, Zhilin Yang