Yichun Yin

FPT: Improving Prompt Tuning Efficiency via Progressive Training
Nov 13, 2022
Yufei Huang, Yujia Qin, Huadong Wang, Yichun Yin, Maosong Sun, Zhiyuan Liu, Qun Liu

bert2BERT: Towards Reusable Pretrained Language Models
Oct 14, 2021
Cheng Chen, Yichun Yin, Lifeng Shang, Xin Jiang, Yujia Qin, Fengyu Wang, Zhi Wang, Xiao Chen, Zhiyuan Liu, Qun Liu

Generate & Rank: A Multi-task Framework for Math Word Problems
Sep 07, 2021
Jianhao Shen, Yichun Yin, Lin Li, Lifeng Shang, Xin Jiang, Ming Zhang, Qun Liu

Integrating Regular Expressions with Neural Networks via DFA
Sep 07, 2021
Shaobo Li, Qun Liu, Xin Jiang, Yichun Yin, Chengjie Sun, Bingquan Liu, Zhenzhou Ji, Lifeng Shang

AutoTinyBERT: Automatic Hyper-parameter Optimization for Efficient Pre-trained Language Models
Jul 29, 2021
Yichun Yin, Cheng Chen, Lifeng Shang, Xin Jiang, Xiao Chen, Qun Liu

Extract then Distill: Efficient and Effective Task-Agnostic BERT Distillation
Apr 24, 2021
Cheng Chen, Yichun Yin, Lifeng Shang, Zhi Wang, Xin Jiang, Xiao Chen, Qun Liu

LightMBERT: A Simple Yet Effective Method for Multilingual BERT Distillation
Mar 11, 2021
Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, Qun Liu

Improving Task-Agnostic BERT Distillation with Layer Mapping Search
Dec 11, 2020
Xiaoqi Jiao, Huating Chang, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, Qun Liu

TernaryBERT: Distillation-aware Ultra-low Bit BERT
Oct 10, 2020
Wei Zhang, Lu Hou, Yichun Yin, Lifeng Shang, Xiao Chen, Xin Jiang, Qun Liu