Xin Jiang

Towards Efficient Post-training Quantization of Pre-trained Language Models

Sep 30, 2021
Haoli Bai, Lu Hou, Lifeng Shang, Xin Jiang, Irwin King, Michael R. Lyu

DyLex: Incorporating Dynamic Lexicons into BERT for Sequence Labeling

Sep 22, 2021
Baojun Wang, Zhao Zhang, Kun Xu, Guang-Yuan Hao, Yuyang Zhang, Lifeng Shang, Linlin Li, Xiao Chen, Xin Jiang, Qun Liu

Improving Unsupervised Question Answering via Summarization-Informed Question Generation

Sep 16, 2021
Chenyang Lyu, Lifeng Shang, Yvette Graham, Jennifer Foster, Xin Jiang, Qun Liu

CINS: Comprehensive Instruction for Few-shot Learning in Task-oriented Dialog Systems

Sep 14, 2021
Fei Mi, Yitong Li, Yasheng Wang, Xin Jiang, Qun Liu

UniMS: A Unified Framework for Multimodal Summarization with Knowledge Distillation

Sep 13, 2021
Zhengkun Zhang, Xiaojun Meng, Yasheng Wang, Xin Jiang, Qun Liu, Zhenglu Yang

CINS: Comprehensive Instruction for Few-shot Learning in Task-oriented Dialog Systems

Sep 10, 2021
Fei Mi, Yitong Li, Yasheng Wang, Xin Jiang, Qun Liu

SynCoBERT: Syntax-Guided Multi-Modal Contrastive Pre-Training for Code Representation

Sep 09, 2021
Xin Wang, Yasheng Wang, Fei Mi, Pingyi Zhou, Yao Wan, Xiao Liu, Li Li, Hao Wu, Jin Liu, Xin Jiang

NumGPT: Improving Numeracy Ability of Generative Pre-trained Models

Sep 07, 2021
Zhihua Jin, Xin Jiang, Xingbo Wang, Qun Liu, Yong Wang, Xiaozhe Ren, Huamin Qu

Generate & Rank: A Multi-task Framework for Math Word Problems

Sep 07, 2021
Jianhao Shen, Yichun Yin, Lin Li, Lifeng Shang, Xin Jiang, Ming Zhang, Qun Liu

Integrating Regular Expressions with Neural Networks via DFA

Sep 07, 2021
Shaobo Li, Qun Liu, Xin Jiang, Yichun Yin, Chengjie Sun, Bingquan Liu, Zhenzhou Ji, Lifeng Shang
