Shijin Wang

Generative Input: Towards Next-Generation Input Methods Paradigm

Nov 02, 2023
Keyu Ding, Yongcan Wang, Zihang Xu, Zhenzhen Jia, Shijin Wang, Cong Liu, Enhong Chen

IDOL: Indicator-oriented Logic Pre-training for Logical Reasoning

Jun 27, 2023
Zihang Xu, Ziqing Yang, Yiming Cui, Shijin Wang

JiuZhang 2.0: A Unified Chinese Pre-trained Language Model for Multi-task Mathematical Problem Solving

Jun 19, 2023
Wayne Xin Zhao, Kun Zhou, Beichen Zhang, Zheng Gong, Zhipeng Chen, Yuanhang Zhou, Ji-Rong Wen, Jing Sha, Shijin Wang, Cong Liu, Guoping Hu

Efficiently Measuring the Cognitive Ability of LLMs: An Adaptive Testing Perspective

Jun 18, 2023
Yan Zhuang, Qi Liu, Yuting Ning, Weizhe Huang, Rui Lv, Zhenya Huang, Guanhao Zhao, Zheng Zhang, Qingyang Mao, Shijin Wang, Enhong Chen

Evaluating and Improving Tool-Augmented Computation-Intensive Math Reasoning

Jun 04, 2023
Beichen Zhang, Kun Zhou, Xilin Wei, Wayne Xin Zhao, Jing Sha, Shijin Wang, Ji-Rong Wen

CSED: A Chinese Semantic Error Diagnosis Corpus

May 09, 2023
Bo Sun, Baoxin Wang, Yixuan Wang, Wanxiang Che, Dayong Wu, Shijin Wang, Ting Liu

MiniRBT: A Two-stage Distilled Small Chinese Pre-trained Model

Apr 03, 2023
Xin Yao, Ziqing Yang, Yiming Cui, Shijin Wang

Towards a Holistic Understanding of Mathematical Questions with Contrastive Pre-training

Jan 18, 2023
Yuting Ning, Zhenya Huang, Xin Lin, Enhong Chen, Shiwei Tong, Zheng Gong, Shijin Wang

Gradient-based Intra-attention Pruning on Pre-trained Language Models

Dec 15, 2022
Ziqing Yang, Yiming Cui, Xin Yao, Shijin Wang

LERT: A Linguistically-motivated Pre-trained Language Model

Nov 10, 2022
Yiming Cui, Wanxiang Che, Shijin Wang, Ting Liu
