Runxin Xu

Multimodal ArXiv: A Dataset for Improving Scientific Comprehension of Large Vision-Language Models

Mar 04, 2024
Lei Li, Yuqi Wang, Runxin Xu, Peiyi Wang, Xiachong Feng, Lingpeng Kong, Qi Liu

DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models

Feb 06, 2024
Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Mingchuan Zhang, Y. K. Li, Y. Wu, Daya Guo

A Double-Graph Based Framework for Frame Semantic Parsing

Jun 18, 2022
Ce Zheng, Xudong Chen, Runxin Xu, Baobao Chang

A Two-Stream AMR-enhanced Model for Document-level Event Argument Extraction

Apr 30, 2022
Runxin Xu, Peiyi Wang, Tianyu Liu, Shuang Zeng, Baobao Chang, Zhifang Sui

ATP: AMRize Then Parse! Enhancing AMR Parsing with PseudoAMRs

Apr 20, 2022
Liang Chen, Peiyi Wang, Runxin Xu, Tianyu Liu, Zhifang Sui, Baobao Chang

On Effectively Learning of Knowledge in Continual Pre-training

Apr 17, 2022
Cunxiang Wang, Fuli Luo, Yanyang Li, Runxin Xu, Fei Huang, Yue Zhang

Probing Structured Pruning on Multilingual Pre-trained Models: Settings, Algorithms, and Efficiency

Apr 06, 2022
Yanyang Li, Fuli Luo, Runxin Xu, Songfang Huang, Fei Huang, Liwei Wang

Making Pre-trained Language Models End-to-end Few-shot Learners with Contrastive Prompt Tuning

Apr 01, 2022
Ziyun Xu, Chengyu Wang, Minghui Qiu, Fuli Luo, Runxin Xu, Songfang Huang, Jun Huang

Focus on the Target's Vocabulary: Masked Label Smoothing for Machine Translation

Mar 11, 2022
Liang Chen, Runxin Xu, Baobao Chang
