Runxin Xu

DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models

Feb 06, 2024

A Double-Graph Based Framework for Frame Semantic Parsing

Jun 18, 2022

A Two-Stream AMR-enhanced Model for Document-level Event Argument Extraction

Apr 30, 2022

ATP: AMRize Then Parse! Enhancing AMR Parsing with PseudoAMRs

Apr 20, 2022

On Effectively Learning of Knowledge in Continual Pre-training

Apr 17, 2022

Probing Structured Pruning on Multilingual Pre-trained Models: Settings, Algorithms, and Efficiency

Apr 06, 2022

Making Pre-trained Language Models End-to-end Few-shot Learners with Contrastive Prompt Tuning

Apr 01, 2022

Focus on the Target's Vocabulary: Masked Label Smoothing for Machine Translation

Mar 11, 2022

From Dense to Sparse: Contrastive Pruning for Better Pre-trained Language Model Compression

Dec 14, 2021

An Enhanced Span-based Decomposition Method for Few-Shot Sequence Labeling

Sep 27, 2021