
Chengqing Zong

National Laboratory of Pattern Recognition, Institute of Automation, CAS, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China

BLSP-Emo: Towards Empathetic Large Speech-Language Models

Jun 06, 2024

Self-Modifying State Modeling for Simultaneous Machine Translation

Jun 04, 2024

X-Instruction: Aligning Language Model in Low-resource Languages with Self-curated Cross-lingual Instructions

May 30, 2024

Navigating Brain Language Representations: A Comparative Analysis of Neural Language Models and Psychologically Plausible Models

Apr 30, 2024

F-MALLOC: Feed-forward Memory Allocation for Continual Learning in Neural Machine Translation

Apr 07, 2024

MapGuide: A Simple yet Effective Method to Reconstruct Continuous Language from Brain Activities

Apr 02, 2024

Computational Models to Study Language Processing in the Human Brain: A Survey

Mar 20, 2024

MulCogBench: A Multi-modal Cognitive Benchmark Dataset for Evaluating Chinese and English Computational Language Models

Mar 02, 2024

MoDS: Model-oriented Data Selection for Instruction Tuning

Nov 27, 2023

Align after Pre-train: Improving Multilingual Generative Models with Cross-lingual Alignment

Nov 14, 2023