
Jinming Zhao

MER 2023: Multi-label Learning, Modality Robustness, and Semi-Supervised Learning
Apr 18, 2023

Exploiting modality-invariant feature for robust multimodal emotion recognition with missing modalities
Oct 27, 2022

Self-supervised Rewiring of Pre-trained Speech Encoders: Towards Faster Fine-tuning with Less Labels in Speech Processing
Oct 24, 2022

Towards Relation Extraction From Speech
Oct 17, 2022

RedApt: An Adaptor for wav2vec 2 Encoding Faster and Smaller Speech Translation without Quality Compromise
Oct 16, 2022

Generating Synthetic Speech from SpokenVocab for Speech Translation
Oct 15, 2022

M-Adapter: Modality Adaptation for End-to-End Speech-to-Text Translation
Jul 03, 2022

M3ED: Multi-modal Multi-scene Multi-label Emotional Dialogue Database
May 09, 2022

MEmoBERT: Pre-training Model with Prompt-based Learning for Multimodal Emotion Recognition
Oct 27, 2021

It is Not as Good as You Think! Evaluating Simultaneous Machine Translation on Interpretation Data
Oct 11, 2021