
Feijun Jiang

CRPE: Expanding The Reasoning Capability of Large Language Model for Code Generation

May 15, 2025

PMMT: Preference Alignment in Multilingual Machine Translation via LLM Distillation

Oct 15, 2024

Building Decision Making Models Through Language Model Regime

Aug 12, 2024

CO3: Low-resource Contrastive Co-training for Generative Conversational Query Rewrite

Mar 18, 2024

A unified multichannel far-field speech recognition system: combining neural beamforming with attention based end-to-end model

Jan 05, 2024

Improving Audio-Visual Speech Recognition by Lip-Subword Correlation Based Visual Pre-training and Cross-Modal Fusion Encoder

Aug 14, 2023

Network Pruning Spaces

Apr 19, 2023

McQueen: a Benchmark for Multimodal Conversational Query Rewrite

Oct 23, 2022

Bootstrap Latent Representations for Multi-modal Recommendation

Jul 13, 2022

iEmoTTS: Toward Robust Cross-Speaker Emotion Transfer and Control for Speech Synthesis based on Disentanglement between Prosody and Timbre

Jun 29, 2022