Linjun Li

EAGER: Two-Stream Generative Recommender with Behavior-Semantic Collaboration

Jun 20, 2024

TransFace: Unit-Based Audio-Visual Speech Synthesizer for Talking Head Translation

Dec 23, 2023

3DRP-Net: 3D Relative Position-aware Network for 3D Visual Grounding

Jul 25, 2023

Distilling Coarse-to-Fine Semantic Matching Knowledge for Weakly Supervised 3D Visual Grounding

Jul 18, 2023

OpenSR: Open-Modality Speech Recognition via Maintaining Multi-Modality Alignment

Jun 10, 2023

AV-TranSpeech: Audio-Visual Robust Speech-to-Speech Translation

May 24, 2023

Connecting Multi-modal Contrastive Representations

May 22, 2023

MixSpeech: Cross-Modality Self-Learning with Audio-Visual Stream Mixup for Visual Speech Translation and Recognition

Mar 09, 2023

Motion Planning Transformers: One Model to Plan Them All

Jun 05, 2021

MPC-MPNet: Model-Predictive Motion Planning Networks for Fast, Near-Optimal Planning under Kinodynamic Constraints

Jan 17, 2021