
Dongchan Min

Learning to Generate Conditional Tri-plane for 3D-aware Expression Controllable Portrait Animation

Apr 02, 2024

Context-Preserving Two-Stage Video Domain Translation for Portrait Stylization

May 30, 2023

StyleLipSync: Style-based Personalized Lip-sync Video Generation

Apr 30, 2023

Any-speaker Adaptive Text-To-Speech Synthesis with Diffusion Models

Nov 17, 2022

StyleTalker: One-shot Style-based Audio-driven Talking Head Video Generation

Aug 23, 2022

Distortion-Aware Network Pruning and Feature Reuse for Real-time Video Segmentation

Jun 20, 2022

Meta-StyleSpeech : Multi-Speaker Adaptive Text-to-Speech Generation

Jun 16, 2021