Karren Yang

Hypernetworks for Personalizing ASR to Atypical Speech

Jun 07, 2024

FastSR-NeRF: Improving NeRF Efficiency on Consumer Devices with A Simple Super-Resolution Pipeline

Dec 20, 2023

Novel-View Acoustic Synthesis from 3D Reconstructed Rooms

Oct 23, 2023

Corpus Synthesis for Zero-shot ASR domain Adaptation using Large Language Models

Sep 18, 2023

Text is All You Need: Personalizing ASR Models using Controllable Speech Synthesis

Mar 27, 2023

Defending Multimodal Fusion Models against Single-Source Adversaries

Jun 25, 2022

Audio-Visual Speech Codecs: Rethinking Audio-Visual Speech Enhancement by Re-Synthesis

Mar 31, 2022

Optimal Transport using GANs for Lineage Tracing

Jul 27, 2020

Improved Conditional Flow Models for Molecule to Image Synthesis

Jun 15, 2020

Telling Left from Right: Learning Spatial Correspondence of Sight and Sound

Jun 12, 2020