Kyle Olszewski

AutoDecoding Latent 3D Diffusion Models

Jul 07, 2023
Evangelos Ntavelis, Aliaksandr Siarohin, Kyle Olszewski, Chaoyang Wang, Luc Van Gool, Sergey Tulyakov

Unsupervised Volumetric Animation

Jan 26, 2023
Aliaksandr Siarohin, Willi Menapace, Ivan Skorokhodov, Kyle Olszewski, Jian Ren, Hsin-Ying Lee, Menglei Chai, Sergey Tulyakov

ScanEnts3D: Exploiting Phrase-to-3D-Object Correspondences for Improved Visio-Linguistic Models in 3D Scenes

Dec 12, 2022
Ahmed Abdelreheem, Kyle Olszewski, Hsin-Ying Lee, Peter Wonka, Panos Achlioptas

Cross-Modal 3D Shape Generation and Manipulation

Jul 24, 2022
Zezhou Cheng, Menglei Chai, Jian Ren, Hsin-Ying Lee, Kyle Olszewski, Zeng Huang, Subhransu Maji, Sergey Tulyakov

Discrete Contrastive Diffusion for Cross-Modal and Conditional Generation

Jun 15, 2022
Ye Zhu, Yu Wu, Kyle Olszewski, Jian Ren, Sergey Tulyakov, Yan Yan

Control-NeRF: Editable Feature Volumes for Scene Rendering and Manipulation

Apr 22, 2022
Verica Lazova, Vladimir Guzov, Kyle Olszewski, Sergey Tulyakov, Gerard Pons-Moll

Quantized GAN for Complex Music Generation from Dance Videos

Apr 01, 2022
Ye Zhu, Kyle Olszewski, Yu Wu, Panos Achlioptas, Menglei Chai, Yan Yan, Sergey Tulyakov

R2L: Distilling Neural Radiance Field to Neural Light Field for Efficient Novel View Synthesis

Mar 31, 2022
Huan Wang, Jian Ren, Zeng Huang, Kyle Olszewski, Menglei Chai, Yun Fu, Sergey Tulyakov

Show Me What and Tell Me How: Video Synthesis via Multimodal Conditioning

Mar 04, 2022
Ligong Han, Jian Ren, Hsin-Ying Lee, Francesco Barbieri, Kyle Olszewski, Shervin Minaee, Dimitris Metaxas, Sergey Tulyakov

NeROIC: Neural Rendering of Objects from Online Image Collections

Jan 07, 2022
Zhengfei Kuang, Kyle Olszewski, Menglei Chai, Zeng Huang, Panos Achlioptas, Sergey Tulyakov
