Shuangkang Fang
Text-driven Editing of 3D Scenes without Retraining

Sep 10, 2023
Shuangkang Fang, Yufeng Wang, Yi Yang, Yi-Hsuan Tsai, Wenrui Ding, Ming-Hsuan Yang, Shuchang Zhou


PVD-AL: Progressive Volume Distillation with Active Learning for Efficient Conversion Between Different NeRF Architectures

Apr 08, 2023
Shuangkang Fang, Yufeng Wang, Yi Yang, Weixin Xu, Heng Wang, Wenrui Ding, Shuchang Zhou


One is All: Bridging the Gap Between Neural Radiance Fields Architectures with Progressive Volume Distillation

Nov 30, 2022
Shuangkang Fang, Weixin Xu, Heng Wang, Yi Yang, Yufeng Wang, Shuchang Zhou


Arch-Net: Model Distillation for Architecture Agnostic Model Deployment

Nov 01, 2021
Weixin Xu, Zipeng Feng, Shuangkang Fang, Song Yuan, Yi Yang, Shuchang Zhou
