"Image": models, code, and papers

Modeling 3D cardiac contraction and relaxation with point cloud deformation networks

Jul 20, 2023
Marcel Beetz, Abhirup Banerjee, Vicente Grau

Disentangled Pre-training for Image Matting

Apr 03, 2023
Yanda Li, Zilong Huang, Gang Yu, Ling Chen, Yunchao Wei, Jianbo Jiao

Marginal Thresholding in Noisy Image Segmentation

May 04, 2023
Marcus Nordström, Henrik Hult, Atsuto Maki

Collaborative Score Distillation for Consistent Visual Synthesis

Jul 04, 2023
Subin Kim, Kyungmin Lee, June Suk Choi, Jongheon Jeong, Kihyuk Sohn, Jinwoo Shin

Applying a Color Palette with Local Control using Diffusion Models

Jul 13, 2023
Vaibhav Vavilala, David Forsyth

Bidirectional Temporal Diffusion Model for Temporally Consistent Human Animation

Jul 02, 2023
Tserendorj Adiya, Sanghun Kim, Jung Eun Lee, Jae Shin Yoon, Hwasup Lim

SAVE: Spectral-Shift-Aware Adaptation of Image Diffusion Models for Text-guided Video Editing

May 30, 2023
Nazmul Karim, Umar Khalid, Mohsen Joneidi, Chen Chen, Nazanin Rahnavard

Low-Light Image Enhancement via Structure Modeling and Guidance

May 10, 2023
Xiaogang Xu, Ruixing Wang, Jiangbo Lu

CLIPA-v2: Scaling CLIP Training with 81.1% Zero-shot ImageNet Accuracy within a $10,000 Budget; An Extra $4,000 Unlocks 81.8% Accuracy

Jun 27, 2023
Xianhang Li, Zeyu Wang, Cihang Xie

Unsupervised Spectral Demosaicing with Lightweight Spectral Attention Networks

Jul 05, 2023
Kai Feng, Yongqiang Zhao, Seong G. Kong, Haijin Zeng
