"Image": models, code, and papers

PTQD: Accurate Post-Training Quantization for Diffusion Models

May 18, 2023
Yefei He, Luping Liu, Jing Liu, Weijia Wu, Hong Zhou, Bohan Zhuang

Counterfactuals for Design: A Model-Agnostic Method For Design Recommendations

May 18, 2023
Lyle Regenwetter, Yazan Abu Obaideh, Faez Ahmed

Look ATME: The Discriminator Mean Entropy Needs Attention

Apr 18, 2023
Edgardo Solano-Carrillo, Angel Bueno Rodriguez, Borja Carrillo-Perez, Yannik Steiniger, Jannis Stoppe

The Second Monocular Depth Estimation Challenge

Apr 26, 2023
Jaime Spencer, C. Stella Qian, Michaela Trescakova, Chris Russell, Simon Hadfield, Erich W. Graf, Wendy J. Adams, Andrew J. Schofield, James Elder, Richard Bowden, Ali Anwar, Hao Chen, Xiaozhi Chen, Kai Cheng, Yuchao Dai, Huynh Thai Hoa, Sadat Hossain, Jianmian Huang, Mohan Jing, Bo Li, Chao Li, Baojun Li, Zhiwen Liu, Stefano Mattoccia, Siegfried Mercelis, Myungwoo Nam, Matteo Poggi, Xiaohua Qi, Jiahui Ren, Yang Tang, Fabio Tosi, Linh Trinh, S. M. Nadim Uddin, Khan Muhammad Umair, Kaixuan Wang, Yufei Wang, Yixing Wang, Mochu Xiang, Guangkai Xu, Wei Yin, Jun Yu, Qi Zhang, Chaoqiang Zhao

Identity Encoder for Personalized Diffusion

Apr 14, 2023
Yu-Chuan Su, Kelvin C. K. Chan, Yandong Li, Yang Zhao, Han Zhang, Boqing Gong, Huisheng Wang, Xuhui Jia

Multimodal C4: An Open, Billion-scale Corpus of Images Interleaved With Text

Apr 14, 2023
Wanrong Zhu, Jack Hessel, Anas Awadalla, Samir Yitzhak Gadre, Jesse Dodge, Alex Fang, Youngjae Yu, Ludwig Schmidt, William Yang Wang, Yejin Choi

Trash to Treasure: Using text-to-image models to inform the design of physical artefacts

Feb 01, 2023
Amy Smith, Hope Schroeder, Ziv Epstein, Michael Cook, Simon Colton, Andrew Lippman

Learning Monocular Depth in Dynamic Environment via Context-aware Temporal Attention

May 12, 2023
Zizhang Wu, Zhuozheng Li, Zhi-Gang Fan, Yunzhe Wu, Yuanzhu Gan, Jian Pu, Xianzhi Li

NLIP: Noise-robust Language-Image Pre-training

Jan 04, 2023
Runhui Huang, Yanxin Long, Jianhua Han, Hang Xu, Xiwen Liang, Chunjing Xu, Xiaodan Liang

SAM Fails to Segment Anything? -- SAM-Adapter: Adapting SAM in Underperformed Scenes: Camouflage, Shadow, and More

Apr 19, 2023
Tianrun Chen, Lanyun Zhu, Chaotao Ding, Runlong Cao, Yan Wang, Zejian Li, Lingyun Sun, Papa Mao, Ying Zang
