"Image": models, code, and papers
Efficient subtyping of ovarian cancer histopathology whole slide images using active sampling in multiple instance learning

Feb 22, 2023
Jack Breen, Katie Allen, Kieran Zucker, Geoff Hall, Nicolas M. Orsi, Nishant Ravikumar

Distribution Normalization: An "Effortless" Test-Time Augmentation for Contrastively Learned Visual-language Models

Feb 22, 2023
Yifei Zhou, Juntao Ren, Fengyu Li, Ramin Zabih, Ser-Nam Lim

Semi-Supervised Learning with Pseudo-Negative Labels for Image Classification

Jan 10, 2023
Hao Xu, Hui Xiao, Huazheng Hao, Li Dong, Xiaojie Qiu, Chengbin Peng

Breaking Free from Fusion Rule: A Fully Semantic-driven Infrared and Visible Image Fusion

Nov 22, 2022
Yuhui Wu, Zhu Liu, Jinyuan Liu, Xin Fan, Risheng Liu

Global Meets Local: Effective Multi-Label Image Classification via Category-Aware Weak Supervision

Nov 23, 2022
Jiawei Zhan, Jun Liu, Wei Tang, Guannan Jiang, Xi Wang, Bin-Bin Gao, Tianliang Zhang, Wenlong Wu, Wei Zhang, Chengjie Wang, Yuan Xie

Take Me Home: Reversing Distribution Shifts using Reinforcement Learning

Feb 20, 2023
Vivian Lin, Kuk Jang, Souradeep Dutta, Michele Caprio, Oleg Sokolsky, Insup Lee

Zero-shot-Learning Cross-Modality Data Translation Through Mutual Information Guided Stochastic Diffusion

Jan 31, 2023
Zihao Wang, Yingyu Yang, Maxime Sermesant, Hervé Delingette, Ona Wu

Mind the (optimality) Gap: A Gap-Aware Learning Rate Scheduler for Adversarial Nets

Jan 31, 2023
Hussein Hazimeh, Natalia Ponomareva

Surround-View Vision-based 3D Detection for Autonomous Driving: A Survey

Feb 13, 2023
Apoorv Singh, Varun Bankiti

Paparazzi: A Deep Dive into the Capabilities of Language and Vision Models for Grounding Viewpoint Descriptions

Feb 13, 2023
Henrik Voigt, Jan Hombeck, Monique Meuschke, Kai Lawonn, Sina Zarrieß
