"Information": models, code, and papers
NashAE: Disentangling Representations through Adversarial Covariance Minimization

Sep 21, 2022
Eric Yeats, Frank Liu, David Womble, Hai Li

Basic Binary Convolution Unit for Binarized Image Restoration Network

Oct 02, 2022
Bin Xia, Yulun Zhang, Yitong Wang, Yapeng Tian, Wenming Yang, Radu Timofte, Luc Van Gool

DARE: A large-scale handwritten date recognition system

Oct 02, 2022
Christian M. Dahl, Torben S. D. Johansen, Emil N. Sørensen, Christian E. Westermann, Simon F. Wittrock

Uncertainty estimation methods for a deep learning model to aid in clinical decision-making -- a clinician's perspective

Oct 02, 2022
Michael Dohopolski, Kai Wang, Biling Wang, Ti Bai, Dan Nguyen, David Sher, Steve Jiang, Jing Wang

Paging with Succinct Predictions

Oct 06, 2022
Antonios Antoniadis, Joan Boyar, Marek Eliáš, Lene M. Favrholdt, Ruben Hoeksma, Kim S. Larsen, Adam Polak, Bertrand Simon

Spotlight: Mobile UI Understanding using Vision-Language Models with a Focus

Sep 29, 2022
Gang Li, Yang Li

Domain-Unified Prompt Representations for Source-Free Domain Generalization

Sep 29, 2022
Hongjing Niu, Hanting Li, Feng Zhao, Bin Li

Distribution Aware Metrics for Conditional Natural Language Generation

Sep 29, 2022
David M Chan, Yiming Ni, David A Ross, Sudheendra Vijayanarasimhan, Austin Myers, John Canny

MOAT: Alternating Mobile Convolution and Attention Brings Strong Vision Models

Oct 04, 2022
Chenglin Yang, Siyuan Qiao, Qihang Yu, Xiaoding Yuan, Yukun Zhu, Alan Yuille, Hartwig Adam, Liang-Chieh Chen

Movement Analytics: Current Status, Application to Manufacturing, and Future Prospects from an AI Perspective

Oct 04, 2022
Peter Baumgartner, Daniel Smith, Mashud Rana, Reena Kapoor, Elena Tartaglia, Andreas Schutt, Ashfaqur Rahman, John Taylor, Simon Dunstall
