"Image": models, code, and papers

Gated Hierarchical Attention for Image Captioning

Oct 31, 2018
Qingzhong Wang, Antoni B. Chan

Deep Clustering Activation Maps for Emphysema Subtyping

Jun 01, 2021
Weiyi Xie, Colin Jacobs, Bram van Ginneken

Real-World Super-Resolution of Face-Images from Surveillance Cameras

Feb 05, 2021
Andreas Aakerberg, Kamal Nasrollahi, Thomas B. Moeslund

Object Localization Through a Single Multiple-Model Convolutional Neural Network with a Specific Training Approach

Mar 24, 2021
Faraz Lotfi, Farnoosh Faraji, Hamid D. Taghirad

The Invertible U-Net for Optical-Flow-free Video Interframe Generation

Mar 17, 2021
Saem Park, Donghun Han, Nojun Kwak

Joining datasets via data augmentation in the label space for neural networks

Jun 17, 2021
Jake Zhao, Mingfeng Ou, Linji Xue, Yunkai Cui, Sai Wu, Gang Chen

Hidden Convexity of Wasserstein GANs: Interpretable Generative Models with Closed-Form Solutions

Jul 12, 2021
Arda Sahiner, Tolga Ergen, Batu Ozturkler, Burak Bartan, John Pauly, Morteza Mardani, Mert Pilanci

RackLay: Multi-Layer Layout Estimation for Warehouse Racks

Mar 17, 2021
Meher Shashwat Nigam, Avinash Prabhu, Anurag Sahu, Puru Gupta, Tanvi Karandikar, N. Sai Shankar, Ravi Kiran Sarvadevabhatla, K. Madhava Krishna

End-to-end Multi-modal Video Temporal Grounding

Jul 12, 2021
Yi-Wen Chen, Yi-Hsuan Tsai, Ming-Hsuan Yang

On Interaction Between Augmentations and Corruptions in Natural Corruption Robustness

Feb 22, 2021
Eric Mintun, Alexander Kirillov, Saining Xie
