"Image": models, code, and papers

AOSLO-net: A deep learning-based method for automatic segmentation of retinal microaneurysms from adaptive optics scanning laser ophthalmoscope images

Jun 25, 2021
Qian Zhang, Konstantina Sampani, Mengjia Xu, Shengze Cai, Yixiang Deng, He Li, Jennifer K. Sun, George Em Karniadakis

Protecting Intellectual Property of Generative Adversarial Networks from Ambiguity Attack

Feb 08, 2021
Ding Sheng Ong, Chee Seng Chan, Kam Woh Ng, Lixin Fan, Qiang Yang

Soccer Event Detection Using Deep Learning

Feb 08, 2021
Ali Karimi, Ramin Toosi, Mohammad Ali Akhaee

External Prior Guided Internal Prior Learning for Real-World Noisy Image Denoising

Oct 15, 2018
Jun Xu, Lei Zhang, David Zhang

Using Low-rank Representation of Abundance Maps and Nonnegative Tensor Factorization for Hyperspectral Nonlinear Unmixing

Mar 30, 2021
Lianru Gao, Zhicheng Wang, Lina Zhuang, Haoyang Yu, Bing Zhang, Jocelyn Chanussot

Class Introspection: A Novel Technique for Detecting Unlabeled Subclasses by Leveraging Classifier Explainability Methods

Jul 04, 2021
Patrick Kage, Pavlos Andreadis

Visual-Tactile Cross-Modal Data Generation using Residue-Fusion GAN with Feature-Matching and Perceptual Losses

Jul 12, 2021
Shaoyu Cai, Kening Zhu, Yuki Ban, Takuji Narumi

Adversarial Examples Make Strong Poisons

Jun 21, 2021
Liam Fowl, Micah Goldblum, Ping-yeh Chiang, Jonas Geiping, Wojtek Czaja, Tom Goldstein

Regularized Evolution for Image Classifier Architecture Search

Oct 26, 2018
Esteban Real, Alok Aggarwal, Yanping Huang, Quoc V. Le

Look Twice: A Computational Model of Return Fixations across Tasks and Species

Jan 05, 2021
Mengmi Zhang, Will Xiao, Olivia Rose, Katarina Bendtz, Margaret Livingstone, Carlos Ponce, Gabriel Kreiman
