"Image": models, code, and papers

SinNeRF: Training Neural Radiance Fields on Complex Scenes from a Single Image

Apr 02, 2022
Dejia Xu, Yifan Jiang, Peihao Wang, Zhiwen Fan, Humphrey Shi, Zhangyang Wang

Edge Coverage Path Planning for Robot Mowing

Sep 12, 2022
Zhaofeng Tian, Weisong Shi

Probing Contextual Diversity for Dense Out-of-Distribution Detection

Aug 30, 2022
Silvio Galesso, Maria Alejandra Bravo, Mehdi Naouar, Thomas Brox

Improving the Accuracy and Robustness of CNNs Using a Deep CCA Neural Data Regularizer

Sep 06, 2022
Cassidy Pirlot, Richard C. Gerum, Cory Efird, Joel Zylberberg, Alona Fyshe

Reconstructing Action-Conditioned Human-Object Interactions Using Commonsense Knowledge Priors

Sep 06, 2022
Xi Wang, Gen Li, Yen-Ling Kuo, Muhammed Kocabas, Emre Aksan, Otmar Hilliges

Semantic decoupled representation learning for remote sensing image change detection

Jan 15, 2022
Hao Chen, Yifan Zao, Liqin Liu, Song Chen, Zhenwei Shi

Hyperspectral Image Super-resolution with Deep Priors and Degradation Model Inversion

Jan 24, 2022
Xiuheng Wang, Jie Chen, Cédric Richard

E Pluribus Unum Interpretable Convolutional Neural Networks

Aug 10, 2022
George Dimas, Eirini Cholopoulou, Dimitris K. Iakovidis

What is Where by Looking: Weakly-Supervised Open-World Phrase-Grounding without Text Inputs

Jun 27, 2022
Tal Shaharabany, Yoad Tewel, Lior Wolf

Automated Defect Recognition of Castings defects using Neural Networks

Sep 06, 2022
Alberto García-Pérez, María José Gómez-Silva, Arturo de la Escalera