Anthony G. Cohn

Exploring the GLIDE model for Human Action-effect Prediction

Aug 01, 2022

Scribble-Supervised Semantic Segmentation by Uncertainty Reduction on Neural Representation and Self-Supervision on Neural Eigenspace

Feb 19, 2021

Defect segmentation: Mapping tunnel lining internal defects with ground penetrating radar data using a convolutional neural network

Mar 29, 2020

Human-like Planning for Reaching in Cluttered Environments

Mar 03, 2020

GPRInvNet: Deep Learning-Based Ground Penetrating Radar Data Inversion for Tunnel Lining

Dec 13, 2019

ViTac: Feature Sharing between Vision and Tactile Sensing for Cloth Texture Recognition

Mar 13, 2018

CLAD: A Complex and Long Activities Dataset with Rich Crowdsourced Annotations

Sep 21, 2017

The STRANDS Project: Long-Term Autonomy in Everyday Environments

Oct 14, 2016

Reasoning with Topological and Directional Spatial Information

Sep 01, 2009