
Antonino Furnari

FPV@IPLAB - Department of Mathematics and Computer Science - University of Catania - Italy, Next Vision s.r.l. - Catania - Italy

Synchronization is All You Need: Exocentric-to-Egocentric Transfer for Temporal Action Segmentation with Unlabeled Synchronized Video Pairs

Dec 05, 2023

Are Synthetic Data Useful for Egocentric Hand-Object Interaction Detection? An Investigation and the HOI-Synth Domain Adaptation Benchmark

Dec 05, 2023

Ego-Exo4D: Understanding Skilled Human Activity from First- and Third-Person Perspectives

Nov 30, 2023

ENIGMA-51: Towards a Fine-Grained Understanding of Human-Object Interactions in Industrial Scenarios

Sep 26, 2023

An Outlook into the Future of Egocentric Vision

Aug 14, 2023

Streaming egocentric action anticipation: An evaluation scheme and approach

Jun 29, 2023

Exploiting Multimodal Synthetic Data for Egocentric Human-Object Interaction Detection in an Industrial Scenario

Jun 21, 2023

StillFast: An End-to-End Approach for Short-Term Object Interaction Anticipation

Apr 08, 2023

A Multi Camera Unsupervised Domain Adaptation Pipeline for Object Detection in Cultural Sites through Adversarial Learning and Self-Training

Oct 03, 2022

Visual Object Tracking in First Person Vision

Sep 27, 2022