
Minjie Cai

EPIC-KITCHENS-100 Unsupervised Domain Adaptation Challenge for Action Recognition 2022: Team HNU-FPV Technical Report

Jul 07, 2022

NTIRE 2022 Challenge on Efficient Super-Resolution: Methods and Results

May 11, 2022

Uncertainty-Aware Model Adaptation for Unsupervised Cross-Domain Object Detection

Aug 28, 2021

NTIRE 2021 Challenge on Quality Enhancement of Compressed Video: Methods and Results

May 02, 2021

What I See Is What You See: Joint Attention Learning for First and Third Person Video Co-analysis

Apr 16, 2019

Manipulation-skill Assessment from Videos with Spatial Attention Network

Jan 09, 2019

Mutual Context Network for Jointly Estimating Egocentric Gaze and Actions

Jan 07, 2019

Understanding hand-object manipulation by modeling the contextual relationship between actions, grasp types and object attributes

Jul 22, 2018

Predicting Gaze in Egocentric Video by Learning Task-dependent Attention Transition

Jul 20, 2018