Zhenqiang Li

Sat2Scene: 3D Urban Scene Generation from Satellite Images with Diffusion

Jan 19, 2024

Surgical tool classification and localization: results and methods from the MICCAI 2022 SurgToolLoc challenge

May 11, 2023

Surgical Skill Assessment via Video Semantic Aggregation

Aug 04, 2022

CompNVS: Novel View Synthesis with Scene Completion

Jul 23, 2022

Ego4D: Around the World in 3,000 Hours of Egocentric Video

Oct 13, 2021

Spatio-Temporal Perturbations for Video Attribution

Sep 01, 2021

A Comprehensive Study on Visual Explanations for Spatio-temporal Networks

May 01, 2020

Manipulation-skill Assessment from Videos with Spatial Attention Network

Jan 09, 2019

Mutual Context Network for Jointly Estimating Egocentric Gaze and Actions

Jan 07, 2019

Predicting Gaze in Egocentric Video by Learning Task-dependent Attention Transition

Jul 20, 2018