Zhenqiang Li

Sat2Scene: 3D Urban Scene Generation from Satellite Images with Diffusion

Jan 19, 2024
Zuoyue Li, Zhenqiang Li, Zhaopeng Cui, Marc Pollefeys, Martin R. Oswald

Surgical tool classification and localization: results and methods from the MICCAI 2022 SurgToolLoc challenge

May 11, 2023
Aneeq Zia, Kiran Bhattacharyya, Xi Liu, Max Berniker, Ziheng Wang, Rogerio Nespolo, Satoshi Kondo, Satoshi Kasai, Kousuke Hirasawa, Bo Liu, David Austin, Yiheng Wang, Michal Futrega, Jean-Francois Puget, Zhenqiang Li, Yoichi Sato, Ryo Fujii, Ryo Hachiuma, Mana Masuda, Hideo Saito, An Wang, Mengya Xu, Mobarakol Islam, Long Bai, Winnie Pang, Hongliang Ren, Chinedu Nwoye, Luca Sestini, Nicolas Padoy, Maximilian Nielsen, Samuel Schüttler, Thilo Sentker, Hümeyra Husseini, Ivo Baltruschat, Rüdiger Schmitz, René Werner, Aleksandr Matsun, Mugariya Farooq, Numan Saaed, Jose Renato Restom Viera, Mohammad Yaqub, Neil Getty, Fangfang Xia, Zixuan Zhao, Xiaotian Duan, Xing Yao, Ange Lou, Hao Yang, Jintong Han, Jack Noble, Jie Ying Wu, Tamer Abdulbaki Alshirbaji, Nour Aldeen Jalal, Herag Arabian, Ning Ding, Knut Moeller, Weiliang Chen, Quan He, Lena Maier-Hein, Danail Stoyanov, Stefanie Speidel, Anthony Jarc

Surgical Skill Assessment via Video Semantic Aggregation

Aug 04, 2022
Zhenqiang Li, Lin Gu, Weimin Wang, Ryosuke Nakamura, Yoichi Sato

CompNVS: Novel View Synthesis with Scene Completion

Jul 23, 2022
Zuoyue Li, Tianxing Fan, Zhenqiang Li, Zhaopeng Cui, Yoichi Sato, Marc Pollefeys, Martin R. Oswald

Ego4D: Around the World in 3,000 Hours of Egocentric Video

Oct 13, 2021
Kristen Grauman, Andrew Westbury, Eugene Byrne, Zachary Chavis, Antonino Furnari, Rohit Girdhar, Jackson Hamburger, Hao Jiang, Miao Liu, Xingyu Liu, Miguel Martin, Tushar Nagarajan, Ilija Radosavovic, Santhosh Kumar Ramakrishnan, Fiona Ryan, Jayant Sharma, Michael Wray, Mengmeng Xu, Eric Zhongcong Xu, Chen Zhao, Siddhant Bansal, Dhruv Batra, Vincent Cartillier, Sean Crane, Tien Do, Morrie Doulaty, Akshay Erapalli, Christoph Feichtenhofer, Adriano Fragomeni, Qichen Fu, Christian Fuegen, Abrham Gebreselasie, Cristina Gonzalez, James Hillis, Xuhua Huang, Yifei Huang, Wenqi Jia, Weslie Khoo, Jachym Kolar, Satwik Kottur, Anurag Kumar, Federico Landini, Chao Li, Yanghao Li, Zhenqiang Li, Karttikeya Mangalam, Raghava Modhugu, Jonathan Munro, Tullie Murrell, Takumi Nishiyasu, Will Price, Paola Ruiz Puentes, Merey Ramazanova, Leda Sari, Kiran Somasundaram, Audrey Southerland, Yusuke Sugano, Ruijie Tao, Minh Vo, Yuchen Wang, Xindi Wu, Takuma Yagi, Yunyi Zhu, Pablo Arbelaez, David Crandall, Dima Damen, Giovanni Maria Farinella, Bernard Ghanem, Vamsi Krishna Ithapu, C. V. Jawahar, Hanbyul Joo, Kris Kitani, Haizhou Li, Richard Newcombe, Aude Oliva, Hyun Soo Park, James M. Rehg, Yoichi Sato, Jianbo Shi, Mike Zheng Shou, Antonio Torralba, Lorenzo Torresani, Mingfei Yan, Jitendra Malik

Spatio-Temporal Perturbations for Video Attribution

Sep 01, 2021
Zhenqiang Li, Weimin Wang, Zuoyue Li, Yifei Huang, Yoichi Sato

A Comprehensive Study on Visual Explanations for Spatio-temporal Networks

May 01, 2020
Zhenqiang Li, Weimin Wang, Zuoyue Li, Yifei Huang, Yoichi Sato

Manipulation-skill Assessment from Videos with Spatial Attention Network

Jan 09, 2019
Zhenqiang Li, Yifei Huang, Minjie Cai, Yoichi Sato

Mutual Context Network for Jointly Estimating Egocentric Gaze and Actions

Jan 07, 2019
Yifei Huang, Minjie Cai, Zhenqiang Li, Yoichi Sato

Predicting Gaze in Egocentric Video by Learning Task-dependent Attention Transition

Jul 20, 2018
Yifei Huang, Minjie Cai, Zhenqiang Li, Yoichi Sato
