Yusuke Sugano

Domain-Adaptive Full-Face Gaze Estimation via Novel-View-Synthesis and Feature Disentanglement

May 25, 2023
Jiawei Qin, Takuru Shimoyama, Xucong Zhang, Yusuke Sugano

Rotation-Constrained Cross-View Feature Fusion for Multi-View Appearance-based Gaze Estimation

May 22, 2023
Yoichiro Hisadome, Tianyi Wu, Jiawei Qin, Yusuke Sugano

Learning Video-independent Eye Contact Segmentation from In-the-Wild Videos

Oct 05, 2022
Tianyi Wu, Yusuke Sugano

Learning-by-Novel-View-Synthesis for Full-Face Appearance-based 3D Gaze Estimation

Jan 23, 2022
Jiawei Qin, Takuru Shimoyama, Yusuke Sugano

Stacked Temporal Attention: Improving First-person Action Recognition by Emphasizing Discriminative Clips

Dec 02, 2021
Lijin Yang, Yifei Huang, Yusuke Sugano, Yoichi Sato

Ego4D: Around the World in 3,000 Hours of Egocentric Video

Oct 13, 2021
Kristen Grauman, Andrew Westbury, Eugene Byrne, Zachary Chavis, Antonino Furnari, Rohit Girdhar, Jackson Hamburger, Hao Jiang, Miao Liu, Xingyu Liu, Miguel Martin, Tushar Nagarajan, Ilija Radosavovic, Santhosh Kumar Ramakrishnan, Fiona Ryan, Jayant Sharma, Michael Wray, Mengmeng Xu, Eric Zhongcong Xu, Chen Zhao, Siddhant Bansal, Dhruv Batra, Vincent Cartillier, Sean Crane, Tien Do, Morrie Doulaty, Akshay Erapalli, Christoph Feichtenhofer, Adriano Fragomeni, Qichen Fu, Christian Fuegen, Abrham Gebreselasie, Cristina Gonzalez, James Hillis, Xuhua Huang, Yifei Huang, Wenqi Jia, Weslie Khoo, Jachym Kolar, Satwik Kottur, Anurag Kumar, Federico Landini, Chao Li, Yanghao Li, Zhenqiang Li, Karttikeya Mangalam, Raghava Modhugu, Jonathan Munro, Tullie Murrell, Takumi Nishiyasu, Will Price, Paola Ruiz Puentes, Merey Ramazanova, Leda Sari, Kiran Somasundaram, Audrey Southerland, Yusuke Sugano, Ruijie Tao, Minh Vo, Yuchen Wang, Xindi Wu, Takuma Yagi, Yunyi Zhu, Pablo Arbelaez, David Crandall, Dima Damen, Giovanni Maria Farinella, Bernard Ghanem, Vamsi Krishna Ithapu, C. V. Jawahar, Hanbyul Joo, Kris Kitani, Haizhou Li, Richard Newcombe, Aude Oliva, Hyun Soo Park, James M. Rehg, Yoichi Sato, Jianbo Shi, Mike Zheng Shou, Antonio Torralba, Lorenzo Torresani, Mingfei Yan, Jitendra Malik

EPIC-KITCHENS-100 Unsupervised Domain Adaptation Challenge for Action Recognition 2021: Team M3EM Technical Report

Jul 01, 2021
Lijin Yang, Yifei Huang, Yusuke Sugano, Yoichi Sato

DRIV100: In-The-Wild Multi-Domain Dataset and Evaluation for Real-World Domain Adaptation of Semantic Segmentation

Feb 25, 2021
Haruya Sakashita, Christoph Flothow, Noriko Takemura, Yusuke Sugano
