
Triantafyllos Afouras


Ego-Exo4D: Understanding Skilled Human Activity from First- and Third-Person Perspectives

Nov 30, 2023
Kristen Grauman, Andrew Westbury, Lorenzo Torresani, Kris Kitani, Jitendra Malik, Triantafyllos Afouras, Kumar Ashutosh, Vijay Baiyya, Siddhant Bansal, Bikram Boote, Eugene Byrne, Zach Chavis, Joya Chen, Feng Cheng, Fu-Jen Chu, Sean Crane, Avijit Dasgupta, Jing Dong, Maria Escobar, Cristhian Forigua, Abrham Gebreselasie, Sanjay Haresh, Jing Huang, Md Mohaiminul Islam, Suyog Jain, Rawal Khirodkar, Devansh Kukreja, Kevin J Liang, Jia-Wei Liu, Sagnik Majumder, Yongsen Mao, Miguel Martin, Effrosyni Mavroudi, Tushar Nagarajan, Francesco Ragusa, Santhosh Kumar Ramakrishnan, Luigi Seminara, Arjun Somayazulu, Yale Song, Shan Su, Zihui Xue, Edward Zhang, Jinxu Zhang, Angela Castillo, Changan Chen, Xinzhu Fu, Ryosuke Furuta, Cristina Gonzalez, Prince Gupta, Jiabo Hu, Yifei Huang, Yiming Huang, Weslie Khoo, Anush Kumar, Robert Kuo, Sach Lakhavani, Miao Liu, Mi Luo, Zhengyi Luo, Brighid Meredith, Austin Miller, Oluwatumininu Oguntola, Xiaqing Pan, Penny Peng, Shraman Pramanick, Merey Ramazanova, Fiona Ryan, Wei Shan, Kiran Somasundaram, Chenan Song, Audrey Southerland, Masatoshi Tateno, Huiyu Wang, Yuchen Wang, Takuma Yagi, Mingfei Yan, Xitong Yang, Zecheng Yu, Shengxin Cindy Zha, Chen Zhao, Ziwei Zhao, Zhifan Zhu, Jeff Zhuo, Pablo Arbelaez, Gedas Bertasius, David Crandall, Dima Damen, Jakob Engel, Giovanni Maria Farinella, Antonino Furnari, Bernard Ghanem, Judy Hoffman, C. V. Jawahar, Richard Newcombe, Hyun Soo Park, James M. Rehg, Yoichi Sato, Manolis Savva, Jianbo Shi, Mike Zheng Shou, Michael Wray


Video-Mined Task Graphs for Keystep Recognition in Instructional Videos

Jul 17, 2023
Kumar Ashutosh, Santhosh Kumar Ramakrishnan, Triantafyllos Afouras, Kristen Grauman


Learning to Ground Instructional Articles in Videos through Narrations

Jun 06, 2023
Effrosyni Mavroudi, Triantafyllos Afouras, Lorenzo Torresani


Scaling up sign spotting through sign language dictionaries

May 09, 2022
Gül Varol, Liliane Momeni, Samuel Albanie, Triantafyllos Afouras, Andrew Zisserman


Audio-Visual Synchronisation in the wild

Dec 08, 2021
Honglie Chen, Weidi Xie, Triantafyllos Afouras, Arsha Nagrani, Andrea Vedaldi, Andrew Zisserman


BBC-Oxford British Sign Language Dataset

Nov 05, 2021
Samuel Albanie, Gül Varol, Liliane Momeni, Hannah Bull, Triantafyllos Afouras, Himel Chowdhury, Neil Fox, Bencie Woll, Rob Cooper, Andrew McParland, Andrew Zisserman


Visual Keyword Spotting with Attention

Oct 29, 2021
K R Prajwal, Liliane Momeni, Triantafyllos Afouras, Andrew Zisserman


Sub-word Level Lip Reading With Visual Attention

Oct 14, 2021
K R Prajwal, Triantafyllos Afouras, Andrew Zisserman


Aligning Subtitles in Sign Language Videos

May 06, 2021
Hannah Bull, Triantafyllos Afouras, Gül Varol, Samuel Albanie, Liliane Momeni, Andrew Zisserman


Self-supervised object detection from audio-visual correspondence

Apr 13, 2021
Triantafyllos Afouras, Yuki M. Asano, Francois Fagan, Andrea Vedaldi, Florian Metze
