Victor Escorcia

SOS! Self-supervised Learning Over Sets Of Handled Objects In Egocentric Action Recognition
Apr 10, 2022
Victor Escorcia, Ricardo Guerrero, Xiatian Zhu, Brais Martinez

OWL (Observe, Watch, Listen): Localizing Actions in Egocentric Video via Audiovisual Temporal Context
Feb 14, 2022
Merey Ramazanova, Victor Escorcia, Fabian Caba Heilbron, Chen Zhao, Bernard Ghanem

vCLIMB: A Novel Video Class Incremental Learning Benchmark
Jan 23, 2022
Andrés Villa, Kumail Alhamoud, Juan León Alcázar, Fabian Caba Heilbron, Victor Escorcia, Bernard Ghanem

TNT: Text-Conditioned Network with Transductive Inference for Few-Shot Video Classification
Jun 21, 2021
Andrés Villa, Juan-Manuel Perez-Rua, Vladimir Araujo, Juan Carlos Niebles, Victor Escorcia, Alvaro Soto

Boundary-sensitive Pre-training for Temporal Localization in Videos
Nov 24, 2020
Mengmeng Xu, Juan-Manuel Perez-Rua, Victor Escorcia, Brais Martinez, Xiatian Zhu, Li Zhang, Bernard Ghanem, Tao Xiang

Egocentric Action Recognition by Video Attention and Temporal Context
Jul 03, 2020
Juan-Manuel Perez-Rua, Antoine Toisoul, Brais Martinez, Victor Escorcia, Li Zhang, Xiatian Zhu, Tao Xiang

Knowing What, Where and When to Look: Efficient Video Action Modeling with Attention
Apr 02, 2020
Juan-Manuel Perez-Rua, Brais Martinez, Xiatian Zhu, Antoine Toisoul, Victor Escorcia, Tao Xiang

Temporal Localization of Moments in Video Collections with Natural Language
Jul 30, 2019
Victor Escorcia, Mattia Soldan, Josef Sivic, Bernard Ghanem, Bryan Russell