Matt Feiszli

ICON: Incremental CONfidence for Joint Pose and Radiance Field Optimization

Jan 17, 2024
Weiyao Wang, Pierre Gleize, Hao Tang, Xingyu Chen, Kevin J Liang, Matt Feiszli

NOVIS: A Case for End-to-End Near-Online Video Instance Segmentation

Aug 29, 2023
Tim Meinhardt, Matt Feiszli, Yuchen Fan, Laura Leal-Taixe, Rakesh Ranjan

SiLK -- Simple Learned Keypoints

Apr 12, 2023
Pierre Gleize, Weiyao Wang, Matt Feiszli

MINOTAUR: Multi-task Video Grounding From Multimodal Queries

Feb 16, 2023
Raghav Goyal, Effrosyni Mavroudi, Xitong Yang, Sainbayar Sukhbaatar, Leonid Sigal, Matt Feiszli, Lorenzo Torresani, Du Tran

EgoTracks: A Long-term Egocentric Visual Object Tracking Dataset

Jan 11, 2023
Hao Tang, Kevin Liang, Kristen Grauman, Matt Feiszli, Weiyao Wang

Open-World Instance Segmentation: Exploiting Pseudo Ground Truth From Learned Pairwise Affinity

Apr 12, 2022
Weiyao Wang, Matt Feiszli, Heng Wang, Jitendra Malik, Du Tran

GEB+: A benchmark for generic event boundary captioning, grounding and text-based retrieval

Apr 10, 2022
Yuxuan Wang, Difei Gao, Licheng Yu, Stan Weixian Lei, Matt Feiszli, Mike Zheng Shou

PyTorchVideo: A Deep Learning Library for Video Understanding

Nov 18, 2021
Haoqi Fan, Tullie Murrell, Heng Wang, Kalyan Vasudev Alwala, Yanghao Li, Yilei Li, Bo Xiong, Nikhila Ravi, Meng Li, Haichuan Yang, Jitendra Malik, Ross Girshick, Matt Feiszli, Aaron Adcock, Wan-Yen Lo, Christoph Feichtenhofer

Searching for Two-Stream Models in Multivariate Space for Video Recognition

Aug 30, 2021
Xinyu Gong, Heng Wang, Zheng Shou, Matt Feiszli, Zhangyang Wang, Zhicheng Yan
