Mathew Monfort

Spoken Moments: Learning Joint Audio-Visual Representations from Video Descriptions

May 10, 2021
Mathew Monfort, SouYoung Jin, Alexander Liu, David Harwath, Rogerio Feris, James Glass, Aude Oliva

We Have So Much In Common: Modeling Semantic Relational Set Abstractions in Videos

Aug 12, 2020
Alex Andonian, Camilo Fosco, Mathew Monfort, Allen Lee, Rogerio Feris, Carl Vondrick, Aude Oliva

Multi-Moments in Time: Learning and Interpreting Models for Multi-Action Video Understanding

Nov 04, 2019
Mathew Monfort, Kandan Ramakrishnan, Alex Andonian, Barry A McNamara, Alex Lascelles, Bowen Pan, Quanfu Fan, Dan Gutfreund, Rogerio Feris, Aude Oliva

Reasoning About Human-Object Interactions Through Dual Attention Networks

Sep 10, 2019
Tete Xiao, Quanfu Fan, Dan Gutfreund, Mathew Monfort, Aude Oliva, Bolei Zhou

Multi-Agent Tensor Fusion for Contextual Trajectory Prediction

Apr 09, 2019
Tianyang Zhao, Yifei Xu, Mathew Monfort, Wongun Choi, Chris Baker, Yibiao Zhao, Yizhou Wang, Ying Nian Wu

Moments in Time Dataset: one million videos for event understanding

Jan 09, 2018
Mathew Monfort, Bolei Zhou, Sarah Adel Bargal, Alex Andonian, Tom Yan, Kandan Ramakrishnan, Lisa Brown, Quanfu Fan, Dan Gutfreund, Carl Vondrick, Aude Oliva

End to End Learning for Self-Driving Cars

Apr 25, 2016
Mariusz Bojarski, Davide Del Testa, Daniel Dworakowski, Bernhard Firner, Beat Flepp, Prasoon Goyal, Lawrence D. Jackel, Mathew Monfort, Urs Muller, Jiakai Zhang, Xin Zhang, Jake Zhao, Karol Zieba