Mohammadreza Zolfaghari

University of Freiburg

CrossCLR: Cross-modal Contrastive Learning For Multi-modal Video Representations

Sep 30, 2021
Mohammadreza Zolfaghari, Yi Zhu, Peter Gehler, Thomas Brox

A Comprehensive Study of Deep Video Action Recognition

Dec 11, 2020
Yi Zhu, Xinyu Li, Chunhui Liu, Mohammadreza Zolfaghari, Yuanjun Xiong, Chongruo Wu, Zhi Zhang, Joseph Tighe, R. Manmatha, Mu Li

COOT: Cooperative Hierarchical Transformer for Video-Text Representation Learning

Nov 01, 2020
Simon Ging, Mohammadreza Zolfaghari, Hamed Pirsiavash, Thomas Brox

Multi-Variate Temporal GAN for Large Scale Video Generation

Apr 04, 2020
Andres Muñoz, Mohammadreza Zolfaghari, Max Argus, Thomas Brox

Learning Representations for Predicting Future Activities

May 09, 2019
Mohammadreza Zolfaghari, Özgün Çiçek, Syed Mohsin Ali, Farzaneh Mahdisoltani, Can Zhang, Thomas Brox

ECO: Efficient Convolutional Network for Online Video Understanding

May 07, 2018
Mohammadreza Zolfaghari, Kamaljeet Singh, Thomas Brox

Orientation-boosted Voxel Nets for 3D Object Recognition

Oct 19, 2017
Nima Sedaghat, Mohammadreza Zolfaghari, Ehsan Amiri, Thomas Brox

Chained Multi-stream Networks Exploiting Pose, Motion, and Appearance for Action Classification and Detection

May 26, 2017
Mohammadreza Zolfaghari, Gabriel L. Oliveira, Nima Sedaghat, Thomas Brox

Hybrid Learning of Optical Flow and Next Frame Prediction to Boost Optical Flow in the Wild

Apr 07, 2017
Nima Sedaghat, Mohammadreza Zolfaghari, Thomas Brox
