Shih-Fu Chang

Multimodal Clustering Networks for Self-supervised Learning from Unlabeled Videos
May 05, 2021
Brian Chen, Andrew Rouditchenko, Kevin Duarte, Hilde Kuehne, Samuel Thomas, Angie Boggust, Rameswar Panda, Brian Kingsbury, Rogerio Feris, David Harwath, James Glass, Michael Picheny, Shih-Fu Chang

VATT: Transformers for Multimodal Self-Supervised Learning from Raw Video, Audio and Text
Apr 22, 2021
Hassan Akbari, Liangzhe Yuan, Rui Qian, Wei-Hong Chuang, Shih-Fu Chang, Yin Cui, Boqing Gong

Meta Faster R-CNN: Towards Accurate Few-Shot Object Detection with Attentive Feature Alignment
Apr 15, 2021
Guangxing Han, Shiyuan Huang, Jiawei Ma, Yicheng He, Shih-Fu Chang
* 14 pages

Co-Grounding Networks with Semantic Attention for Referring Expression Comprehension in Videos
Mar 23, 2021
Sijie Song, Xudong Lin, Jiaying Liu, Zongming Guo, Shih-Fu Chang
* Accepted to CVPR 2021. The project page is at https://sijiesong.github.io/co-grounding

Vx2Text: End-to-End Learning of Video-Based Text Generation From Multimodal Inputs
Jan 29, 2021
Xudong Lin, Gedas Bertasius, Jue Wang, Shih-Fu Chang, Devi Parikh, Lorenzo Torresani
* Work in progress

Task-Adaptive Negative Class Envision for Few-Shot Open-Set Recognition
Dec 24, 2020
Shiyuan Huang, Jiawei Ma, Guangxing Han, Shih-Fu Chang

Open-Vocabulary Object Detection Using Captions
Nov 20, 2020
Alireza Zareian, Kevin Dela Rosa, Derek Hao Hu, Shih-Fu Chang

Neuro-Symbolic Representations for Video Captioning: A Case for Leveraging Inductive Biases for Vision and Language
Nov 18, 2020
Hassan Akbari, Hamid Palangi, Jianwei Yang, Sudha Rao, Asli Celikyilmaz, Roland Fernandez, Paul Smolensky, Jianfeng Gao, Shih-Fu Chang

Weakly-supervised VisualBERT: Pre-training without Parallel Images and Captions
Oct 24, 2020
Liunian Harold Li, Haoxuan You, Zhecan Wang, Alireza Zareian, Shih-Fu Chang, Kai-Wei Chang

Uncertainty-Aware Few-Shot Image Classification
Oct 09, 2020
Zhizheng Zhang, Cuiling Lan, Wenjun Zeng, Zhibo Chen, Shih-Fu Chang

Ref-NMS: Breaking Proposal Bottlenecks in Two-Stage Referring Expression Grounding
Sep 03, 2020
Long Chen, Wenbo Ma, Jun Xiao, Hanwang Zhang, Wei Liu, Shih-Fu Chang

Analogical Reasoning for Visually Grounded Language Acquisition
Jul 22, 2020
Bo Wu, Haoyu Qin, Alireza Zareian, Carl Vondrick, Shih-Fu Chang
* 12 pages

COVID-19 Literature Knowledge Graph Construction and Drug Repurposing Report Generation
Jul 06, 2020
Qingyun Wang, Manling Li, Xuan Wang, Nikolaus Parulian, Guangxing Han, Jiawei Ma, Jingxuan Tu, Ying Lin, Haoran Zhang, Weili Liu, Aabhas Chauhan, Yingjun Guan, Bangzheng Li, Ruisong Li, Xiangchen Song, Heng Ji, Jiawei Han, Shih-Fu Chang, James Pustejovsky, Jasmine Rah, David Liem, Ahmed Elsayed, Martha Palmer, Clare Voss, Cynthia Schneider, Boyan Onyshkevych
* 11 pages, submitted to ACL 2020 Workshop on Natural Language Processing for COVID-19 (NLP-COVID); for resources see http://blender.cs.illinois.edu/covid19/

Learning Visual Commonsense for Robust Scene Graph Generation
Jun 17, 2020
Alireza Zareian, Haoxuan You, Zhecan Wang, Shih-Fu Chang

Deep Learning Guided Building Reconstruction from Satellite Imagery-derived Point Clouds
May 19, 2020
Bo Xu, Xu Zhang, Zhixin Li, Matt Leotta, Shih-Fu Chang, Jie Shan

Cross-media Structured Common Space for Multimedia Event Extraction
May 05, 2020
Manling Li, Alireza Zareian, Qi Zeng, Spencer Whitehead, Di Lu, Heng Ji, Shih-Fu Chang
* Accepted as an oral paper at ACL 2020

Unifying Specialist Image Embedding into Universal Image Embedding
Mar 08, 2020
Yang Feng, Futang Peng, Xu Zhang, Wei Zhu, Shanfeng Zhang, Howard Zhou, Zhen Li, Tom Duerig, Shih-Fu Chang, Jiebo Luo

Training with Streaming Annotation
Feb 11, 2020
Tongtao Zhang, Heng Ji, Shih-Fu Chang, Marjorie Freedman

Weakly Supervised Visual Semantic Parsing
Jan 08, 2020
Alireza Zareian, Svebor Karaman, Shih-Fu Chang

Bridging Knowledge Graphs to Generate Scene Graphs
Jan 07, 2020
Alireza Zareian, Svebor Karaman, Shih-Fu Chang

General Partial Label Learning via Dual Bipartite Graph Autoencoder
Jan 05, 2020
Brian Chen, Bo Wu, Alireza Zareian, Hanwang Zhang, Shih-Fu Chang
* 8 pages

Flow-Distilled IP Two-Stream Networks for Compressed Video Action Recognition
Dec 12, 2019
Shiyuan Huang, Xudong Lin, Svebor Karaman, Shih-Fu Chang

Learning to Learn Words from Narrated Video
Nov 25, 2019
Dídac Surís, Dave Epstein, Heng Ji, Shih-Fu Chang, Carl Vondrick
* 11 pages, 11 figures

LPAT: Learning to Predict Adaptive Threshold for Weakly-supervised Temporal Action Localization
Oct 25, 2019
Xudong Lin, Zheng Shou, Shih-Fu Chang
* Work in progress

Context-Gated Convolution
Oct 22, 2019
Xudong Lin, Lin Ma, Wei Liu, Shih-Fu Chang
* Work in progress