ComQA: Compositional Question Answering via Hierarchical Graph Neural Networks

Jan 16, 2021
Bingning Wang, Ting Yao, Weipeng Chen, Jingfang Xu, Xiaochuan Wang

* Accepted by WWW2021 

Joint Contrastive Learning with Infinite Possibilities

Oct 10, 2020
Qi Cai, Yu Wang, Yingwei Pan, Ting Yao, Tao Mei

* NeurIPS 2020 Spotlight; Code is publicly available at: https://github.com/caiqi/Joint-Contrastive-Learning 

Learning to Localize Actions from Moments

Aug 31, 2020
Fuchen Long, Ting Yao, Zhaofan Qiu, Xinmei Tian, Jiebo Luo, Tao Mei

* ECCV 2020 Oral; The source code and data are available at: https://github.com/FuchenUSTC/AherNet 

SeCo: Exploring Sequence Supervision for Unsupervised Representation Learning

Aug 03, 2020
Ting Yao, Yiheng Zhang, Zhaofan Qiu, Yingwei Pan, Tao Mei


Pre-training for Video Captioning Challenge 2020 Summary

Jul 27, 2020
Yingwei Pan, Jun Xu, Yehao Li, Ting Yao, Tao Mei


Single Shot Video Object Detector

Jul 07, 2020
Jiajun Deng, Yingwei Pan, Ting Yao, Wengang Zhou, Houqiang Li, Tao Mei

* Accepted by IEEE Transactions on Multimedia; The code is available at: https://github.com/ddjiajun/SSVD 

Auto-captions on GIF: A Large-scale Video-sentence Dataset for Vision-language Pre-training

Jul 05, 2020
Yingwei Pan, Yehao Li, Jianjie Luo, Jun Xu, Ting Yao, Tao Mei

* The Auto-captions on GIF dataset is available at: http://www.auto-video-captions.top/2020/dataset 

ReCO: A Large Scale Chinese Reading Comprehension Dataset on Opinion

Jun 22, 2020
Bingning Wang, Ting Yao, Qi Zhang, Jingfang Xu, Xiaochuan Wang

* AAAI-2020 camera ready 

Learning a Unified Sample Weighting Network for Object Detection

Jun 14, 2020
Qi Cai, Yingwei Pan, Yu Wang, Jingen Liu, Ting Yao, Tao Mei

* CVPR 2020; The source code and model are publicly available at: https://github.com/caiqi/sample-weighting-network 

Transferring and Regularizing Prediction for Semantic Segmentation

Jun 11, 2020
Yiheng Zhang, Zhaofan Qiu, Ting Yao, Chong-Wah Ngo, Dong Liu, Tao Mei

* CVPR 2020 

Exploring Category-Agnostic Clusters for Open-Set Domain Adaptation

Jun 11, 2020
Yingwei Pan, Ting Yao, Yehao Li, Chong-Wah Ngo, Tao Mei

* CVPR 2020 

A Self-Training Method for Machine Reading Comprehension with Soft Evidence Extraction

May 11, 2020
Yilin Niu, Fangkai Jiao, Mantong Zhou, Ting Yao, Jingfang Xu, Minlie Huang

* 12 pages, accepted by ACL2020 

X-Linear Attention Networks for Image Captioning

Mar 31, 2020
Yingwei Pan, Ting Yao, Yehao Li, Tao Mei

* CVPR 2020; The source code and model are publicly available at: https://github.com/Panda-Peter/image-captioning 

Long Short-Term Relation Networks for Video Action Detection

Mar 31, 2020
Dong Li, Ting Yao, Zhaofan Qiu, Houqiang Li, Tao Mei

* Accepted as a full paper for ACMMM 2019 

Vision and Language: from Visual Perception to Content Creation

Dec 26, 2019
Tao Mei, Wei Zhang, Ting Yao


Multi-Source Domain Adaptation and Semi-Supervised Domain Adaptation with Focus on Visual Domain Adaptation Challenge 2019

Oct 14, 2019
Yingwei Pan, Yehao Li, Qi Cai, Yang Chen, Ting Yao

* Rank 1 in Multi-Source Domain Adaptation of Visual Domain Adaptation Challenge (VisDA-2019). Source code of each task: https://github.com/Panda-Peter/visda2019-multisource and https://github.com/Panda-Peter/visda2019-semisupervised 

Scheduled Differentiable Architecture Search for Visual Recognition

Sep 23, 2019
Zhaofan Qiu, Ting Yao, Yiheng Zhang, Yongdong Zhang, Tao Mei


Hierarchy Parsing for Image Captioning

Sep 10, 2019
Ting Yao, Yingwei Pan, Yehao Li, Tao Mei

* ICCV 2019 

Deep Metric Learning with Density Adaptivity

Sep 09, 2019
Yehao Li, Ting Yao, Yingwei Pan, Hongyang Chao, Tao Mei

* Accepted by IEEE Transactions on Multimedia 

Gaussian Temporal Awareness Networks for Action Localization

Sep 09, 2019
Fuchen Long, Ting Yao, Zhaofan Qiu, Xinmei Tian, Jiebo Luo, Tao Mei

* CVPR 2019 Oral 

Adversarial Examples with Difficult Common Words for Paraphrase Identification

Sep 06, 2019
Zhouxing Shi, Minlie Huang, Ting Yao, Jingfang Xu


Customizable Architecture Search for Semantic Segmentation

Aug 26, 2019
Yiheng Zhang, Zhaofan Qiu, Jingen Liu, Ting Yao, Dong Liu, Tao Mei

* CVPR 2019 

Mocycle-GAN: Unpaired Video-to-Video Translation

Aug 26, 2019
Yang Chen, Yingwei Pan, Ting Yao, Xinmei Tian, Tao Mei

* Accepted as a full paper for ACMMM 2019 

Relation Distillation Networks for Video Object Detection

Aug 26, 2019
Jiajun Deng, Yingwei Pan, Ting Yao, Wengang Zhou, Houqiang Li, Tao Mei

* ICCV 2019 

daBNN: A Super Fast Inference Framework for Binary Neural Networks on ARM Devices

Aug 16, 2019
Jianhao Zhang, Yingwei Pan, Ting Yao, He Zhao, Tao Mei

* Accepted by 2019 ACMMM Open Source Software Competition. Source code: https://github.com/JDAI-CV/dabnn 

Convolutional Auto-encoding of Sentence Topics for Image Paragraph Generation

Aug 01, 2019
Jing Wang, Yingwei Pan, Ting Yao, Jinhui Tang, Tao Mei

* IJCAI 2019 

vireoJD-MM at Activity Detection in Extended Videos

Jun 20, 2019
Fuchen Long, Qi Cai, Zhaofan Qiu, Zhijian Hou, Yingwei Pan, Ting Yao, Chong-Wah Ngo

