Qing Ping

Graph-Aware Language Model Pre-Training on a Large Graph Corpus Can Help Multiple Graph Applications

Jun 05, 2023
Han Xie, Da Zheng, Jun Ma, Houyu Zhang, Vassilis N. Ioannidis, Xiang Song, Qing Ping, Sheng Wang, Carl Yang, Yi Xu, Belinda Zeng, Trishul Chilimbi

Understanding and Constructing Latent Modality Structures in Multi-modal Representation Learning

Mar 10, 2023
Qian Jiang, Changyou Chen, Han Zhao, Liqun Chen, Qing Ping, Son Dinh Tran, Yi Xu, Belinda Zeng, Trishul Chilimbi

A Multi-level Alignment Training Scheme for Video-and-Language Grounding

Apr 26, 2022
Yubo Zhang, Feiyang Niu, Qing Ping, Govind Thattai

Privacy Preserving Visual Question Answering

Feb 15, 2022
Cristian-Paul Bara, Qing Ping, Abhinav Mathur, Govind Thattai, Rohith MV, Gaurav S. Sukhatme

A Thousand Words Are Worth More Than a Picture: Natural Language-Centric Outside-Knowledge Visual Question Answering

Jan 14, 2022
Feng Gao, Qing Ping, Govind Thattai, Aishwarya Reganti, Ying Nian Wu, Prem Natarajan

Learning Better Visual Dialog Agents with Pretrained Visual-Linguistic Representation

May 24, 2021
Tao Tu, Qing Ping, Govind Thattai, Gokhan Tur, Prem Natarajan

Interactive Teaching for Conversational AI

Dec 02, 2020
Qing Ping, Feiyang Niu, Govind Thattai, Joel Chengottusseriyil, Qiaozi Gao, Aishwarya Reganti, Prashanth Rajagopal, Gokhan Tur, Dilek Hakkani-Tur, Prem Natarajan

Adversarial Code Learning for Image Generation

Jan 30, 2020
Jiangbo Yuan, Bing Wu, Wanying Ding, Qing Ping, Zhendong Yu

Convolutional Quantum-Like Language Model with Mutual-Attention for Product Rating Prediction

Dec 25, 2019
Qing Ping, Chaomei Chen

Fashion-AttGAN: Attribute-Aware Fashion Editing with Multi-Objective GAN

Apr 20, 2019
Qing Ping, Bing Wu, Wanying Ding, Jiangbo Yuan
