"Text": models, code, and papers

Towards Accurate Text-based Image Captioning with Content Diversity Exploration

Apr 23, 2021
Guanghui Xu, Shuaicheng Niu, Mingkui Tan, Yucheng Luo, Qing Du, Qi Wu

Overcoming Language Disparity in Online Content Classification with Multimodal Learning

May 19, 2022
Gaurav Verma, Rohit Mujumdar, Zijie J. Wang, Munmun De Choudhury, Srijan Kumar

CycleGT: Unsupervised Graph-to-Text and Text-to-Graph Generation via Cycle Training

Jun 11, 2020
Qipeng Guo, Zhijing Jin, Xipeng Qiu, Weinan Zhang, David Wipf, Zheng Zhang

RTIC: Residual Learning for Text and Image Composition using Graph Convolutional Network

Apr 08, 2021
Minchul Shin, Yoonjae Cho, Byungsoo Ko, Geonmo Gu

Learning video retrieval models with relevance-aware online mining

Mar 16, 2022
Alex Falcon, Giuseppe Serra, Oswald Lanz

A Survey of Natural Language Generation

Dec 22, 2021
Chenhe Dong, Yinghui Li, Haifan Gong, Miaoxin Chen, Junxin Li, Ying Shen, Min Yang

BasqueParl: A Bilingual Corpus of Basque Parliamentary Transcriptions

May 03, 2022
Nayla Escribano, Jon Ander González, Julen Orbegozo-Terradillos, Ainara Larrondo-Ureta, Simón Peña-Fernández, Olatz Perez-de-Viñaspre, Rodrigo Agerri

What Language Model Architecture and Pretraining Objective Work Best for Zero-Shot Generalization?

Apr 12, 2022
Thomas Wang, Adam Roberts, Daniel Hesslow, Teven Le Scao, Hyung Won Chung, Iz Beltagy, Julien Launay, Colin Raffel

More Control for Free! Image Synthesis with Semantic Diffusion Guidance

Dec 14, 2021
Xihui Liu, Dong Huk Park, Samaneh Azadi, Gong Zhang, Arman Chopikyan, Yuxiao Hu, Humphrey Shi, Anna Rohrbach, Trevor Darrell

On the Lack of Robust Interpretability of Neural Text Classifiers

Jun 08, 2021
Muhammad Bilal Zafar, Michele Donini, Dylan Slack, Cédric Archambeau, Sanjiv Das, Krishnaram Kenthapadi
