Yulin Shen

Geometry Attention Transformer with Position-aware LSTMs for Image Captioning

Oct 01, 2021
Chi Wang, Yulin Shen, Luping Ji

In recent years, transformer structures have been widely applied to image captioning with impressive performance. The geometry and position relations among visual objects are often regarded as crucial information for good captioning results. To further advance transformer-based image captioning, this paper proposes an improved Geometry Attention Transformer (GAT) model. To better leverage geometric information, two novel geometry-aware architectures are designed for the encoder and decoder of GAT, respectively. Concretely, the model includes two working modules: 1) a geometry gate-controlled self-attention refiner, which explicitly incorporates relative spatial information into image region representations during encoding, and 2) a group of position-LSTMs, which precisely inform the decoder of relative word positions while generating caption text. Experimental comparisons on the MS COCO and Flickr30K datasets show that GAT is efficient and often outperforms current state-of-the-art image captioning models.
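
The abstract describes the two modules only at a high level, so the sketch below is purely illustrative: a minimal single-head self-attention layer whose attention logits are modulated by a gate computed from pairwise box geometry (relative center offsets and log size ratios). It assumes PyTorch; the class name, the choice of geometry features, and the way the gate enters the softmax are assumptions, not the paper's exact formulation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GeometryGatedSelfAttention(nn.Module):
    """Illustrative geometry gate-controlled self-attention (a sketch, not the paper's exact design)."""
    def __init__(self, dim: int):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        # Maps pairwise geometry features (dx, dy, log dw, log dh) to a scalar gate.
        self.geo_gate = nn.Sequential(nn.Linear(4, dim), nn.ReLU(), nn.Linear(dim, 1))
        self.scale = dim ** -0.5

    def forward(self, x: torch.Tensor, boxes: torch.Tensor) -> torch.Tensor:
        # x: (B, N, dim) region features; boxes: (B, N, 4) as (cx, cy, w, h).
        cx, cy, w, h = boxes.unbind(-1)
        dx = (cx.unsqueeze(2) - cx.unsqueeze(1)) / (w.unsqueeze(1) + 1e-6)
        dy = (cy.unsqueeze(2) - cy.unsqueeze(1)) / (h.unsqueeze(1) + 1e-6)
        dw = torch.log(w.unsqueeze(2) / (w.unsqueeze(1) + 1e-6) + 1e-6)
        dh = torch.log(h.unsqueeze(2) / (h.unsqueeze(1) + 1e-6) + 1e-6)
        geo = torch.stack([dx, dy, dw, dh], dim=-1)            # (B, N, N, 4)
        gate = torch.sigmoid(self.geo_gate(geo)).squeeze(-1)   # (B, N, N), in (0, 1)

        logits = (self.q(x) @ self.k(x).transpose(-2, -1)) * self.scale
        # The geometry gate down-weights attention between weakly related regions.
        attn = F.softmax(logits + torch.log(gate + 1e-6), dim=-1)
        return attn @ self.v(x)

# Example: 36 detected regions per image, 512-dim features.
layer = GeometryGatedSelfAttention(dim=512)
feats = torch.randn(2, 36, 512)
boxes = torch.rand(2, 36, 4) + 0.1      # (cx, cy, w, h), strictly positive sizes
out = layer(feats, boxes)               # (2, 36, 512)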

* To be submitted 

When Retriever-Reader Meets Scenario-Based Multiple-Choice Questions

Sep 05, 2021
Zixian Huang, Ao Wu, Yulin Shen, Gong Cheng, Yuzhong Qu

Scenario-based question answering (SQA) requires retrieving and reading paragraphs from a large corpus to answer a question that is contextualized by a long scenario description. Since a scenario contains keyphrases useful for retrieval alongside much noise, retrieval for SQA is extremely difficult. Moreover, the retriever can hardly be supervised directly, due to the lack of paragraph relevance labels for SQA. To address this challenge, in this paper we propose a joint retriever-reader model called JEEVES, in which the retriever is implicitly supervised using only QA labels via a novel word weighting mechanism. JEEVES significantly outperforms a variety of strong baselines on multiple-choice questions in three SQA datasets.
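
The word weighting mechanism itself is not spelled out in this abstract, so the following is only a rough sketch under stated assumptions: each scenario/question token receives a learned importance weight, a paragraph is scored by the weighted count of tokens it shares with the query, and the weights are trained end-to-end through the downstream QA loss rather than from relevance labels. The class and variable names are hypothetical.

import torch
import torch.nn as nn

class WeightedWordRetriever(nn.Module):
    """Sketch of an implicitly supervised retriever based on learned word weights."""
    def __init__(self, vocab_size: int, dim: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.weigher = nn.Linear(dim, 1)   # per-token importance weight

    def forward(self, query_ids: torch.Tensor, para_ids: torch.Tensor) -> torch.Tensor:
        # query_ids: (Lq,) token ids of scenario + question; para_ids: (P, Lp) candidate paragraphs.
        w = torch.relu(self.weigher(self.embed(query_ids))).squeeze(-1)                 # (Lq,)
        # overlap[p, i] = 1 if query token i occurs in paragraph p.
        overlap = (query_ids.view(1, -1, 1) == para_ids.unsqueeze(1)).any(-1).float()   # (P, Lq)
        return overlap @ w   # (P,) paragraph scores, differentiable w.r.t. the word weights

# The scores can weight the paragraph representations passed to the reader, so the
# multiple-choice QA loss backpropagates into the word weights (implicit supervision).
retriever = WeightedWordRetriever(vocab_size=30000)
scores = retriever(torch.randint(0, 30000, (24,)), torch.randint(0, 30000, (50, 120)))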

* 10 pages, accepted to Findings of EMNLP 2021 

GeoSQA: A Benchmark for Scenario-based Question Answering in the Geography Domain at High School Level

Aug 20, 2019
Zixian Huang, Yulin Shen, Xiao Li, Yuang Wei, Gong Cheng, Lin Zhou, Xinyu Dai, Yuzhong Qu

Scenario-based question answering (SQA) has attracted increasing research attention. It typically requires retrieving and integrating knowledge from multiple sources and applying general knowledge to a specific case described by a scenario. SQA is common in the medical, geography, and legal domains, both in practice and in exams. In this paper, we introduce the GeoSQA dataset. It consists of 1,981 scenarios and 4,110 multiple-choice questions in the geography domain at the high school level, where diagrams (e.g., maps, charts) have been manually annotated with natural language descriptions to benefit NLP research. Benchmark results for a variety of state-of-the-art methods for question answering, textual entailment, and reading comprehension demonstrate the unique challenges that SQA poses for future research.
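
Purely as an illustration of the kind of record the dataset pairs together (a scenario, a natural-language diagram description, and multiple-choice questions), here is a minimal Python sketch; the field names are hypothetical and not the dataset's published schema.

from dataclasses import dataclass
from typing import List

@dataclass
class GeoSQAExample:
    """Hypothetical layout of one GeoSQA item; field names are illustrative only."""
    scenario: str               # long high-school geography scenario description
    diagram_description: str    # manually written natural-language description of the diagram
    question: str               # question stem
    options: List[str]          # multiple-choice answer options
    answer: str                 # gold option label, e.g. "B"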

* 6 pages, to appear at the 2019 Conference on Empirical Methods in Natural Language Processing and 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP 2019) 