Eric Nyberg

Difference-Masking: Choosing What to Mask in Continued Pretraining

May 23, 2023
Alex Wilf, Syeda Nahida Akter, Leena Mathur, Paul Pu Liang, Sheryl Mathew, Mengrou Shou, Eric Nyberg, Louis-Philippe Morency

Self-supervised learning (SSL) and the objective of masking-and-predicting in particular have led to promising SSL performance on a variety of downstream tasks. However, while most approaches randomly mask tokens, there is strong intuition from the field of education that deciding what to mask can substantially improve learning outcomes. We introduce Difference-Masking, an approach that automatically chooses what to mask during continued pretraining by considering what makes an unlabelled target domain different from the pretraining domain. Empirically, we find that Difference-Masking outperforms baselines on continued pretraining settings across four diverse language and multimodal video tasks. The cross-task applicability of Difference-Masking supports the effectiveness of our framework for SSL pretraining in language, vision, and other domains.
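
The abstract does not spell out the scoring mechanism, but the core idea (mask what makes the target domain different) can be illustrated with a simple frequency-ratio heuristic. The sketch below is an assumption-laden stand-in, not the authors' method; `difference_scores` and `choose_masks` are hypothetical helpers.

```python
from collections import Counter

def difference_scores(target_docs, pretrain_docs, smoothing=1.0):
    """Score tokens by how over-represented they are in the target domain.
    Illustrative frequency-ratio heuristic only; not the paper's scoring."""
    tgt = Counter(tok for doc in target_docs for tok in doc.split())
    pre = Counter(tok for doc in pretrain_docs for tok in doc.split())
    tgt_total, pre_total = sum(tgt.values()), sum(pre.values())
    return {
        tok: ((tgt[tok] + smoothing) / tgt_total) / ((pre[tok] + smoothing) / pre_total)
        for tok in tgt
    }

def choose_masks(tokens, scores, mask_ratio=0.15):
    """Pick mask positions with the highest difference scores instead of random ones."""
    k = max(1, int(len(tokens) * mask_ratio))
    ranked = sorted(range(len(tokens)),
                    key=lambda i: scores.get(tokens[i], 0.0), reverse=True)
    return set(ranked[:k])

# Domain-specific tokens ("arrhythmia") outrank generic ones ("the") and get masked first.
scores = difference_scores(["the patient shows arrhythmia"],
                           ["the cat sat on the mat"])
print(choose_masks("the patient shows arrhythmia".split(), scores))
```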

Chain-of-Skills: A Configurable Model for Open-domain Question Answering

May 04, 2023
Kaixin Ma, Hao Cheng, Yu Zhang, Xiaodong Liu, Eric Nyberg, Jianfeng Gao

The retrieval model is an indispensable component of real-world knowledge-intensive tasks, e.g., open-domain question answering (ODQA). As separate retrieval skills are annotated for different datasets, recent work focuses on customized methods, limiting model transferability and scalability. In this work, we propose a modular retriever in which individual modules correspond to key skills that can be reused across datasets. Our approach supports flexible skill configurations based on the target domain to boost performance. To mitigate task interference, we design a novel modularization parameterization inspired by sparse Transformers. We demonstrate that our model can benefit from self-supervised pretraining on Wikipedia and from fine-tuning on multiple ODQA datasets, both in a multi-task fashion. Our approach outperforms recent self-supervised retrievers in zero-shot evaluations and achieves state-of-the-art fine-tuned retrieval performance on NQ, HotpotQA and OTT-QA.

* ACL 2023 
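
As a structural sketch only (the class, skill names, and merging logic below are hypothetical, and the paper's sparse-Transformer parameterization is not reproduced), a configurable retriever can be pictured as a set of reusable skill modules composed per target dataset.

```python
from collections.abc import Callable

SkillFn = Callable[[str], list[str]]  # a skill maps a query to candidate passages

class ModularRetriever:
    """Toy configurable retriever: reusable skills composed per target domain."""

    def __init__(self, skills: dict[str, SkillFn]):
        self.skills = skills

    def retrieve(self, query: str, config: list[str], top_k: int = 5) -> list[str]:
        candidates: list[str] = []
        for name in config:                 # skill configuration chosen for the dataset
            candidates.extend(self.skills[name](query))
        seen, merged = set(), []            # deduplicate, preserving order
        for passage in candidates:
            if passage not in seen:
                seen.add(passage)
                merged.append(passage)
        return merged[:top_k]

# Single-hop QA might use only dense retrieval; multi-hop QA adds entity linking.
retriever = ModularRetriever({
    "dense": lambda q: [f"dense hit for '{q}'"],
    "entity_link": lambda q: [f"linked page for '{q}'"],
})
print(retriever.retrieve("who wrote Hamlet?", config=["dense", "entity_link"]))
```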

Using Implicit Feedback to Improve Question Generation

Apr 26, 2023
Hugo Rodrigues, Eric Nyberg, Luisa Coheur

Question Generation (QG) is a Natural Language Processing (NLP) task that aims at automatically generating questions from text. Many applications can benefit from automatically generated questions, but those questions often need to be curated, either by selecting or editing them. This curation is informative in its own right, but because it is typically done post-generation, the effort is wasted. In addition, most existing systems cannot easily incorporate this feedback. In this work, we present a system, GEN, that learns from such (implicit) feedback. Following a pattern-based approach, it takes as input a small set of sentence/question pairs and creates patterns which are then applied to new, unseen sentences. Each generated question, after being corrected by the user, is used as a new seed in the next iteration, so more patterns are created each time. We also take advantage of the corrections made by the user to score the patterns and thereby rank the generated questions. Results show that GEN improves by learning from both levels of implicit feedback when compared to the version with no learning, considering the top 5, 10, and 20 questions, with improvements upward of 10% depending on the metric and strategy used.

* 27 pages, 8 figures 
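
A hedged sketch of the feedback loop the abstract describes: corrected questions become new seeds, and the amount of editing a pattern's output needed is used to score that pattern for ranking. The class and scoring details are illustrative assumptions, not GEN's actual pattern machinery.

```python
from collections import defaultdict

class FeedbackLoop:
    """Toy version of the implicit-feedback loop described in the abstract."""

    def __init__(self):
        self.pattern_scores = defaultdict(float)  # higher = more trusted pattern
        self.seeds = []                           # sentence/question pairs

    def record(self, sentence, generated, corrected, pattern_id):
        """A user-corrected question becomes a new seed; the pattern is credited
        according to how little editing its output needed (1.0 = kept verbatim)."""
        overlap = len(set(generated.split()) & set(corrected.split()))
        self.pattern_scores[pattern_id] += overlap / max(len(corrected.split()), 1)
        self.seeds.append((sentence, corrected))

    def rank(self, candidates):
        """Rank (question, pattern_id) candidates by their pattern's score."""
        return sorted(candidates, key=lambda c: self.pattern_scores[c[1]], reverse=True)

loop = FeedbackLoop()
loop.record("Lisbon is the capital of Portugal.",
            "What is the capital of Portugal?",
            "What is the capital of Portugal?", pattern_id="P1")
print(loop.rank([("Who is Lisbon?", "P2"),
                 ("What is the capital of Portugal?", "P1")]))
```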

InPars-Light: Cost-Effective Unsupervised Training of Efficient Rankers

Jan 08, 2023
Leonid Boytsov, Preksha Patel, Vivek Sourabh, Riddhi Nisar, Sayani Kundu, Ramya Ramanathan, Eric Nyberg

We carried out a reproducibility study of the InPars recipe for unsupervised training of neural rankers. As a by-product of this study, we developed a simple yet effective modification of InPars, which we call InPars-light. Unlike InPars, InPars-light uses only the freely available language model BLOOM and 7x-100x smaller ranking models. On all five English retrieval collections (used in the original InPars study), we obtained substantial (7-30%) and statistically significant improvements over BM25 in nDCG or MRR using only a 30M-parameter six-layer MiniLM ranker. In contrast, in the InPars study only the 100x larger MonoT5-3B model consistently outperformed BM25, whereas their smaller MonoT5-220M model (still 7x larger than our MiniLM ranker) outperformed BM25 only on MS MARCO and TREC DL 2020. In a purely unsupervised setting, our 435M-parameter DeBERTa v3 ranker was roughly on par with the 7x larger MonoT5-3B: in fact, on three out of five datasets it slightly outperformed MonoT5-3B. Finally, these good results were achieved by re-ranking only 100 candidate documents, compared to 1000 used in InPars. We believe that InPars-light is the first truly cost-effective prompt-based unsupervised recipe to train and deploy neural ranking models that outperform BM25.
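
The deployment recipe in the abstract boils down to re-ranking a small first-stage candidate list with a compact cross-encoder. A minimal sketch with Hugging Face transformers follows; the checkpoint name is a public MS MARCO MiniLM cross-encoder chosen for illustration, not the BLOOM-generated-data model trained in the paper.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Illustrative public checkpoint; InPars-light trains its own MiniLM ranker
# on queries generated by BLOOM.
MODEL = "cross-encoder/ms-marco-MiniLM-L-6-v2"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL).eval()

def rerank(query: str, candidates: list[str], top_k: int = 10) -> list[str]:
    """Re-rank ~100 first-stage (e.g., BM25) candidates with a small cross-encoder."""
    inputs = tokenizer([query] * len(candidates), candidates,
                       padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        scores = model(**inputs).logits.squeeze(-1)
    order = scores.argsort(descending=True)
    return [candidates[int(i)] for i in order[:top_k]]

print(rerank("what is bm25", ["BM25 is a bag-of-words ranking function.",
                              "Cats sleep for most of the day."]))
```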

Knowledge-driven Scene Priors for Semantic Audio-Visual Embodied Navigation

Dec 21, 2022
Gyan Tatiya, Jonathan Francis, Luca Bondi, Ingrid Navarro, Eric Nyberg, Jivko Sinapov, Jean Oh

Generalisation to unseen contexts remains a challenge for embodied navigation agents. In the context of semantic audio-visual navigation (SAVi) tasks, the notion of generalisation should include both generalising to unseen indoor visual scenes as well as generalising to unheard sounding objects. However, previous SAVi task definitions do not include evaluation conditions on truly novel sounding objects, resorting instead to evaluating agents on unheard sound clips of known objects; meanwhile, previous SAVi methods do not include explicit mechanisms for incorporating domain knowledge about object and region semantics. These weaknesses limit the development and assessment of models' abilities to generalise their learned experience. In this work, we introduce the use of knowledge-driven scene priors in the semantic audio-visual embodied navigation task: we combine semantic information from our novel knowledge graph that encodes object-region relations, spatial knowledge from dual Graph Encoder Networks, and background knowledge from a series of pre-training tasks -- all within a reinforcement learning framework for audio-visual navigation. We also define a new audio-visual navigation sub-task, where agents are evaluated on novel sounding objects, as opposed to unheard clips of known objects. We show improvements over strong baselines in generalisation to unseen regions and novel sounding objects, within the Habitat-Matterport3D simulation environment, under the SoundSpaces task.

* 19 pages, 8 figures, 9 tables 
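
The object-region knowledge graph can be pictured as a weighted graph over objects and regions. The sketch below (networkx, with made-up edge weights and no category-level back-off) only illustrates the kind of prior the paper encodes, not its actual graph or encoder.

```python
import networkx as nx

# Toy object-region prior: edge weights are hypothetical co-occurrence strengths.
G = nx.Graph()
G.add_edge("sink", "kitchen", weight=0.9)
G.add_edge("sink", "bathroom", weight=0.7)
G.add_edge("television", "living room", weight=0.8)
G.add_edge("bed", "bedroom", weight=0.95)

def likely_regions(sounding_object: str):
    """Rank regions by prior strength for a (possibly novel) sounding object."""
    if sounding_object not in G:
        return []  # a real system would back off to category-level knowledge
    nbrs = G[sounding_object]
    return sorted(nbrs, key=lambda r: nbrs[r]["weight"], reverse=True)

print(likely_regions("sink"))  # ['kitchen', 'bathroom']
```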

Distribution-aware Goal Prediction and Conformant Model-based Planning for Safe Autonomous Driving

Dec 16, 2022
Jonathan Francis, Bingqing Chen, Weiran Yao, Eric Nyberg, Jean Oh

The feasibility of collecting a large number of expert demonstrations has inspired growing research interest in learning-to-drive settings, where models learn by imitating the driving behaviour of experts. However, exclusively relying on imitation can limit agents' generalisability to novel scenarios that are outside the support of the training data. In this paper, we address this challenge by factorising the driving task, based on the intuition that modular architectures are more generalisable and more robust to changes in the environment compared to monolithic, end-to-end frameworks. Specifically, we draw inspiration from the trajectory forecasting community and reformulate the learning-to-drive task as obstacle-aware perception and grounding, distribution-aware goal prediction, and model-based planning. Firstly, we train the obstacle-aware perception module to extract a salient representation of the visual context. Then, we learn a multi-modal goal distribution by performing conditional density estimation using a normalising flow. Finally, we ground candidate trajectory predictions in the road geometry and plan actions based on vehicle dynamics. Under the CARLA simulator, we report state-of-the-art results on the CARNOVEL benchmark.

* Accepted: 1st Workshop on Safe Learning for Autonomous Driving, at the International Conference on Machine Learning (ICML 2022); Best Paper Award 
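
For the goal-prediction step, "conditional density estimation with a normalising flow" can be sketched in its simplest form as a context-conditioned affine transform of a Gaussian base distribution (a one-layer flow). Everything below, including the tiny conditioning network, is an illustrative assumption rather than the paper's flow architecture.

```python
import math
import torch
import torch.nn as nn

class ConditionalAffineFlow(nn.Module):
    """One-layer conditional flow: a context-dependent affine map of a Gaussian.
    A sketch of conditional density estimation, not the paper's architecture."""

    def __init__(self, ctx_dim: int, goal_dim: int = 2, hidden: int = 64):
        super().__init__()
        self.goal_dim = goal_dim
        self.net = nn.Sequential(
            nn.Linear(ctx_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * goal_dim),   # predicts shift and log-scale
        )

    def sample(self, context: torch.Tensor, n: int) -> torch.Tensor:
        shift, log_scale = self.net(context).chunk(2, dim=-1)
        z = torch.randn(n, self.goal_dim)
        return shift + log_scale.exp() * z      # one affine layer stays unimodal per context

    def log_prob(self, goal: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        shift, log_scale = self.net(context).chunk(2, dim=-1)
        z = (goal - shift) / log_scale.exp()
        base = -0.5 * (z.pow(2) + math.log(2 * math.pi)).sum(-1)
        return base - log_scale.sum(-1)         # change-of-variables correction

flow = ConditionalAffineFlow(ctx_dim=8)
ctx = torch.randn(1, 8)                         # e.g., an encoded visual context
goals = flow.sample(ctx, n=5)                   # candidate goal states
print(goals.shape, flow.log_prob(goals, ctx).shape)
```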

Coalescing Global and Local Information for Procedural Text Understanding

Aug 26, 2022
Kaixin Ma, Filip Ilievski, Jonathan Francis, Eric Nyberg, Alessandro Oltramari

Procedural text understanding is a challenging language reasoning task that requires models to track entity states across the development of a narrative. A complete procedural understanding solution should combine three core aspects: local and global views of the inputs, and a global view of the outputs. Prior methods considered only a subset of these aspects, resulting in either low precision or low recall. In this paper, we propose Coalescing Global and Local Information (CGLI), a new model that builds entity- and timestep-aware input representations (local input) considering the whole context (global input), and jointly models the entity states with a structured prediction objective (global output). Thus, CGLI simultaneously optimizes for both precision and recall. We extend CGLI with additional output layers and integrate it into a story reasoning framework. Extensive experiments on a popular procedural text understanding dataset show that our model achieves state-of-the-art results; experiments on a story reasoning benchmark show the positive impact of our model on downstream reasoning.

* COLING 2022 
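
One way to picture "entity- and timestep-aware input representations over the whole context" is to mark the tracked entity and the current step while keeping every step visible to the encoder. The marker scheme below is a hypothetical illustration, not CGLI's exact input format; the structured global decoding is only hinted at in a comment.

```python
def build_input(entity: str, step_idx: int, steps: list[str]) -> str:
    """Entity- and timestep-aware view of the full procedure (local + global input).
    Marker tokens are illustrative, not the paper's exact scheme."""
    marked = [f"[CUR] {s} [/CUR]" if i == step_idx else s
              for i, s in enumerate(steps)]
    return f"[ENT] {entity} [/ENT] " + " ".join(marked)

steps = ["Pour water into the pot.", "Boil the water.", "Add the pasta."]
# One such input per (entity, timestep) pair; a structured (CRF-style) layer over
# all timesteps would then decode a globally consistent state sequence.
print(build_input("water", 1, steps))
```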

Understanding Performance of Long-Document Ranking Models through Comprehensive Evaluation and Leaderboarding

Jul 04, 2022
Leonid Boytsov, Tianyi Lin, Fangwei Gao, Yutian Zhao, Jeffrey Huang, Eric Nyberg

We carry out a comprehensive evaluation of 13 recent models for ranking long documents, using two popular collections (MS MARCO documents and Robust04). Our model zoo includes two specialized Transformer models (such as Longformer) that can process long documents without the need to split them. Along the way, we document several difficulties in training and comparing such models. Somewhat surprisingly, we find the simple FirstP baseline (truncating documents to satisfy the input-sequence constraint of a typical Transformer model) to be quite effective. We analyze the distribution of relevant passages (inside documents) to explain this phenomenon. We further argue that, despite their widespread use, Robust04 and MS MARCO documents are not particularly useful for benchmarking long-document models.
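
The FirstP baseline highlighted in the abstract is easy to reproduce in spirit: keep only the document prefix that, together with the query, fits a standard Transformer's input budget. A minimal sketch with a generic tokenizer (the checkpoint is just an example, not the models compared in the paper):

```python
from transformers import AutoTokenizer

# Any standard cross-encoder tokenizer works; this checkpoint is only an example.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def firstp_input(query: str, document: str, max_length: int = 512):
    """FirstP: truncate the document so query + document prefix fit the
    input-sequence budget of a typical Transformer ranker."""
    return tokenizer(query, document, truncation="only_second",
                     max_length=max_length, return_tensors="pt")

enc = firstp_input("what is dense retrieval?", "A very long document ... " * 500)
print(enc["input_ids"].shape)  # capped at (1, 512)
```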

Table Retrieval May Not Necessitate Table-specific Model Design

May 19, 2022
Zhiruo Wang, Zhengbao Jiang, Eric Nyberg, Graham Neubig

Tables are an important form of structured data for both human and machine readers alike, providing answers to questions that cannot, or cannot easily, be found in texts. Recent work has designed special models and training paradigms for table-related tasks such as table-based question answering and table retrieval. Though effective, they add complexity in both modeling and data acquisition compared to generic text solutions and obscure which elements are truly beneficial. In this work, we focus on the task of table retrieval, and ask: "is table-specific model design necessary for table retrieval, or can a simpler text-based model be effectively used to achieve a similar result?" First, we perform an analysis on a table-based portion of the Natural Questions dataset (NQ-table), and find that structure plays a negligible role in more than 70% of the cases. Based on this, we experiment with a general Dense Passage Retriever (DPR) based on text and a specialized Dense Table Retriever (DTR) that uses table-specific model designs. We find that DPR performs well without any table-specific design and training, and even achieves superior results compared to DTR when fine-tuned on properly linearized tables. We then experiment with three modules to explicitly encode table structures, namely auxiliary row/column embeddings, hard attention masks, and soft relation-based attention biases. However, none of these yielded significant improvements, suggesting that table-specific model design may not be necessary for table retrieval.

* 11 pages total, 4 figures 
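
The finding that DPR works well "when fine-tuned on properly linearized tables" hinges on turning a table into plain text. A minimal sketch of one such linearization (the separator scheme is an illustrative choice, not necessarily the one used in the paper):

```python
def linearize_table(title: str, header: list[str], rows: list[list[str]]) -> str:
    """Flatten a table into text so a text retriever (e.g., DPR) can index it.
    Separators here are an illustrative choice, not the paper's scheme."""
    lines = [title, " | ".join(header)]
    lines += [" | ".join(str(cell) for cell in row) for row in rows]
    return " . ".join(lines)

table = linearize_table(
    "Olympic host cities",
    ["Year", "City", "Country"],
    [["2012", "London", "UK"], ["2016", "Rio de Janeiro", "Brazil"]],
)
print(table)
```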

Learn-to-Race Challenge 2022: Benchmarking Safe Learning and Cross-domain Generalisation in Autonomous Racing

May 10, 2022
Jonathan Francis, Bingqing Chen, Siddha Ganju, Sidharth Kathpal, Jyotish Poonganam, Ayush Shivani, Vrushank Vyas, Sahika Genc, Ivan Zhukov, Max Kumskoy, Anirudh Koul, Jean Oh, Eric Nyberg

We present the results of our autonomous racing virtual challenge, based on the newly-released Learn-to-Race (L2R) simulation framework, which seeks to encourage interdisciplinary research in autonomous driving and to help advance the state of the art on a realistic benchmark. Analogous to racing being used to test cutting-edge vehicles, we envision autonomous racing serving as a particularly challenging proving ground for autonomous agents as: (i) they need to make sub-second, safety-critical decisions in a complex, fast-changing environment; and (ii) both perception and control must be robust to distribution shifts, novel road features, and unseen obstacles. Thus, the main goal of the challenge is to evaluate the joint safety, performance, and generalisation capabilities of reinforcement learning agents on multi-modal perception, through a two-stage process. In the first stage of the challenge, we evaluate an autonomous agent's ability to drive as fast as possible while adhering to safety constraints. In the second stage, we additionally require the agent to adapt to an unseen racetrack through safe exploration. In this paper, we describe the new L2R Task 2.0 benchmark, with refined metrics and baseline approaches. We also provide an overview of deployment, evaluation, and rankings for the inaugural instance of the L2R Autonomous Racing Virtual Challenge (supported by Carnegie Mellon University, Arrival Ltd., AICrowd, Amazon Web Services, and Honda Research), which officially used the new L2R Task 2.0 benchmark and drew over 20,100 views, 437 active participants, 46 teams, and 733 model submissions -- from 88+ unique institutions, in 58+ different countries. Finally, we release leaderboard results from the challenge and provide a description of the two top-ranking approaches in cross-domain model transfer, across multiple sensor configurations and simulated races.

* 20 pages, 4 figures, 2 tables 