Adithya Sagar

Data-Efficiency with a Single GPU: An Exploration of Transfer Methods for Small Language Models

Oct 08, 2022
Alon Albalak, Akshat Shrivastava, Chinnadhurai Sankar, Adithya Sagar, Mike Ross

Multi-task learning (MTL), instruction tuning, and prompting have recently been shown to improve the generalizability of large language models to new tasks. However, the benefits of such methods are less well-documented in smaller language models, with some studies finding contradictory results. In this work, we explore and isolate the effects of (i) model size, (ii) general-purpose MTL, (iii) in-domain MTL, (iv) instruction tuning, and (v) few-shot fine-tuning for models with fewer than 500 million parameters. Our experiments in the zero-shot setting demonstrate that models gain a 31% relative improvement, on average, from general-purpose MTL, with an additional 37.6% relative gain from in-domain MTL. In contrast to prior work on large models, we find that instruction tuning provides only a modest 2% performance improvement for small models.
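
A minimal sketch of the few-shot fine-tuning and instruction-tuning setups studied here, assuming a small (under 500M-parameter) sequence-to-sequence model from Hugging Face Transformers. The model name, instruction string, toy examples, and hyperparameters are illustrative assumptions, not the paper's configuration.

```python
# Hedged sketch: few-shot fine-tuning of a small seq2seq model with an
# instruction prefix. All names and data below are illustrative assumptions.
import torch
from torch.utils.data import DataLoader
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_NAME = "t5-small"  # ~60M parameters; any model under 500M fits the setting
INSTRUCTION = "Classify the sentiment of the sentence as positive or negative: "

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

# A handful of labeled examples stands in for the few-shot training set.
few_shot = [
    ("the film was a delight", "positive"),
    ("service was slow and rude", "negative"),
]

def collate(batch):
    # Instruction-tuning variant: prepend a natural-language task description.
    inputs = [INSTRUCTION + text for text, _ in batch]
    targets = [label for _, label in batch]
    enc = tokenizer(inputs, padding=True, return_tensors="pt")
    labels = tokenizer(targets, padding=True, return_tensors="pt").input_ids
    labels[labels == tokenizer.pad_token_id] = -100  # ignore padding in the loss
    enc["labels"] = labels
    return enc

loader = DataLoader(few_shot, batch_size=2, shuffle=True, collate_fn=collate)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)

model.train()
for epoch in range(5):  # a few passes over the few-shot examples
    for batch in loader:
        loss = model(**batch).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```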

RETRONLU: Retrieval Augmented Task-Oriented Semantic Parsing

Sep 21, 2021
Vivek Gupta, Akshat Shrivastava, Adithya Sagar, Armen Aghajanyan, Denis Savenkov

While large pre-trained language models accumulate a great deal of knowledge in their parameters, augmenting them with a non-parametric retrieval-based memory has been shown to bring a number of benefits, from accuracy improvements to data efficiency, for knowledge-focused tasks such as question answering. In this paper, we apply retrieval-based modeling ideas to the problem of multi-domain task-oriented semantic parsing for conversational assistants. Our approach, RetroNLU, extends a sequence-to-sequence model architecture with a retrieval component that fetches existing similar examples and provides them as an additional input to the model. In particular, we analyze two settings in which we augment the input with (a) retrieved nearest-neighbor utterances (utterance-nn), and (b) ground-truth semantic parses of nearest-neighbor utterances (semparse-nn). Our technique outperforms the baseline method by 1.5% absolute macro-F1, especially in the low-resource setting, matching the baseline model's accuracy with only 40% of the data. Furthermore, we analyze the quality of the nearest-neighbor retrieval component and the model's sensitivity to it, and break down performance for semantic parses of utterances of varying complexity.

* 12 pages, 9 figures, 5 tables 
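
As a rough illustration of the augmentation scheme described in the abstract, the sketch below retrieves the nearest-neighbor training utterance for a query and appends either the utterance itself (utterance-nn) or its ground-truth semantic parse (semparse-nn) to the parser input. The TF-IDF retriever, toy parses, and [SEP] separator are assumptions for illustration; the paper's retriever, data, and formatting may differ.

```python
# Hedged sketch of RetroNLU-style input augmentation with an assumed retriever.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestNeighbors

# Toy "training set" of (utterance, gold semantic parse) pairs.
train = [
    ("set an alarm for 7 am",        "[IN:CREATE_ALARM [SL:TIME 7 am ] ]"),
    ("call mom on her cell",         "[IN:CREATE_CALL [SL:CONTACT mom ] ]"),
    ("what is the weather tomorrow", "[IN:GET_WEATHER [SL:DATE tomorrow ] ]"),
]
utterances = [u for u, _ in train]

vectorizer = TfidfVectorizer().fit(utterances)
index = NearestNeighbors(n_neighbors=1).fit(vectorizer.transform(utterances))

def augment(query: str, mode: str = "semparse-nn") -> str:
    """Return the seq2seq input string with retrieved context appended."""
    _, idx = index.kneighbors(vectorizer.transform([query]))
    nn_utt, nn_parse = train[idx[0][0]]
    context = nn_parse if mode == "semparse-nn" else nn_utt
    return f"{query} [SEP] {context}"  # augmented input fed to the parser

print(augment("wake me up at 6 am"))
# -> "wake me up at 6 am [SEP] [IN:CREATE_ALARM [SL:TIME 7 am ] ]"
```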

Lattice-based Improvements for Voice Triggering Using Graph Neural Networks

Jan 25, 2020
Pranay Dighe, Saurabh Adya, Nuoyu Li, Srikanth Vishnubhotla, Devang Naik, Adithya Sagar, Ying Ma, Stephen Pulman, Jason Williams

Voice-triggered smart assistants often rely on detecting a trigger phrase before they start listening for the user request. Mitigating false triggers is an important aspect of building a privacy-centric, non-intrusive smart assistant. In this paper, we address the task of false trigger mitigation (FTM) with a novel approach that analyzes automatic speech recognition (ASR) lattices using graph neural networks (GNNs). The approach exploits the fact that the decoding lattice of falsely triggered audio exhibits more uncertainty, in the form of many alternative paths and unexpected words on the lattice arcs, than the lattice of correctly triggered audio. A pure trigger-phrase detector does not fully utilize the intent of the user's speech, whereas the complete decoding lattice of the user audio lets us effectively reject speech not intended for the smart assistant. We deploy two GNN variants, one based on graph convolution layers and one based on a self-attention mechanism. Our experiments demonstrate that GNNs are highly accurate at the FTM task, mitigating ~87% of false triggers at a 99% true positive rate (TPR). Furthermore, the proposed models are fast to train and parameter-efficient.
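
A minimal sketch of the lattice-as-graph idea, assuming lattice arcs are mapped to graph nodes with simple feature vectors and the lattice topology to an adjacency matrix; two plain graph-convolution layers and mean pooling feed a binary false-trigger classifier. Dimensions, features, and the normalization are illustrative assumptions, not the paper's architecture.

```python
# Hedged sketch: graph convolutions over an ASR lattice for false-trigger
# mitigation. Feature choices and dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class LatticeGCN(nn.Module):
    def __init__(self, in_dim: int, hidden_dim: int = 64):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hidden_dim)
        self.w2 = nn.Linear(hidden_dim, hidden_dim)
        self.classifier = nn.Linear(hidden_dim, 1)  # P(false trigger)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x:   (num_nodes, in_dim)    node features, e.g. word embedding + arc scores
        # adj: (num_nodes, num_nodes) normalized adjacency with self-loops
        h = torch.relu(self.w1(adj @ x))   # graph convolution layer 1
        h = torch.relu(self.w2(adj @ h))   # graph convolution layer 2
        g = h.mean(dim=0)                  # mean-pool nodes into a lattice embedding
        return torch.sigmoid(self.classifier(g))

# Toy lattice: 4 arcs with 16-dim features and a hand-built adjacency matrix.
x = torch.randn(4, 16)
adj = torch.eye(4)
adj[0, 1] = adj[1, 2] = adj[1, 3] = 1.0   # arc-to-arc connectivity
adj = adj / adj.sum(dim=1, keepdim=True)  # simple row normalization

model = LatticeGCN(in_dim=16)
print(model(x, adj))                      # probability the trigger is false
```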

Active Learning for Domain Classification in a Commercial Spoken Personal Assistant

Aug 29, 2019
Xi C. Chen, Adithya Sagar, Justine T. Kao, Tony Y. Li, Christopher Klein, Stephen Pulman, Ashish Garg, Jason D. Williams

We describe a method for selecting relevant new training data for the LSTM-based domain selection component of our personal assistant system. Adding more annotated training data to any ML system typically improves accuracy, but only if it provides examples not already adequately covered in the existing data. However, obtaining, selecting, and labeling relevant data is expensive. This work presents a simple technique that automatically identifies new, helpful examples suitable for human annotation. Our experimental results show that, compared with random selection and entropy-based methods, the proposed method yields larger accuracy improvements for a fixed annotation budget. Although developed and tested in the setting of a commercial intelligent assistant, the technique is more widely applicable.
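
The abstract does not spell out the selection criterion itself, so the sketch below only shows a generic pool-based selection scaffold together with the entropy baseline the paper compares against; treat the scoring function as an assumption rather than the authors' method.

```python
# Hedged sketch: pool-based selection of unlabeled utterances for annotation,
# using predictive entropy as an assumed (baseline) scoring function.
import numpy as np

def entropy_scores(probs: np.ndarray) -> np.ndarray:
    """Predictive entropy per example; probs has shape (n_examples, n_domains)."""
    return -(probs * np.log(probs + 1e-12)).sum(axis=1)

def select_for_annotation(probs: np.ndarray, budget: int) -> np.ndarray:
    """Return indices of the `budget` unlabeled examples the model is least sure about."""
    return np.argsort(-entropy_scores(probs))[:budget]

# Toy domain-classifier posteriors over 4 domains for 5 unlabeled utterances.
probs = np.array([
    [0.97, 0.01, 0.01, 0.01],  # confident -> low value for annotation
    [0.30, 0.30, 0.20, 0.20],  # uncertain -> high value
    [0.50, 0.45, 0.03, 0.02],
    [0.90, 0.05, 0.03, 0.02],
    [0.25, 0.25, 0.25, 0.25],  # maximally uncertain
])
print(select_for_annotation(probs, budget=2))  # -> [4 1]
```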
