Vijay Viswanathan

Prompt2Model: Generating Deployable Models from Natural Language Instructions

Aug 23, 2023
Vijay Viswanathan, Chenyang Zhao, Amanda Bertsch, Tongshuang Wu, Graham Neubig

Large language models (LLMs) enable system builders today to create competent NLP systems through prompting, where they only need to describe the task in natural language and provide a few examples. However, in other ways, LLMs are a step backward from traditional special-purpose NLP models: they require extensive computational resources for deployment and can be gated behind APIs. In this paper, we propose Prompt2Model, a general-purpose method that takes a natural language task description like the prompts provided to LLMs, and uses it to train a special-purpose model that is conducive to deployment. This is done through a multi-step process of retrieval of existing datasets and pretrained models, dataset generation using LLMs, and supervised fine-tuning on these retrieved and generated datasets. Over three tasks, we demonstrate that, given the same few-shot prompt as input, Prompt2Model trains models that outperform a strong LLM, gpt-3.5-turbo, by an average of 20% while being up to 700 times smaller. We also show that this data can be used to obtain reliable estimates of model performance, enabling model developers to assess model reliability before deployment. Prompt2Model is available open-source at https://github.com/neulab/prompt2model.
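
As a rough illustration of this pipeline, the sketch below strings the three stages together; every helper is a hypothetical stub, not the actual prompt2model API:

```python
# Rough sketch of the three-stage pipeline described above. Every helper here
# is a hypothetical placeholder with a stub body; the actual API lives at
# https://github.com/neulab/prompt2model and differs in its details.
from dataclasses import dataclass

@dataclass
class Example:
    text: str
    label: str

def retrieve_dataset(prompt: str) -> list[Example]:
    """Placeholder: search an existing dataset hub for data matching the task."""
    return []

def retrieve_pretrained_model(prompt: str) -> str:
    """Placeholder: pick a small pretrained model suited to the task."""
    return "t5-small"

def generate_examples(prompt: str, demos: list[Example], n: int) -> list[Example]:
    """Placeholder: ask an LLM to synthesize n new labeled examples."""
    return []

def finetune(model_name: str, data: list[Example]) -> str:
    """Placeholder: supervised fine-tuning of the small model on `data`."""
    return model_name

def prompt_to_model(prompt: str, demos: list[Example]) -> str:
    retrieved = retrieve_dataset(prompt)                   # stage 1: retrieve data and model
    base = retrieve_pretrained_model(prompt)
    generated = generate_examples(prompt, demos, n=5000)   # stage 2: LLM data generation
    return finetune(base, retrieved + generated)           # stage 3: fine-tune for deployment
```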

* 8 pages 

Large Language Models Enable Few-Shot Clustering

Jul 02, 2023
Vijay Viswanathan, Kiril Gashteovski, Carolin Lawrence, Tongshuang Wu, Graham Neubig

Unlike traditional unsupervised clustering, semi-supervised clustering allows users to provide meaningful structure to the data, which helps the clustering algorithm match the user's intent. Existing approaches to semi-supervised clustering require a significant amount of feedback from an expert to improve the clusters. In this paper, we ask whether a large language model can amplify an expert's guidance to enable query-efficient, few-shot semi-supervised text clustering. We show that LLMs are surprisingly effective at improving clustering. We explore three stages where LLMs can be incorporated into clustering: before clustering (improving input features), during clustering (providing constraints to the clusterer), and after clustering (using LLMs for post-correction). We find that incorporating LLMs in the first two stages routinely provides significant improvements in cluster quality, and that LLMs enable a user to make trade-offs between cost and accuracy to produce desired clusters. We release our code and LLM prompts for the public to use.
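
To make the first stage concrete, the sketch below enriches documents with (stubbed) LLM-generated keyphrases before vectorizing and clustering; the paper's actual prompts and encoders differ:

```python
# Sketch of the "before clustering" stage: enrich each document's features with
# an LLM before a standard clusterer runs. llm_keyphrases is a stand-in stub
# for a real LLM call.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

def llm_keyphrases(text: str) -> str:
    """Stub for an LLM call like 'List keyphrases capturing this text's intent.'
    Here it just echoes the input unchanged."""
    return text

def cluster_with_llm_features(docs: list[str], k: int):
    # Append LLM-generated keyphrases so the vectorizer sees intent-level
    # features, not just surface tokens.
    enriched = [doc + " " + llm_keyphrases(doc) for doc in docs]
    vectors = TfidfVectorizer().fit_transform(enriched)
    return KMeans(n_clusters=k, n_init=10).fit_predict(vectors)
```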

DataFinder: Scientific Dataset Recommendation from Natural Language Descriptions

Jun 07, 2023
Vijay Viswanathan, Luyu Gao, Tongshuang Wu, Pengfei Liu, Graham Neubig

Modern machine learning relies on datasets to develop and validate research ideas. Given the growth of publicly available data, finding the right dataset to use is increasingly difficult. Any research question imposes explicit and implicit constraints (such as dataset size, modality, and domain) on how well a given dataset will enable researchers to answer it. We operationalize the task of recommending datasets given a short natural language description of a research idea, to help people find relevant datasets for their needs. Dataset recommendation poses unique challenges as an information retrieval problem: datasets are hard to directly index for search, and there are no corpora readily available for this task. To facilitate this task, we build the DataFinder Dataset, which consists of a larger automatically-constructed training set (17.5K queries) and a smaller expert-annotated evaluation set (392 queries). Using this data, we compare various information retrieval algorithms on our test set and present a superior bi-encoder retriever for text-based dataset recommendation. This system, trained on the DataFinder Dataset, finds more relevant search results than existing third-party dataset search engines. To encourage progress on dataset recommendation, we release our dataset and models to the public.
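
For intuition, a bi-encoder retriever of this kind can be sketched with an off-the-shelf sentence encoder; the checkpoint below is illustrative, not the released DataFinder model:

```python
# Hedged sketch of a bi-encoder retriever: embed the research-idea query and
# every dataset description with one encoder, then rank by cosine similarity.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative checkpoint

def recommend_datasets(query: str, descriptions: list[str], top_k: int = 5):
    q = encoder.encode(query, convert_to_tensor=True)
    d = encoder.encode(descriptions, convert_to_tensor=True)
    scores = util.cos_sim(q, d)[0]  # similarity of the query to each dataset
    ranked = scores.argsort(descending=True)[:top_k].tolist()
    return [(descriptions[i], float(scores[i])) for i in ranked]
```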

* To appear at ACL 2023. Code published at https://github.com/viswavi/datafinder 

A Dataset for N-ary Relation Extraction of Drug Combinations

May 04, 2022
Aryeh Tiktinsky, Vijay Viswanathan, Danna Niezni, Dana Meron Azagury, Yosi Shamay, Hillel Taub-Tabib, Tom Hope, Yoav Goldberg

Combination therapies have become the standard of care for diseases such as cancer, tuberculosis, malaria, and HIV. However, the combinatorial set of available multi-drug treatments creates a challenge in identifying the effective combination therapies available in a given situation. To assist medical professionals in identifying beneficial drug combinations, we construct an expert-annotated dataset for extracting information about the efficacy of drug combinations from the scientific literature. Beyond its practical utility, the dataset also presents a unique NLP challenge, as the first relation extraction dataset consisting of variable-length relations. Furthermore, the relations in this dataset predominantly require language understanding beyond the sentence level, adding to the challenge of this task. We provide a promising baseline model and identify clear areas for further improvement. We release our dataset, code, and baseline models publicly to encourage the NLP community to participate in this task.
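
To make "variable-length relations" concrete, a single record might tie any number of drug spans to one efficacy label, along the lines of this illustrative (unofficial) schema:

```python
# Illustrative record for one variable-length relation: the relation's arity is
# however many drug spans it ties together, plus a single efficacy label.
# Field names here are ours, not necessarily the released dataset's schema.
example = {
    "sentence": "Combining drug A, drug B, and drug C improved survival ...",
    "relations": [
        {
            "drug_spans": [(10, 11), (13, 14), (17, 18)],  # token offsets of 3 drugs
            "label": "POS",  # evidence that this combination is effective
        }
    ],
}
```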

* To appear in NAACL 2022 

DataLab: A Platform for Data Analysis and Intervention

Feb 25, 2022
Yang Xiao, Jinlan Fu, Weizhe Yuan, Vijay Viswanathan, Zhoumianze Liu, Yixin Liu, Graham Neubig, Pengfei Liu

Despite data's crucial role in machine learning, most existing tools and research tend to focus on systems built on top of existing data rather than on how to interpret and manipulate data. In this paper, we propose DataLab, a unified data-oriented platform that not only allows users to interactively analyze the characteristics of data, but also provides a standardized interface for different data processing operations. Additionally, in view of the ongoing proliferation of datasets, DataLab has features for dataset recommendation and global vision analysis that help researchers form a better view of the data ecosystem. So far, DataLab covers 1,715 datasets and 3,583 transformed versions of them (e.g., with hyponym replacement), where 728 datasets support various analyses (e.g., with respect to gender bias) with the help of 140M samples annotated by 318 feature functions. DataLab is under active development and will be supported going forward. We have released a web platform, web API, Python SDK, PyPI package, and online documentation, which we hope can meet the diverse needs of researchers.
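
Purely as a hypothetical sketch of such a standardized interface (placeholder names throughout; the real SDK is covered in the online documentation):

```python
# Hypothetical sketch of the kind of standardized interface described above;
# every name below is a placeholder, not DataLab's real Python SDK
# (see the online documentation linked from http://datalab.nlpedia.ai/).
def load_dataset(name: str) -> list[dict]:
    """Placeholder: fetch one of the covered datasets."""
    return [{"text": "example", "label": 0}]

def analyze(data: list[dict], aspect: str) -> dict:
    """Placeholder: run a built-in analysis (e.g., gender bias) over the data."""
    return {"aspect": aspect, "num_samples": len(data)}

def transform(data: list[dict], op: str) -> list[dict]:
    """Placeholder: apply a registered transformation such as hyponym replacement."""
    return data

report = analyze(load_dataset("sst2"), aspect="gender_bias")
```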

* DataLab Web Platform: http://datalab.nlpedia.ai/ 

Rethinking End-to-End Evaluation of Decomposable Tasks: A Case Study on Spoken Language Understanding

Jun 29, 2021
Siddhant Arora, Alissa Ostapenko, Vijay Viswanathan, Siddharth Dalmia, Florian Metze, Shinji Watanabe, Alan W Black

Decomposable tasks are complex and comprise a hierarchy of sub-tasks. Spoken intent prediction, for example, combines automatic speech recognition and natural language understanding. Existing benchmarks, however, typically hold out examples for only the surface-level sub-task. As a result, models with similar performance on these benchmarks may have unobserved performance differences on the other sub-tasks. To allow insightful comparisons between competitive end-to-end architectures, we propose a framework to construct robust test sets using coordinate ascent over sub-task-specific utility functions. Given a dataset for a decomposable task, our method optimally creates a test set for each sub-task to individually assess sub-components of the end-to-end model. Using spoken language understanding as a case study, we generate new splits for the Fluent Speech Commands and Snips SmartLights datasets. Each split has two test sets: one with held-out utterances to assess natural language understanding abilities, and one with held-out speakers to test speech processing skills. Our splits identify performance gaps of up to 10% between end-to-end systems that were within 1% of each other on the original test sets. These performance gaps allow more realistic and actionable comparisons between different architectures, driving future model development. We release our splits and tools for the community.
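
A minimal sketch of this idea, assuming a user-supplied utility function and simplifying the coordinate ascent to single-swap moves:

```python
# Minimal sketch of utility-driven split construction, simplifying the paper's
# coordinate ascent to single-example swap moves. The utility function is
# user-supplied, e.g. one that rewards test sets containing unseen speakers.
import random

def coordinate_ascent_split(items, utility, test_size, iters=1000, seed=0):
    rng = random.Random(seed)
    test = set(rng.sample(range(len(items)), test_size))
    best = utility([items[i] for i in test])
    for _ in range(iters):
        out = rng.choice(sorted(test))    # candidate to leave the test set
        into = rng.randrange(len(items))  # candidate to enter the test set
        if into in test:
            continue
        candidate = (test - {out}) | {into}
        score = utility([items[i] for i in candidate])
        if score > best:                  # keep the swap only if utility improves
            test, best = candidate, score
    train = [items[i] for i in range(len(items)) if i not in test]
    return train, [items[i] for i in test]
```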

* INTERSPEECH 2021 

CitationIE: Leveraging the Citation Graph for Scientific Information Extraction

Jun 03, 2021
Vijay Viswanathan, Graham Neubig, Pengfei Liu

Automatically extracting key information from scientific documents has the potential to help scientists work more efficiently and accelerate the pace of scientific progress. Prior work has considered extracting document-level entity clusters and relations end-to-end from raw scientific text, which can improve literature search and help identify methods and materials for a given problem. Despite the importance of this task, most existing works on scientific information extraction (SciIE) consider extraction solely based on the content of an individual paper, without considering the paper's place in the broader literature. In contrast to prior work, we augment our text representations by leveraging a complementary source of document context: the citation graph of referential links between citing and cited papers. On a test set of English-language scientific documents, we show that simple ways of utilizing the structure and content of the citation graph can each lead to significant gains in different scientific information extraction tasks. When these tasks are combined, we observe a sizable improvement in end-to-end information extraction over the state-of-the-art, suggesting the potential for future work along this direction. We release software tools to facilitate citation-aware SciIE development.
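
One simple realization of such citation-graph augmentation, sketched under our own assumptions rather than the paper's exact architecture, concatenates a paper's text vector with the mean embedding of its graph neighbors:

```python
# Fuse citation-graph context with text features: average the embeddings of a
# paper's cited/citing neighbors and concatenate them with its text
# representation before the extraction head. A simplification, not CitationIE's
# exact fusion mechanism.
import numpy as np

def citation_aware_representation(text_vec, paper_id, citation_graph, node_emb):
    # citation_graph: dict mapping paper id -> list of neighbor paper ids
    # node_emb: dict mapping paper id -> fixed-size graph embedding (np.ndarray)
    neighbors = citation_graph.get(paper_id, [])
    if neighbors:
        ctx = np.mean([node_emb[n] for n in neighbors], axis=0)
    else:
        ctx = np.zeros_like(next(iter(node_emb.values())))  # assumes node_emb nonempty
    return np.concatenate([text_vec, ctx])  # fused features for the IE model
```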

* ACL-IJCNLP 2021 camera-ready (long paper in main conference) 