
Rajiv Jain


User-Entity Differential Privacy in Learning Natural Language Models

Nov 09, 2022
Phung Lai, NhatHai Phan, Tong Sun, Rajiv Jain, Franck Dernoncourt, Jiuxiang Gu, Nikolaos Barmpalios


In this paper, we introduce a novel concept of user-entity differential privacy (UeDP) to provide formal privacy protection simultaneously to both sensitive entities in textual data and data owners in learning natural language models (NLMs). To preserve UeDP, we develop a novel algorithm, called UeDP-Alg, optimizing the trade-off between privacy loss and model utility with a tight sensitivity bound derived from seamlessly combining user and sensitive entity sampling processes. Extensive theoretical analysis and evaluation show that our UeDP-Alg outperforms baseline approaches in model utility under the same privacy budget consumption on several NLM tasks, using benchmark datasets.
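The paper's UeDP-Alg has its own sensitivity analysis; as a rough intuition only, a minimal DP-SGD-style sketch of the core mechanics (sample participants, clip each per-user contribution to bound sensitivity, add calibrated Gaussian noise) might look like the following. All names and the scalar-gradient simplification here are illustrative, not the paper's actual algorithm.

```python
import random

def uedp_noisy_update(user_grads, clip_norm, noise_multiplier, user_rate, rng):
    """Illustrative DP-SGD-style aggregation: sample a fraction of users,
    clip each sampled user's (scalar) gradient to bound sensitivity, then
    add Gaussian noise scaled to the clipping bound."""
    sampled = [g for g in user_grads if rng.random() < user_rate]
    total = 0.0
    for g in sampled:
        norm = abs(g)
        scale = min(1.0, clip_norm / (norm + 1e-12))  # clip to the sensitivity bound
        total += g * scale
    noise = rng.gauss(0.0, noise_multiplier * clip_norm)
    n = max(len(sampled), 1)
    return (total + noise) / n

rng = random.Random(0)
# With noise off and all users sampled, only clipping affects the average.
update = uedp_noisy_update([0.5, -2.0, 1.5], clip_norm=1.0,
                           noise_multiplier=0.0, user_rate=1.0, rng=rng)
```

In the real algorithm the noise scale is derived from the combined user-and-entity sampling, which is what tightens the sensitivity bound.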

* Accepted at IEEE BigData 2022 

Certified Neural Network Watermarks with Randomized Smoothing

Jul 16, 2022
Arpit Bansal, Ping-yeh Chiang, Michael Curry, Rajiv Jain, Curtis Wigington, Varun Manjunatha, John P Dickerson, Tom Goldstein


Watermarking is a commonly used strategy to protect creators' rights to digital images, videos and audio. Recently, watermarking methods have been extended to deep learning models -- in principle, the watermark should be preserved when an adversary tries to copy the model. However, in practice, watermarks can often be removed by an intelligent adversary. Several papers have proposed watermarking methods that claim to be empirically resistant to different types of removal attacks, but these new techniques often fail in the face of new or better-tuned adversaries. In this paper, we propose a certifiable watermarking method. Using the randomized smoothing technique proposed in Chiang et al., we show that our watermark is guaranteed to be unremovable unless the model parameters are changed by more than a certain l2 threshold. In addition to being certifiable, our watermark is also empirically more robust compared to previous watermarking methods. Our experiments can be reproduced with code at https://github.com/arpitbansal297/Certified_Watermarks
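The certified guarantee rests on evaluating the watermark under randomized smoothing of the model parameters. As a toy sketch only (a scalar "model" and a caller-supplied watermark check, both hypothetical), the Monte-Carlo quantity being certified could be estimated like this:

```python
import random

def smoothed_watermark_rate(check_watermark, params, sigma, trials, rng):
    """Monte-Carlo estimate of how often the watermark survives Gaussian
    parameter noise -- the smoothed quantity that randomized smoothing
    turns into a certified l2 robustness radius."""
    hits = 0
    for _ in range(trials):
        noisy = [p + rng.gauss(0.0, sigma) for p in params]
        if check_watermark(noisy):
            hits += 1
    return hits / trials

# Toy "watermark": present while the first parameter stays positive.
params = [5.0, -1.0]
rate = smoothed_watermark_rate(lambda p: p[0] > 0, params, sigma=1.0,
                               trials=200, rng=random.Random(42))
```

If the smoothed rate is high enough, the standard smoothing argument converts it into a radius of parameter change within which the watermark provably persists.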

* ICML 2022  

Unified Pretraining Framework for Document Understanding

Apr 28, 2022
Jiuxiang Gu, Jason Kuen, Vlad I. Morariu, Handong Zhao, Nikolaos Barmpalios, Rajiv Jain, Ani Nenkova, Tong Sun


Document intelligence automates the extraction of information from documents and supports many business applications. Recent self-supervised learning methods on large-scale unlabeled document datasets have opened up promising directions towards reducing annotation efforts by training models with self-supervised objectives. However, most of the existing document pretraining methods are still language-dominated. We present UDoc, a new unified pretraining framework for document understanding. UDoc is designed to support most document understanding tasks, extending the Transformer to take multimodal embeddings as input. Each input element is composed of words and visual features from a semantic region of the input document image. An important feature of UDoc is that it learns a generic representation by making use of three self-supervised losses, encouraging the representation to model sentences, learn similarities, and align modalities. Extensive empirical analysis demonstrates that the pretraining procedure learns better joint representations and leads to improvements in downstream tasks.
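Each UDoc input element pairs the words of a semantic region with that region's visual features. A minimal sketch of that input construction (mean-pooling plus concatenation is an assumption here, not necessarily the paper's exact operator):

```python
def region_embedding(word_vecs, visual_vec):
    """Sketch of a UDoc-style multimodal input element: pool the word
    embeddings of one semantic region and append the region's visual
    features, yielding a single joint input vector."""
    dim = len(word_vecs[0])
    pooled = [sum(v[i] for v in word_vecs) / len(word_vecs) for i in range(dim)]
    return pooled + list(visual_vec)

# Two 2-d word embeddings from one region, plus a 3-d visual feature.
elem = region_embedding([[1.0, 2.0], [3.0, 4.0]], [0.5, 0.5, 0.5])
```

The three self-supervised losses then operate on sequences of such elements rather than on raw tokens.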

* 12 pages, 4 figures, NeurIPS 2021 (Updated Camera Ready) 

MACRONYM: A Large-Scale Dataset for Multilingual and Multi-Domain Acronym Extraction

Feb 19, 2022
Amir Pouran Ben Veyseh, Nicole Meister, Seunghyun Yoon, Rajiv Jain, Franck Dernoncourt, Thien Huu Nguyen


Acronym extraction (AE) is the task of identifying acronyms and their expanded forms in text, and it is necessary for various NLP applications. Despite major progress on this task in recent years, one limitation of existing AE research is that it is limited to the English language and certain domains (i.e., scientific and biomedical). As such, the challenges of AE in other languages and domains remain largely unexplored. The lack of annotated datasets in multiple languages and domains has been a major obstacle hindering research in this area. To address this limitation, we propose a new dataset for multilingual multi-domain AE. Specifically, 27,200 sentences in 6 typologically different languages and 2 domains, i.e., Legal and Scientific, are manually annotated for AE. Our extensive experiments on the proposed dataset show that AE in different languages and different learning settings has unique challenges, emphasizing the necessity of further research on multilingual and multi-domain AE.
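To make the task concrete, here is a deliberately naive rule-based AE baseline (not the paper's annotated-data approach): find a parenthesized acronym and check whether the initials of the preceding words spell it out. It is exactly the kind of English-centric heuristic that breaks down across languages and domains.

```python
import re

def extract_acronym_pairs(text):
    """Toy AE baseline: match parenthesized uppercase acronyms and accept
    them when the preceding words' initials spell the acronym."""
    pairs = []
    for m in re.finditer(r"\(([A-Z]{2,})\)", text):
        acro = m.group(1)
        words = text[:m.start()].split()[-len(acro):]
        if len(words) == len(acro) and all(
                w[0].upper() == c for w, c in zip(words, acro)):
            pairs.append((" ".join(words), acro))
    return pairs

pairs = extract_acronym_pairs("We study natural language processing (NLP) models.")
```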


CLAUSEREC: A Clause Recommendation Framework for AI-aided Contract Authoring

Oct 26, 2021
Vinay Aggarwal, Aparna Garimella, Balaji Vasan Srinivasan, Anandhavelu N, Rajiv Jain


Contracts are a common type of legal document that appears frequently in day-to-day business workflows. However, there has been very limited NLP research on processing such documents, and even less on generating them. Contracts are made up of clauses, and the unique nature of these clauses calls for specific methods to understand and generate such documents. In this paper, we introduce the task of clause recommendation as a first step to aid and accelerate the authoring of contract documents. We propose a two-staged pipeline that first predicts whether a specific clause type is relevant to add to a contract, and then recommends the top clauses of the given type based on the contract context. We pretrain BERT on an existing library of clauses with two additional tasks and use it for our prediction and recommendation. We experiment with classification methods and similarity-based heuristics for clause relevance prediction, and generation-based methods for clause recommendation, and evaluate the results from various methods on several clause types. We provide analyses of the results, and further outline the advantages and limitations of the various methods for this line of research.
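The two-stage shape of the pipeline can be sketched with trivial stand-ins: a keyword test in place of the BERT relevance classifier, and token-overlap ranking in place of the learned recommender. Everything below (function names, the Jaccard scorer, the tiny clause library) is illustrative, not the paper's implementation.

```python
def recommend_clauses(contract_text, clause_library, clause_type, top_k=2):
    """Two-stage sketch: (1) decide whether the clause type is relevant to
    this contract, (2) rank library clauses of that type by token overlap
    with the contract context."""
    context = set(contract_text.lower().split())
    # Stage 1: crude relevance test -- the type's keyword appears in context.
    if clause_type.lower() not in context:
        return []
    # Stage 2: score candidate clauses by Jaccard overlap with the context.
    def score(clause):
        tokens = set(clause.lower().split())
        return len(tokens & context) / len(tokens | context)
    candidates = clause_library.get(clause_type, [])
    return sorted(candidates, key=score, reverse=True)[:top_k]

library = {"termination": ["Either party may terminate this agreement",
                           "Termination requires thirty days notice"]}
recs = recommend_clauses("This agreement covers termination and payment",
                         library, "termination", top_k=1)
```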


SelfDoc: Self-Supervised Document Representation Learning

Jun 07, 2021
Peizhao Li, Jiuxiang Gu, Jason Kuen, Vlad I. Morariu, Handong Zhao, Rajiv Jain, Varun Manjunatha, Hongfu Liu


We propose SelfDoc, a task-agnostic pre-training framework for document image understanding. Because documents are multimodal and are intended for sequential reading, our framework exploits the positional, textual, and visual information of every semantically meaningful component in a document, and it models the contextualization between each block of content. Unlike existing document pre-training models, our model is coarse-grained, taking semantically meaningful blocks rather than individual words as input, thereby avoiding an overly fine-grained representation with excessive contextualization. Beyond that, we introduce cross-modal learning in the model pre-training phase to fully leverage multimodal information from unlabeled documents. For downstream usage, we propose a novel modality-adaptive attention mechanism for multimodal feature fusion by adaptively emphasizing language and vision signals. Our framework benefits from self-supervised pre-training on documents without requiring annotations by a feature masking training strategy. It achieves superior performance on multiple downstream tasks with significantly fewer document images used in the pre-training stage compared to previous works.
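The gist of modality-adaptive fusion, stripped to a scalar-gate sketch: a softmax over per-modality scores decides how strongly language versus vision contributes to the fused feature. The fixed weights and element-wise mixing below are simplifying assumptions; in the model the gate is learned and applied inside attention.

```python
import math

def modality_adaptive_fuse(lang_feat, vis_feat, w_lang, w_vis):
    """Sketch of modality-adaptive fusion: softmax over modality scores
    yields mixing weights, and the fused feature is the weighted sum of
    the language and vision features."""
    e_lang, e_vis = math.exp(w_lang), math.exp(w_vis)
    a_lang = e_lang / (e_lang + e_vis)
    a_vis = 1.0 - a_lang
    return [a_lang * l + a_vis * v for l, v in zip(lang_feat, vis_feat)]

# Equal scores -> equal emphasis on both modalities.
fused = modality_adaptive_fuse([1.0, 0.0], [0.0, 1.0], w_lang=0.0, w_vis=0.0)
```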

* To appear in CVPR'2021 

IGA: An Intent-Guided Authoring Assistant

Apr 14, 2021
Simeng Sun, Wenlong Zhao, Varun Manjunatha, Rajiv Jain, Vlad Morariu, Franck Dernoncourt, Balaji Vasan Srinivasan, Mohit Iyyer


While large-scale pretrained language models have significantly improved writing assistance functionalities such as autocomplete, more complex and controllable writing assistants have yet to be explored. We leverage advances in language modeling to build an interactive writing assistant that generates and rephrases text according to fine-grained author specifications. Users provide input to our Intent-Guided Assistant (IGA) in the form of text interspersed with tags that correspond to specific rhetorical directives (e.g., adding description or contrast, or rephrasing a particular sentence). We fine-tune a language model on a dataset heuristically-labeled with author intent, which allows IGA to fill in these tags with generated text that users can subsequently edit to their liking. A series of automatic and crowdsourced evaluations confirm the quality of IGA's generated outputs, while a small-scale user study demonstrates author preference for IGA over baseline methods in a creative writing task. We release our dataset, code, and demo to spur further research into AI-assisted writing.
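The interface described above (text interspersed with rhetorical-intent tags that the model fills in) can be sketched as follows; the angle-bracket tag syntax and the stub generator are assumptions for illustration, not IGA's actual format or model.

```python
import re

def fill_intent_tags(text, generate):
    """Sketch of the tag-filling interface: each intent tag like <contrast>
    is replaced by text produced by a generator for that intent (here a
    caller-supplied stub standing in for the fine-tuned language model)."""
    return re.sub(r"<(\w+)>", lambda m: generate(m.group(1)), text)

out = fill_intent_tags("The hero won. <contrast> The city fell.",
                       lambda intent: f"[{intent} text]")
```

The user can then edit each filled span, which is the interaction loop the paper evaluates.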

* 13 pages 

Towards Interpreting and Mitigating Shortcut Learning Behavior of NLU models

Mar 18, 2021
Mengnan Du, Varun Manjunatha, Rajiv Jain, Ruchi Deshpande, Franck Dernoncourt, Jiuxiang Gu, Tong Sun, Xia Hu


Recent studies indicate that NLU models are prone to relying on shortcut features for prediction, without achieving true language understanding. As a result, these models fail to generalize to real-world out-of-distribution data. In this work, we show that the words in an NLU training set can be modeled as a long-tailed distribution. We make two findings: 1) NLU models have a strong preference for features located at the head of the long-tailed distribution, and 2) shortcut features are picked up during the very early iterations of model training. These two observations are further employed to formulate a measurement that quantifies the shortcut degree of each training sample. Based on this shortcut measurement, we propose a shortcut mitigation framework, LGTR, to discourage the model from making overconfident predictions for samples with a large shortcut degree. Experimental results on three NLU benchmarks demonstrate that our long-tailed distribution explanation accurately reflects the shortcut learning behavior of NLU models. Experimental analysis further indicates that LGTR can improve generalization accuracy on OOD data while preserving accuracy on in-distribution data.
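A crude, frequency-only sketch of the "shortcut degree" idea: score a sample by the share of its tokens that fall in the head of the corpus word-frequency distribution. The paper's actual measurement also uses model behavior; the head fraction and the pure-frequency definition below are illustrative assumptions.

```python
from collections import Counter

def shortcut_degree(sample_tokens, corpus_tokens, head_fraction=0.1):
    """Illustrative measurement: the fraction of a sample's tokens lying
    in the head of the corpus word-frequency distribution -- a proxy for
    how much the sample exposes head (potential shortcut) features."""
    counts = Counter(corpus_tokens)
    ranked = [w for w, _ in counts.most_common()]
    head = set(ranked[:max(1, int(len(ranked) * head_fraction))])
    if not sample_tokens:
        return 0.0
    return sum(t in head for t in sample_tokens) / len(sample_tokens)

corpus = ["the"] * 50 + ["movie"] * 10 + ["great", "plot", "acting", "score",
                                          "cast", "scene", "shot", "cut"]
# One of the three sample tokens ("the") sits in the distribution head.
deg = shortcut_degree(["the", "movie", "great"], corpus)
```

A mitigation framework like LGTR can then down-weight confidence on samples whose degree is large.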

* Accepted by NAACL 2021 