
Alex Gittens

Improving Neural Ranking Models with Traditional IR Methods

Aug 29, 2023
Anik Saha, Oktie Hassanzadeh, Alex Gittens, Jian Ni, Kavitha Srinivas, Bulent Yener

Neural ranking methods based on large transformer models have recently gained significant attention in the information retrieval community, and have been adopted by major commercial solutions. Nevertheless, they are computationally expensive to create and require a great deal of labeled data for specialized corpora. In this paper, we explore a low-resource alternative, a bag-of-embeddings model for document retrieval, and find that it is competitive with large transformer models fine-tuned on information retrieval tasks. Our results show that a simple combination of TF-IDF, a traditional keyword-matching method, with a shallow embedding model provides a low-cost way to match the performance of complex neural ranking models on three datasets. Furthermore, adding TF-IDF measures improves the performance of large-scale fine-tuned models on these tasks.
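As a rough illustration of the kind of combination the abstract describes, the sketch below ranks documents by a weighted sum of a TF-IDF keyword score and a bag-of-embeddings cosine score. All names, weights, and toy embeddings here are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch: blend TF-IDF keyword matching with a
# bag-of-embeddings similarity. Weights and helper names are assumed.
import math
from collections import Counter

def tfidf_scores(query, docs):
    """Score each doc by the summed TF-IDF weight of the query terms."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))
    scores = []
    for doc in docs:
        tf = Counter(doc)
        scores.append(sum(tf[t] * math.log((1 + n) / (1 + df[t])) for t in query))
    return scores

def embedding_scores(query, docs, emb):
    """Cosine similarity between averaged query and doc embeddings."""
    dim = len(next(iter(emb.values())))
    def avg(tokens):
        vecs = [emb[t] for t in tokens if t in emb]
        return [sum(v[i] for v in vecs) / max(len(vecs), 1) for i in range(dim)]
    def cos(a, b):
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return sum(x * y for x, y in zip(a, b)) / (na * nb or 1.0)
    q = avg(query)
    return [cos(q, avg(doc)) for doc in docs]

def combined_rank(query, docs, emb, alpha=0.5):
    """Rank docs by alpha * TF-IDF + (1 - alpha) * embedding score."""
    t = tfidf_scores(query, docs)
    e = embedding_scores(query, docs, emb)
    combo = [alpha * ti + (1 - alpha) * ei for ti, ei in zip(t, e)]
    return sorted(range(len(docs)), key=lambda i: -combo[i])
```

An `alpha` near 1 recovers pure keyword matching; near 0, pure embedding similarity.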

* Short paper, 4 pages 

A Cross-Domain Evaluation of Approaches for Causal Knowledge Extraction

Aug 07, 2023
Anik Saha, Oktie Hassanzadeh, Alex Gittens, Jian Ni, Kavitha Srinivas, Bulent Yener

Causal knowledge extraction is the task of extracting relevant causes and effects from text by detecting the causal relation. Although this task is important for language understanding and knowledge discovery, recent work in this domain has largely focused on binary classification of a text segment as causal or non-causal. We perform a thorough analysis of three sequence-tagging models for causal knowledge extraction and compare them with a span-based approach to causality extraction. Our experiments show that embeddings from pre-trained language models (e.g., BERT) provide a significant performance boost on this task compared to previous state-of-the-art models with complex architectures. We observe that span-based models outperform simple BERT-based sequence-tagging models across all four data sets, which span diverse domains and different types of cause-effect phrases.
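To make the sequence-tagging formulation concrete, here is a minimal BIO-decoding sketch (illustrative, not the paper's code): a tagger labels each token, and cause/effect phrases are read off from the tag sequence.

```python
# Illustrative BIO decoder: collect (label, phrase) spans such as
# cause and effect phrases from a tagged token sequence.
def decode_bio(tokens, tags):
    spans, cur_label, cur_toks = [], None, []
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if cur_label:
                spans.append((cur_label, " ".join(cur_toks)))
            cur_label, cur_toks = tag[2:], [tok]
        elif tag.startswith("I-") and cur_label == tag[2:]:
            cur_toks.append(tok)
        else:  # "O" tag or an inconsistent "I-" tag closes the span
            if cur_label:
                spans.append((cur_label, " ".join(cur_toks)))
            cur_label, cur_toks = None, []
    if cur_label:
        spans.append((cur_label, " ".join(cur_toks)))
    return spans
```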

Deception by Omission: Using Adversarial Missingness to Poison Causal Structure Learning

May 31, 2023
Deniz Koyuncu, Alex Gittens, Bülent Yener, Moti Yung

Inference of causal structures from observational data is a key component of causal machine learning; in practice, this data may be incompletely observed. Prior work has demonstrated that adversarial perturbations of completely observed training data can force the learning of inaccurate structural causal models (SCMs). However, when the data can be audited for correctness (e.g., it is cryptographically signed by its source), this adversarial mechanism is invalidated. This work introduces a novel attack methodology in which the adversary deceptively omits a portion of the true training data to bias the learned causal structures in a desired direction. Theoretically sound attack mechanisms are derived for the case of arbitrary SCMs, and a sample-efficient learning-based heuristic is given for Gaussian SCMs. Experimental validation on real and synthetic data sets demonstrates the effectiveness of adversarial missingness attacks at deceiving popular causal structure learning algorithms.
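A toy numerical sketch of the underlying idea (illustrative only, not the paper's attack mechanism): if an adversary omits exactly the rows on which X and Y agree, the surviving sample's empirical correlation shrinks, which can hide the edge X → Y from a correlation-based structure learner.

```python
# Toy demonstration: selective omission of rows weakens the apparent
# X-Y dependence. The threshold and noise scale are arbitrary choices.
import random

def correlation(xs, ys):
    """Pearson correlation of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def adversarial_omission(xs, ys, threshold=1.5, noise_scale=0.3):
    """Keep only rows whose residual |y - x| is large, i.e. omit the
    rows where X and Y agree most closely."""
    kept = [(x, y) for x, y in zip(xs, ys)
            if abs(y - x) > noise_scale * threshold]
    return [x for x, _ in kept], [y for _, y in kept]
```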

Reduced Label Complexity For Tight $\ell_2$ Regression

May 12, 2023
Alex Gittens, Malik Magdon-Ismail

Given data ${\rm X}\in\mathbb{R}^{n\times d}$ and labels $\mathbf{y}\in\mathbb{R}^{n}$, the goal is to find $\mathbf{w}\in\mathbb{R}^d$ minimizing $\Vert{\rm X}\mathbf{w}-\mathbf{y}\Vert^2$. We give a polynomial-time algorithm that, \emph{oblivious to $\mathbf{y}$}, throws out $n/(d+\sqrt{n})$ data points and is a $(1+d/n)$-approximation to optimal in expectation. The motivation is tight approximation with reduced label complexity (the number of labels revealed). We reduce label complexity by $\Omega(\sqrt{n})$. Open question: can label complexity be reduced by $\Omega(n)$ while retaining a tight $(1+d/n)$-approximation?

Word Sense Induction with Knowledge Distillation from BERT

Apr 20, 2023
Anik Saha, Alex Gittens, Bulent Yener

Pre-trained contextual language models are ubiquitously employed for language understanding tasks, but are unsuitable for resource-constrained systems. Non-contextual word embeddings are an efficient alternative in these settings. Such methods typically use one vector to encode multiple different meanings of a word, and incur errors due to polysemy. This paper proposes a two-stage method to distill multiple word senses from a pre-trained language model (BERT) by using attention over the senses of a word in a context and transferring this sense information to fit multi-sense embeddings in a skip-gram-like framework. We demonstrate an effective approach to training the sense-disambiguation mechanism in our model with a distribution over word senses extracted from the output-layer embeddings of BERT. Experiments on contextual word similarity and sense induction tasks show that this method is superior to or competitive with state-of-the-art multi-sense embeddings on multiple benchmark data sets, and experiments with an embedding-based topic model (ETM) demonstrate the benefits of using this multi-sense embedding in a downstream application.
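The sense-disambiguation mechanism can be pictured as softmax attention over a word's sense vectors given a context vector. The sketch below is a minimal illustration with assumed shapes and names, not the paper's model.

```python
# Illustrative sketch: softmax attention over sense vectors. The
# attention weights select which sense of the word the context evokes,
# and the weighted mixture serves as the disambiguated embedding.
import math

def attend_senses(sense_vecs, context_vec):
    """Return (attention weights over senses, mixed sense embedding)."""
    scores = [sum(s * c for s, c in zip(sv, context_vec)) for sv in sense_vecs]
    m = max(scores)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(context_vec)
    mixed = [sum(w * sv[i] for w, sv in zip(weights, sense_vecs))
             for i in range(dim)]
    return weights, mixed
```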

Simple Disentanglement of Style and Content in Visual Representations

Feb 20, 2023
Lilian Ngweta, Subha Maity, Alex Gittens, Yuekai Sun, Mikhail Yurochkin

Learning visual representations with interpretable features, i.e., disentangled representations, remains a challenging problem. Existing methods demonstrate some success but are hard to apply to large-scale vision datasets like ImageNet. In this work, we propose a simple post-processing framework to disentangle content and style in learned representations from pre-trained vision models. We model the pre-trained features probabilistically as linearly entangled combinations of the latent content and style factors and develop a simple disentanglement algorithm based on the probabilistic model. We show that the method provably disentangles content and style features and verify its efficacy empirically. Our post-processed features yield significant domain generalization performance improvements when the distribution shift occurs due to style changes or style-related spurious correlations.
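As a loose illustration of the linear-entanglement view (not the paper's actual algorithm), if a style direction in feature space were known, content features could be obtained by projecting it out:

```python
# Hypothetical sketch: remove a known style direction from linearly
# entangled features by projecting onto its orthogonal complement.
def project_out(features, style_dir):
    """Remove each feature vector's component along style_dir."""
    norm2 = sum(s * s for s in style_dir)
    out = []
    for f in features:
        coef = sum(a * b for a, b in zip(f, style_dir)) / norm2
        out.append([a - coef * b for a, b in zip(f, style_dir)])
    return out
```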

Output Randomization: A Novel Defense for both White-box and Black-box Adversarial Models

Jul 08, 2021
Daniel Park, Haidar Khan, Azer Khan, Alex Gittens, Bülent Yener

Adversarial examples pose a threat to deep neural network models in a variety of scenarios, from "white-box" settings, in which the adversary has complete knowledge of the model, to "black-box" settings, in which it has none. In this paper, we explore the use of output randomization as a defense against attacks in both settings and propose two defenses. First, we propose output randomization at test time to thwart finite-difference attacks in black-box settings. Since this type of attack relies on repeated queries to the model to estimate gradients, we investigate the use of randomization to prevent such adversaries from successfully creating adversarial examples. We empirically show that this defense can limit the success rate of a black-box adversary using the Zeroth Order Optimization attack to 0%. Second, we propose output randomization training as a defense against white-box adversaries. Unlike prior approaches that use randomization, our defense does not require it at test time, eliminating the Backward Pass Differentiable Approximation attack, which was shown to be effective against other randomization defenses. Additionally, this defense has low overhead and is easily implemented, allowing it to be used together with other defenses across various model architectures. We evaluate output randomization training against the Projected Gradient Descent attacker and show that the defense can reduce the PGD attack's success rate to 12% when using cross-entropy loss.
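A simplified sketch of the test-time variant (illustrative; the noise model and scale are assumptions, not the paper's exact scheme): Gaussian noise added to the model's probability outputs corrupts the finite-difference gradient estimates a black-box attacker relies on, while small noise usually preserves the argmax prediction.

```python
# Illustrative sketch: randomize a probability vector at query time.
# Noise perturbs each class probability, then the vector is clipped to
# be non-negative and renormalized to sum to one.
import random

def randomized_output(probs, sigma=0.01, rng=random):
    noisy = [max(p + rng.gauss(0, sigma), 0.0) for p in probs]
    z = sum(noisy) or 1.0
    return [p / z for p in noisy]
```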

* This is a substantially changed version of an earlier preprint (arXiv:1905.09871) 

Reading StackOverflow Encourages Cheating: Adding Question Text Improves Extractive Code Generation

Jun 08, 2021
Gabriel Orlanski, Alex Gittens

Answering a programming question using only its title is difficult, as salient contextual information is omitted. Based on this observation, we present a corpus of over 40,000 StackOverflow question texts to be used in conjunction with their corresponding intents from the CoNaLa dataset (Yin et al., 2018). Using both the intent and the question body, we use BART to establish a baseline BLEU score of 34.35 for this new task. We find a further improvement of $2.8\%$ by combining the mined CoNaLa data with the labeled data, achieving a 35.32 BLEU score. We evaluate prior state-of-the-art CoNaLa models with this additional data and find that our proposed method of using the body and mined data beats the BLEU score of the prior state of the art by $71.96\%$. Finally, we perform ablations demonstrating that BART is an unsupervised multimodal learner and examine its extractive behavior. The code and data can be found at https://github.com/gabeorlanski/stackoverflow-encourages-cheating.
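A hypothetical sketch of the input construction the paper argues for, pairing each CoNaLa intent with its StackOverflow question body; the separator token and function name are assumptions, not taken from the paper's code.

```python
# Hypothetical preprocessing sketch: build a seq2seq input string from
# an intent (question title) plus the question body.
def build_input(intent, body, sep="</s>"):
    return f"{intent} {sep} {body.strip()}"
```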

* To be published in ACL-IJCNLP NLP4Prog workshop. (The First Workshop on Natural Language Processing for Programming) 

Learning Fair Canonical Polyadical Decompositions using a Kernel Independence Criterion

Apr 27, 2021
Kevin Kim, Alex Gittens

This work proposes to learn fair low-rank tensor decompositions by regularizing the Canonical Polyadic Decomposition factorization with the kernel Hilbert-Schmidt independence criterion (KHSIC). It is shown, theoretically and empirically, that a small KHSIC between a latent factor and the sensitive features guarantees approximate statistical parity. The proposed algorithm surpasses the state-of-the-art algorithm, FATR (Zhu et al., 2018), in controlling the trade-off between fairness and residual fit on synthetic and real data sets.
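For reference, the empirical HSIC used as the independence penalty is $\mathrm{HSIC}(K, L) = \mathrm{tr}(KHLH)/(n-1)^2$ with centering matrix $H = I - \mathbf{1}\mathbf{1}^{\top}/n$. The sketch below computes it with linear kernels for brevity; this is a generic illustration of the criterion, not the paper's code.

```python
# Illustrative empirical HSIC with linear kernels: a small value
# indicates approximate independence between the two sets of vectors.
def hsic(xs, ys):
    n = len(xs)
    K = [[sum(a * b for a, b in zip(xs[i], xs[j])) for j in range(n)]
         for i in range(n)]
    L = [[sum(a * b for a, b in zip(ys[i], ys[j])) for j in range(n)]
         for i in range(n)]
    def center(M):
        # Double centering: M[i][j] - rowmean_i - colmean_j + grandmean
        row = [sum(r) / n for r in M]
        col = [sum(M[i][j] for i in range(n)) / n for j in range(n)]
        tot = sum(row) / n
        return [[M[i][j] - row[i] - col[j] + tot for j in range(n)]
                for i in range(n)]
    Kc, Lc = center(K), center(L)
    # trace(K H L H) equals trace(Kc @ Lc) since H is idempotent
    tr = sum(Kc[i][j] * Lc[j][i] for i in range(n) for j in range(n))
    return tr / (n - 1) ** 2
```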

NoisyCUR: An algorithm for two-cost budgeted matrix completion

Apr 16, 2021
Dong Hu, Alex Gittens, Malik Magdon-Ismail

Matrix completion is a ubiquitous tool in machine learning and data analysis. Most work in this area has focused on the number of observations necessary to obtain an accurate low-rank approximation. In practice, however, the cost of observations is an important limiting factor, and experimentalists may have on hand multiple modes of observation with differing noise-vs-cost trade-offs. This paper considers matrix completion subject to such constraints: a budget is imposed, and the experimentalist's goal is to allocate it between two sampling modalities so as to recover an accurate low-rank approximation. Specifically, we assume it is possible to obtain either low-noise, high-cost observations of individual entries or high-noise, low-cost observations of entire columns. We introduce a regression-based completion algorithm for this setting and experimentally verify its performance on both synthetic and real data sets. When the budget is low, our algorithm outperforms standard completion algorithms; when the budget is high, it achieves error comparable to standard nuclear-norm completion algorithms while requiring much less computational effort.
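A toy rank-1 illustration of the regression idea (not the paper's general algorithm): a noisily observed full column spans the column space, and each remaining column is recovered by regressing its few high-precision sampled entries onto that column.

```python
# Illustrative rank-1 completion: fit a scalar coefficient by least
# squares from a handful of sampled entries, then fill in the column.
def complete_column(basis_col, sampled):
    """basis_col: observed full column; sampled: {row_index: value}."""
    num = sum(basis_col[i] * v for i, v in sampled.items())
    den = sum(basis_col[i] ** 2 for i in sampled)
    beta = num / den  # least-squares coefficient on the basis column
    return [beta * b for b in basis_col]
```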
