Dingli Yu

New Definitions and Evaluations for Saliency Methods: Staying Intrinsic, Complete and Sound

Nov 05, 2022
Arushi Gupta, Nikunj Saunshi, Dingli Yu, Kaifeng Lyu, Sanjeev Arora


Saliency methods compute heat maps that highlight the portions of an input that were most important for the label a deep net assigned to it. Evaluations of saliency methods convert this heat map into a new masked input by retaining the $k$ highest-ranked pixels of the original input and replacing the rest with "uninformative" pixels, then check whether the net's output is mostly unchanged. This is usually read as an explanation of the output, but the current paper highlights reasons why this inference of causality may be suspect. Inspired by the logic concepts of completeness and soundness, it observes that the above type of evaluation focuses on the completeness of the explanation but ignores its soundness. New evaluation metrics are introduced to capture both notions while staying in an intrinsic framework, i.e., using the dataset and the net but no separately trained nets, human evaluations, etc. A simple saliency method is described that matches or outperforms prior methods in these evaluations. Experiments also suggest new intrinsic, soundness-based justifications for popular heuristic tricks such as TV regularization and upsampling.

* NeurIPS 2022 (Oral) 
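
The evaluation protocol described in the abstract is easy to state in code. Below is a minimal Python/PyTorch sketch (an illustration, not the paper's implementation; the names net and baseline, and the channel-summed pixel ranking, are assumptions): keep the $k$ highest-ranked pixels of the input, replace the rest with an "uninformative" baseline value, and check whether the net's predicted label survives.

import torch

def masked_input_eval(net, x, saliency, k, baseline=0.0):
    """x, saliency: tensors of shape (C, H, W); k: number of pixels to keep."""
    # Rank pixels by saliency, summed over channels.
    scores = saliency.abs().sum(dim=0).flatten()                 # (H*W,)
    keep = torch.zeros_like(scores, dtype=torch.bool)
    keep[scores.topk(k).indices] = True
    keep = keep.view(1, *saliency.shape[1:])                     # (1, H, W)

    # Masked input: original pixels where kept, baseline elsewhere.
    x_masked = torch.where(keep, x, torch.full_like(x, baseline))

    with torch.no_grad():
        orig_pred = net(x.unsqueeze(0)).argmax(dim=1)
        masked_pred = net(x_masked.unsqueeze(0)).argmax(dim=1)
    # A completeness-style check: does the masked input retain the label?
    return (orig_pred == masked_pred).item()

In the paper's terminology, a check of this kind speaks to completeness; the new metrics are designed to also probe soundness.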

A Kernel-Based View of Language Model Fine-Tuning

Oct 11, 2022
Sadhika Malladi, Alexander Wettig, Dingli Yu, Danqi Chen, Sanjeev Arora


It has become standard to solve NLP tasks by fine-tuning pre-trained language models (LMs), especially in low-data settings. There is minimal theoretical understanding of this empirical success, e.g., why fine-tuning a model with $10^8$ or more parameters on a couple dozen training points does not result in overfitting. We investigate whether the Neural Tangent Kernel (NTK), which originated as a model for studying the gradient descent dynamics of infinitely wide networks with suitable random initialization, describes fine-tuning of pre-trained LMs. This study was inspired by the decent performance of the NTK on computer vision tasks (Wei et al., 2022). We also extend the NTK formalism to fine-tuning with Adam. We present extensive experiments showing that once the downstream task is formulated as a language modeling problem through prompting, the NTK lens can often reasonably describe the model updates during fine-tuning with both SGD and Adam. This kernel view also suggests an explanation for the success of parameter-efficient subspace-based fine-tuning methods. Finally, we suggest a path toward a formal explanation of our findings via Tensor Programs (Yang, 2020).

* Code and pre-computed kernels are publicly available at https://github.com/princeton-nlp/LM-Kernel-FT 
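
For intuition about the kernel view, here is a minimal sketch of an empirical NTK entry at the pre-trained initialization (an illustration under stated assumptions, separate from the released kernels linked above; scalar_output_fn is a hypothetical helper standing in for a prompted, verbalizer-style scalar output of the LM): the kernel value for a pair of examples is the inner product of the gradients of that scalar output with respect to the trainable parameters.

import torch

def empirical_ntk_entry(model, scalar_output_fn, x1, x2):
    # scalar_output_fn(model, x) -> scalar tensor, e.g. the logit of a label
    # word under a prompted formulation of the task (hypothetical helper).
    params = [p for p in model.parameters() if p.requires_grad]
    g1 = torch.autograd.grad(scalar_output_fn(model, x1), params)
    g2 = torch.autograd.grad(scalar_output_fn(model, x2), params)
    # NTK entry: inner product of the two parameter gradients.
    return sum((a * b).sum() for a, b in zip(g1, g2)).item()

Kernel regression with the Gram matrix of such entries is the kind of object the kernel lens compares against actual fine-tuning; per the abstract, the formalism is also extended to Adam.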

Enhanced Convolutional Neural Tangent Kernels

Nov 03, 2019
Zhiyuan Li, Ruosong Wang, Dingli Yu, Simon S. Du, Wei Hu, Ruslan Salakhutdinov, Sanjeev Arora


Recent research shows that for training with $\ell_2$ loss, convolutional neural networks (CNNs) whose width (number of channels in convolutional layers) goes to infinity correspond to regression with respect to the CNN Gaussian Process kernel (CNN-GP) if only the last layer is trained, and to regression with respect to the Convolutional Neural Tangent Kernel (CNTK) if all layers are trained. An exact algorithm to compute the CNTK (Arora et al., 2019) yielded the finding that the classification accuracy of the CNTK on CIFAR-10 is within 6-7% of that of the corresponding CNN architecture (the best figure being around 78%), which is interesting performance for a fixed kernel. Here we show how to significantly enhance the performance of these kernels using two ideas. (1) Modifying the kernel using a new operation called Local Average Pooling (LAP), which preserves efficient computability of the kernel and inherits the spirit of standard data augmentation using pixel shifts. Earlier papers were unable to incorporate naive data augmentation because of the quadratic training cost of kernel regression. This idea is inspired by Global Average Pooling (GAP), which we show is equivalent to full translation data augmentation for CNN-GP and CNTK. (2) Representing the input image using a pre-processing technique proposed by Coates et al. (2011), which uses a single convolutional layer composed of random image patches. On CIFAR-10, the resulting kernel, CNN-GP with LAP and horizontal flip data augmentation, achieves 89% accuracy, matching the performance of AlexNet (Krizhevsky et al., 2012). Note that this is the best such result we know of for a classifier that is not a trained neural network. Similar improvements are obtained for Fashion-MNIST.
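
Of the two ideas, the Coates et al. (2011) preprocessing is the easiest to sketch concretely. The following Python/PyTorch snippet is an assumption-laden illustration, not the authors' code; the patch size, filter count, and ReLU nonlinearity are placeholder choices. It builds a single convolutional layer whose filters are random patches cropped from training images and returns its responses as the representation fed to the kernel.

import torch
import torch.nn.functional as F

def random_patch_features(images, num_filters=256, patch_size=6, seed=0):
    # images: (N, C, H, W) tensor of training images.
    g = torch.Generator().manual_seed(seed)
    n, c, h, w = images.shape
    # Sample random crops from random images to act as convolutional filters.
    idx = torch.randint(0, n, (num_filters,), generator=g).tolist()
    ys = torch.randint(0, h - patch_size + 1, (num_filters,), generator=g).tolist()
    xs = torch.randint(0, w - patch_size + 1, (num_filters,), generator=g).tolist()
    filters = torch.stack([images[i, :, y:y + patch_size, x:x + patch_size]
                           for i, y, x in zip(idx, ys, xs)])
    # One convolutional layer of random-patch filters followed by a nonlinearity.
    return F.relu(F.conv2d(images, filters))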


Harnessing the Power of Infinitely Wide Deep Nets on Small-data Tasks

Oct 27, 2019
Sanjeev Arora, Simon S. Du, Zhiyuan Li, Ruslan Salakhutdinov, Ruosong Wang, Dingli Yu


Recent research shows that the following two models are equivalent: (a) infinitely wide neural networks (NNs) trained under $\ell_2$ loss by gradient descent with infinitesimally small learning rate, and (b) kernel regression with respect to so-called Neural Tangent Kernels (NTKs) (Jacot et al., 2018). An efficient algorithm to compute the NTK, as well as its convolutional counterparts, appears in Arora et al. (2019a), which allowed studying the performance of infinitely wide nets on datasets like CIFAR-10. However, the super-quadratic running time of kernel methods makes them best suited for small-data tasks. We report results suggesting that neural tangent kernels perform strongly on low-data tasks. 1. On a standard testbed of classification/regression tasks from the UCI database, NTK SVM beats the previous gold standard, Random Forests (RF), and also the corresponding finite nets. 2. On CIFAR-10 with 10 - 640 training samples, Convolutional NTK consistently beats ResNet-34 by 1% - 3%. 3. On the VOC07 testbed for few-shot image classification tasks on ImageNet with transfer learning (Goyal et al., 2019), replacing the linear SVM currently used with a Convolutional NTK SVM consistently improves performance. 4. Comparing the performance of the NTK with the finite-width net it was derived from, NTK behavior starts at lower net widths than suggested by theoretical analysis (Arora et al., 2019a). The NTK's efficacy may trace to the lower variance of its output.

* Code for UCI experiments: https://github.com/LeoYu/neural-tangent-kernel-UCI 
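
As a concrete picture of result 1, the "NTK SVM" setup amounts to plugging a precomputed NTK Gram matrix into a standard SVM. A minimal scikit-learn sketch follows; computing the NTK matrices themselves is outside this snippet (see the linked repository for the authors' code), and ntk_train / ntk_test are placeholder names.

from sklearn.svm import SVC

def ntk_svm_predict(ntk_train, y_train, ntk_test, C=1.0):
    # ntk_train: (n_train, n_train) Gram matrix of NTK values between training points.
    # ntk_test:  (n_test, n_train) NTK values between test and training points.
    clf = SVC(kernel="precomputed", C=C)
    clf.fit(ntk_train, y_train)
    return clf.predict(ntk_test)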

Understanding Generalization of Deep Neural Networks Trained with Noisy Labels

May 29, 2019
Wei Hu, Zhiyuan Li, Dingli Yu


Over-parameterized deep neural networks trained by simple first-order methods are known to be able to fit any labeling of the data. Such over-fitting ability hinders generalization when mislabeled training examples are present. On the other hand, simple regularization methods like early stopping seem to help generalization considerably in these scenarios. This paper makes progress towards theoretically explaining the generalization of over-parameterized deep neural networks trained with noisy labels. Two simple regularization methods are analyzed: (i) regularization by the distance of the network parameters from their initialization, and (ii) adding a trainable auxiliary variable to the network output for each training example. Theoretically, we prove that gradient descent training with either of these two methods leads to a generalization guarantee on the true data distribution despite training on noisy labels. The generalization bound is independent of the network size and is comparable to the bound one can get when there is no label noise. Empirical results verify the effectiveness of these methods on noisily labeled datasets.
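
Each of the two regularizers is a one-line addition to a standard training loss. The hedged PyTorch sketch below shows both in a single objective for illustration (the paper analyzes them separately); the squared loss, the coefficient lam, and the per-example auxiliary parameter aux are assumptions about the setup, not the paper's code.

import copy
import torch

def regularized_loss(net, init_net, aux, x, y, idx, lam=1e-3):
    # aux: nn.Parameter of shape (n_train,); idx: indices of the examples in this batch.
    pred = net(x).squeeze(-1) + aux[idx]                  # method (ii): trainable auxiliary output
    fit = ((pred - y) ** 2).mean()
    dist = sum(((p - p0.detach()) ** 2).sum()             # method (i): distance to initialization
               for p, p0 in zip(net.parameters(), init_net.parameters()))
    return fit + lam * dist

# Typical setup: init_net = copy.deepcopy(net) frozen at initialization, and
# aux = torch.nn.Parameter(torch.zeros(n_train)) optimized jointly with net.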
