Stefano Soatto

Meaning Representations from Trajectories in Autoregressive Models

Nov 02, 2023
Tian Yu Liu, Matthew Trager, Alessandro Achille, Pramuditha Perera, Luca Zancato, Stefano Soatto

We propose to extract meaning representations from autoregressive language models by considering the distribution of all possible trajectories extending an input text. This strategy is prompt-free, does not require fine-tuning, and is applicable to any pre-trained autoregressive model. Moreover, unlike vector-based representations, distribution-based representations can also model asymmetric relations (e.g., direction of logical entailment, hypernym/hyponym relations) by using algebraic operations between likelihood functions. These ideas are grounded in distributional perspectives on semantics and are connected to standard constructions in automata theory, but to our knowledge they have not been applied to modern language models. We empirically show that the representations obtained from large models align well with human annotations, outperform other zero-shot and prompt-free methods on semantic similarity tasks, and can be used to solve more complex entailment and containment tasks that standard embeddings cannot handle. Finally, we extend our method to represent data from different modalities (e.g., image and text) using multimodal autoregressive models.
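To make the trajectory-based representation concrete, here is a minimal sketch (my estimator and helper names, not necessarily the paper's exact construction) using an off-the-shelf causal language model from Hugging Face `transformers`: a text is represented by continuations ("trajectories") sampled from it, and an asymmetric score measures how well another text explains those same trajectories.

```python
# Minimal sketch: represent a text by the likelihood the model assigns to sampled
# continuations ("trajectories"), and compare two texts asymmetrically by scoring
# one text's trajectories under the other prefix.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

@torch.no_grad()
def sample_trajectories(prefix, n=8, max_new_tokens=20):
    """Sample n continuations of `prefix` from the model."""
    ids = tok(prefix, return_tensors="pt").input_ids
    out = model.generate(ids, do_sample=True, num_return_sequences=n,
                         max_new_tokens=max_new_tokens,
                         pad_token_id=tok.eos_token_id)
    return [seq[ids.shape[1]:] for seq in out]   # keep only the generated tokens

@torch.no_grad()
def log_likelihood(prefix, continuation_ids):
    """Log-probability of a fixed continuation given `prefix`."""
    prefix_ids = tok(prefix, return_tensors="pt").input_ids[0]
    full = torch.cat([prefix_ids, continuation_ids]).unsqueeze(0)
    logp = torch.log_softmax(model(full).logits[0, :-1], dim=-1)  # next-token log-probs
    targets = full[0, 1:]
    start = prefix_ids.shape[0] - 1               # score only the continuation part
    return logp[start:].gather(1, targets[start:, None]).sum().item()

def asymmetric_score(a, b, n=8):
    """How well b 'explains' trajectories sampled from a (not symmetric in a, b)."""
    trajs = sample_trajectories(a, n=n)
    return sum(log_likelihood(b, t) - log_likelihood(a, t) for t in trajs) / n

print(asymmetric_score("A dog is an animal.", "An animal is a dog."))
```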

AugUndo: Scaling Up Augmentations for Unsupervised Depth Completion

Oct 15, 2023
Yangchao Wu, Tian Yu Liu, Hyoungseob Park, Stefano Soatto, Dong Lao, Alex Wong

Unsupervised depth completion methods are trained by minimizing sparse depth and image reconstruction error. Block artifacts from resampling, intensity saturation, and occlusions are amongst the many undesirable by-products of common data augmentation schemes that affect image reconstruction quality, and thus the training signal. Hence, typical image augmentations that are viewed as essential to training pipelines in other vision tasks have seen limited use beyond small intensity changes and flipping. The sparse depth modality has seen even less use, since intensity transformations alter the scale of the 3D scene and geometric transformations may decimate the sparse points during resampling. We propose a method that unlocks a wide range of previously infeasible geometric augmentations for unsupervised depth completion. This is achieved by reversing, or "undo"-ing, the geometric transformations on the coordinates of the output depth, warping the depth map back to the original reference frame. This enables computing the reconstruction losses using the original images and sparse depth maps, eliminating the pitfalls of naive loss computation on the augmented inputs. This simple yet effective strategy allows us to scale up augmentations to boost performance. We demonstrate our method on indoor (VOID) and outdoor (KITTI) datasets, where we improve upon three existing methods by an average of 10.4% across both datasets.
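As an illustration of the "undo" step, the following sketch (my simplification to flips only; the `model` and `loss_fn` signatures are placeholders) augments the inputs, predicts depth in the augmented frame, then inverts the transformation on the predicted depth so the reconstruction losses are computed against the original image and sparse depth.

```python
# Sketch of the "undo" idea, restricted to flips for clarity: augment the inputs,
# predict depth on the augmented view, then map the *output* depth back to the
# original reference frame before computing the reconstruction losses.
import torch

def augundo_step(model, image, sparse_depth, loss_fn):
    # image: (B, 3, H, W), sparse_depth: (B, 1, H, W)
    do_hflip = torch.rand(()) < 0.5
    do_vflip = torch.rand(()) < 0.5

    img_aug, sd_aug = image, sparse_depth
    if do_hflip:
        img_aug, sd_aug = img_aug.flip(-1), sd_aug.flip(-1)
    if do_vflip:
        img_aug, sd_aug = img_aug.flip(-2), sd_aug.flip(-2)

    pred = model(img_aug, sd_aug)     # depth predicted in the augmented frame

    # "Undo": warp the prediction back to the original reference frame.
    if do_vflip:
        pred = pred.flip(-2)
    if do_hflip:
        pred = pred.flip(-1)

    # The loss uses the ORIGINAL image and sparse depth, so augmentation artifacts
    # never enter the photometric / sparse-depth reconstruction terms.
    return loss_fn(pred, image, sparse_depth)
```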

Sub-token ViT Embedding via Stochastic Resonance Transformers

Oct 06, 2023
Dong Lao, Yangchao Wu, Tian Yu Liu, Alex Wong, Stefano Soatto

We discover the presence of quantization artifacts in Vision Transformers (ViTs), which arise due to the image tokenization step inherent in these architectures. These artifacts result in coarsely quantized features, which negatively impact performance, especially on downstream dense prediction tasks. We present a zero-shot method to improve how pre-trained ViTs handle spatial quantization. In particular, we propose to ensemble the features obtained from perturbing input images via sub-token spatial translations, inspired by Stochastic Resonance, a method traditionally applied to climate dynamics and signal processing. We term our method the "Stochastic Resonance Transformer" (SRT), which we show can effectively super-resolve features of pre-trained ViTs, capturing more of the local fine-grained structures that might otherwise be neglected as a result of tokenization. SRT can be applied at any layer, on any task, and does not require any fine-tuning. The advantage of operating on features rather than outputs is evident in monocular depth prediction, where we show that ensembling model outputs is detrimental, while applying SRT to intermediate ViT features outperforms the baseline models by an average of 4.7% and 14.9% on the RMSE and RMSE-log metrics across three different architectures. When applied to semi-supervised video object segmentation, SRT also improves over the baseline models uniformly across all metrics, and by an average of 2.4% in F&J score. We further show that these quantization artifacts can be attenuated to some extent via self-distillation. On unsupervised salient region segmentation, SRT improves upon the base model by an average of 2.1% on the maxF metric. Finally, despite operating purely on pixel-level features, SRT generalizes to non-dense prediction tasks such as image retrieval and object discovery, yielding consistent improvements of up to 2.6% and 1.0%, respectively.
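A minimal sketch of the feature-ensembling idea follows (my variable names; `vit_features` stands in for any frozen ViT returning a patch-feature map, and circular shifts via `torch.roll` are a simplification of sub-token translation): shift the image by sub-patch offsets, extract features, upsample to pixel resolution, shift back, and average.

```python
# Sketch: ensemble features from sub-token translations of the input to
# super-resolve the token-quantized feature map of a frozen ViT.
import torch
import torch.nn.functional as F

@torch.no_grad()
def stochastic_resonance_features(vit_features, image, patch_size=16, offsets=None):
    # vit_features(img) is assumed to return a feature map of shape (B, C, H/ps, W/ps)
    B, _, H, W = image.shape
    if offsets is None:  # a few sub-patch shifts, e.g. every 4 pixels
        offsets = [(dy, dx) for dy in range(0, patch_size, 4)
                            for dx in range(0, patch_size, 4)]
    acc = None
    for dy, dx in offsets:
        shifted = torch.roll(image, shifts=(dy, dx), dims=(-2, -1))
        feats = vit_features(shifted)                         # (B, C, H/ps, W/ps)
        feats = F.interpolate(feats, size=(H, W), mode="bilinear",
                              align_corners=False)            # back to pixel grid
        feats = torch.roll(feats, shifts=(-dy, -dx), dims=(-2, -1))   # un-shift
        acc = feats if acc is None else acc + feats
    return acc / len(offsets)                                 # (B, C, H, W)
```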

Critical Learning Periods Emerge Even in Deep Linear Networks

Aug 23, 2023
Michael Kleinman, Alessandro Achille, Stefano Soatto

Critical learning periods are periods early in development where temporary sensory deficits can have a permanent effect on behavior and learned representations. Despite the radical differences between biological and artificial networks, critical learning periods have been empirically observed in both systems. This suggests that critical periods may be fundamental to learning and not an accident of biology. Yet, why exactly critical periods emerge in deep networks is still an open question, and in particular it is unclear whether the critical periods observed in both systems depend on particular architectural or optimization details. To isolate the key underlying factors, we focus on deep linear network models, and show that, surprisingly, such networks also display much of the behavior seen in biology and artificial networks, while being amenable to analytical treatment. We show that critical periods depend on the depth of the model and structure of the data distribution. We also show analytically and in simulations that the learning of features is tied to competition between sources. Finally, we extend our analysis to multi-task learning to show that pre-training on certain tasks can damage the transfer performance on new tasks, and show how this depends on the relationship between tasks and the duration of the pre-training stage. To the best of our knowledge, our work provides the first analytically tractable model that sheds light on why critical learning periods emerge in biological and artificial networks.
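The deficit-window setup is easy to reproduce in a toy experiment. The sketch below (my setup, not the authors' experiments) trains a two-layer linear network on data with two input "sources", zeroes out the second source for the first `deficit` steps, and reports how much of the source-2 component of the target map is recovered by the end of training for different deficit lengths.

```python
# Toy simulation: a deep linear network y = W2 @ W1 @ x trained on two input
# "sources"; source 2 is removed (zeroed) for the first `deficit` steps.
import torch

def run(deficit, total_steps=5000, lr=0.01, d=20):
    torch.manual_seed(0)
    W1 = (0.1 * torch.randn(d, 2 * d)).requires_grad_()
    W2 = (0.1 * torch.randn(1, d)).requires_grad_()
    teacher = torch.randn(1, 2 * d)           # ground-truth linear map
    opt = torch.optim.SGD([W1, W2], lr=lr)
    for step in range(total_steps):
        x = torch.randn(64, 2 * d)
        if step < deficit:
            x[:, d:] = 0.0                    # "sensory deficit": remove source 2
        y = x @ teacher.t()
        loss = ((x @ W1.t() @ W2.t() - y) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    # alignment of the end-to-end map with the source-2 part of the teacher
    effective = (W2 @ W1)[0, d:]
    return torch.nn.functional.cosine_similarity(effective, teacher[0, d:], dim=0).item()

for deficit in [0, 1000, 2000, 3000, 4000]:
    print(f"deficit={deficit:5d}  source-2 recovery={run(deficit):.3f}")
```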

Training Data Protection with Compositional Diffusion Models

Aug 02, 2023
Aditya Golatkar, Alessandro Achille, Ashwin Swaminathan, Stefano Soatto

We introduce Compartmentalized Diffusion Models (CDM), a method to train different diffusion models (or prompts) on distinct data sources and arbitrarily compose them at inference time. The individual models can be trained in isolation, at different times, and on different distributions and domains, and can later be composed to achieve performance comparable to a paragon model trained on all data simultaneously. Furthermore, each model only contains information about the subset of the data it was exposed to during training, enabling several forms of training data protection. In particular, CDMs are the first method to enable both selective forgetting and continual learning for large-scale diffusion models, and allow serving customized models based on the user's access rights. CDMs also make it possible to determine the importance of a subset of the data in generating particular samples.
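The composition step can be sketched as follows (my simplified composition rule of weighted noise averaging inside standard DDPM ancestral sampling; the paper's exact weighting may differ, and each model's `(x_t, t)` signature is a placeholder). Each component model is trained on its own data shard, and "forgetting" a shard amounts to dropping its model from the list, with no retraining of the others.

```python
# Sketch: sample from a set of independently trained diffusion models by
# combining their noise predictions at every denoising step.
import torch

@torch.no_grad()
def compose_eps(models, x_t, t, weights=None):
    """Weighted combination of the noise predictions eps_theta(x_t, t)."""
    if weights is None:
        weights = [1.0 / len(models)] * len(models)
    eps = torch.zeros_like(x_t)
    for w, m in zip(weights, models):
        eps = eps + w * m(x_t, t)
    return eps

@torch.no_grad()
def ddpm_sample(models, shape, alphas_cumprod, device="cpu"):
    """Standard DDPM ancestral sampling with the composed noise prediction."""
    shifted = torch.cat([torch.ones(1, device=device), alphas_cumprod[:-1]])
    betas = 1.0 - alphas_cumprod / shifted          # beta_t = 1 - alpha_t
    x = torch.randn(shape, device=device)
    for t in reversed(range(len(alphas_cumprod))):
        a_bar, alpha = alphas_cumprod[t], 1.0 - betas[t]
        eps = compose_eps(models, x, torch.full((shape[0],), t, device=device))
        mean = (x - betas[t] / (1 - a_bar).sqrt() * eps) / alpha.sqrt()
        x = mean + (betas[t].sqrt() * torch.randn_like(x) if t > 0 else 0.0)
    return x
```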

Tangent Transformers for Composition, Privacy and Removal

Jul 20, 2023
Tian Yu Liu, Aditya Golatkar, Stefano Soatto

We introduce Tangent Attention Fine-Tuning (TAFT), a method for fine-tuning linearized transformers obtained by computing a First-order Taylor Expansion around a pre-trained initialization. We show that the Jacobian-Vector Product resulting from linearization can be computed efficiently in a single forward pass, reducing training and inference cost to the same order of magnitude as its original non-linear counterpart, while using the same number of parameters. Furthermore, we show that, when applied to various downstream visual classification tasks, the resulting Tangent Transformer fine-tuned with TAFT can perform comparably with fine-tuning the original non-linear network. Since Tangent Transformers are linear with respect to the new set of weights, and the resulting fine-tuning loss is convex, we show that TAFT enjoys several advantages compared to non-linear fine-tuning when it comes to model composition, parallel training, machine unlearning, and differential privacy.
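A minimal sketch of the tangent (linearized) forward pass using `torch.func` is shown below (my wrapper names; TAFT's training recipe is not reproduced): the tangent model is f_lin(x; w) = f(x; w0) + J_w f(x; w0)(w - w0), and both terms come out of a single Jacobian-vector product.

```python
# Sketch: linearize a network around its pre-trained weights w0 via a JVP.
import torch
from torch.func import functional_call, jvp

def tangent_forward(model, params0, delta, x):
    """params0: pre-trained weights (dict); delta: w - w0 (dict with the same keys)."""
    def f(params):
        return functional_call(model, params, (x,))
    out0, jvp_out = jvp(f, (params0,), (delta,))
    return out0 + jvp_out          # output is linear in `delta`

# usage sketch
model = torch.nn.Sequential(torch.nn.Linear(10, 10), torch.nn.ReLU(),
                            torch.nn.Linear(10, 2))
params0 = {k: v.detach().clone() for k, v in model.named_parameters()}
delta = {k: 0.01 * torch.randn_like(v) for k, v in params0.items()}  # in TAFT, only this would be trained
x = torch.randn(4, 10)
logits = tangent_forward(model, params0, delta, x)
```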

Tangent Model Composition for Ensembling and Continual Fine-tuning

Jul 16, 2023
Tian Yu Liu, Stefano Soatto

Tangent Model Composition (TMC) is a method to combine component models independently fine-tuned around a pre-trained point. Component models are tangent vectors to the pre-trained model that can be added, scaled, or subtracted to support incremental learning, ensembling, or unlearning. Component models are composed at inference time via scalar combination, reducing the cost of ensembling to that of a single model. TMC improves accuracy by 4.2% compared to ensembling non-linearly fine-tuned models, at a 2.5x to 10x reduction of inference cost that grows linearly with the number of component models. Each component model can be forgotten at zero cost, with no residual effect on the resulting inference. When used for continual fine-tuning, TMC is not constrained by sequential bias and can be executed in parallel on federated data. TMC outperforms recently published continual fine-tuning methods almost uniformly in each setting (task-incremental, class-incremental, and data-incremental) on a total of 13 experiments across 3 benchmark datasets, despite not using any replay buffer. TMC is designed for composing models that are local to a pre-trained embedding, but could be extended to more general settings.
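Composition itself reduces to arithmetic on weight perturbations, as in the sketch below (my helper names): each component is a delta around the shared pre-trained weights, components combine by a scalar-weighted sum, and unlearning a component simply drops its delta from the sum.

```python
# Sketch: compose tangent components by scalar-weighted summation of their
# weight perturbations; inference cost stays that of a single model.
import torch

def compose_deltas(deltas, alphas=None):
    """deltas: list of dicts, each holding w_i - w0; returns the composed perturbation."""
    if alphas is None:
        alphas = [1.0 / len(deltas)] * len(deltas)
    composed = {k: torch.zeros_like(v) for k, v in deltas[0].items()}
    for a, d in zip(alphas, deltas):
        for k in composed:
            composed[k] += a * d[k]
    return composed

def forget(deltas, alphas, i):
    """Unlearn component i at zero cost: exclude it from the composition."""
    keep = [j for j in range(len(deltas)) if j != i]
    return compose_deltas([deltas[j] for j in keep], [alphas[j] for j in keep])

# The composed perturbation can then be used in a tangent forward pass such as
# the TAFT sketch above, e.g.:
#   logits = tangent_forward(model, params0, compose_deltas(deltas), x)
```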

Towards Visual Foundational Models of Physical Scenes

Jun 06, 2023
Chethan Parameshwara, Alessandro Achille, Matthew Trager, Xiaolong Li, Jiawei Mo, Ashwin Swaminathan, CJ Taylor, Dheera Venkatraman, Xiaohan Fei, Stefano Soatto

We describe a first step towards learning general-purpose visual representations of physical scenes using only image prediction as a training criterion. To do so, we first define "physical scene" and show that, even though different agents may maintain different representations of the same scene, the underlying physical scene that can be inferred is unique. Then, we show that NeRFs cannot represent the physical scene, as they lack extrapolation mechanisms. Those, however, could be provided by Diffusion Models, at least in theory. To test this hypothesis empirically, we combine NeRFs with Diffusion Models, a process we refer to as NeRF Diffusion, and use the result as an unsupervised representation of the physical scene. Our analysis is limited to visual data, without external grounding mechanisms that can be provided by independent sensory modalities.

* TLDR: Physical scenes are equivalence classes of sufficient statistics and can be inferred uniquely by any agent measuring the same finite data; we formalize and implement an approach to representation learning that overturns "naive realism" in favor of the analytical approach of Russell and Koenderink. NeRFs cannot capture physical scenes, but combined with Diffusion Models they can.