Michal Irani

Deconstructing Data Reconstruction: Multiclass, Weight Decay and General Losses

Jul 04, 2023
Gon Buzaglo, Niv Haim, Gilad Yehudai, Gal Vardi, Yakir Oz, Yaniv Nikankin, Michal Irani

Memorization of training data is an active research area, yet our understanding of the inner workings of neural networks is still in its infancy. Recently, Haim et al. (2022) proposed a scheme to reconstruct training samples from multilayer perceptron binary classifiers, effectively demonstrating that a large portion of the training samples is encoded in the parameters of such networks. In this work, we extend their findings in several directions, including reconstruction from multiclass and convolutional neural networks. We derive a more general reconstruction scheme which is applicable to a wider range of loss functions, such as regression losses. Moreover, we study the various factors that contribute to networks' susceptibility to such reconstruction schemes. Intriguingly, we observe that using weight decay during training increases reconstructability in terms of both quantity and quality. Additionally, we examine the influence of the number of neurons relative to the number of training samples on reconstructability.

* arXiv admin note: text overlap with arXiv:2305.03350 
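
A rough idea of how such reconstruction schemes operate, sketched in PyTorch: following Haim et al. (2022), the trained parameters are treated as an (approximate) weighted sum of per-sample gradients, and candidate samples together with their coefficients are optimized so that this sum matches the observed parameters. The code below is a minimal illustration of that parameter-matching objective in the binary case, not the authors' implementation; the `model` interface, the coefficient clamping and the omitted image priors are all assumptions.

```python
import torch

def parameter_matching_loss(model, x, lam, y):
    """|| theta - sum_i lam_i * y_i * grad_theta f(theta; x_i) ||^2.

    x   : (n, d) candidate training samples (optimized)
    lam : (n,)   non-negative coefficients (optimized)
    y   : (n,)   hypothesized labels in {-1, +1}
    """
    params = list(model.parameters())
    weighted = None
    for i in range(x.shape[0]):
        out = model(x[i:i + 1]).squeeze()
        grads = torch.autograd.grad(out, params, create_graph=True)
        term = [lam[i].clamp(min=0) * y[i] * g for g in grads]
        weighted = term if weighted is None else [w + t for w, t in zip(weighted, term)]
    # Residual between the trained parameters and the reconstructed combination.
    return sum(((p.detach() - w) ** 2).sum() for p, w in zip(params, weighted))
```

Minimizing this loss over `x` and `lam` (e.g. with Adam) is the basic mechanism; the more general scheme described in the abstract changes which optimality condition is inverted, so that multiclass networks and other losses, such as regression losses, fit the same template.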

The Hidden Language of Diffusion Models

Jun 06, 2023
Hila Chefer, Oran Lang, Mor Geva, Volodymyr Polosukhin, Assaf Shocher, Michal Irani, Inbar Mosseri, Lior Wolf

Text-to-image diffusion models have demonstrated an unparalleled ability to generate high-quality, diverse images from a textual concept (e.g., "a doctor", "love"). However, the internal process of mapping text to a rich visual representation remains an enigma. In this work, we tackle the challenge of understanding concept representations in text-to-image models by decomposing an input text prompt into a small set of interpretable elements. This is achieved by learning a pseudo-token that is a sparse weighted combination of tokens from the model's vocabulary, with the objective of reconstructing the images generated for the given concept. Applied to the state-of-the-art Stable Diffusion model, this decomposition reveals non-trivial and surprising structures in the representations of concepts. For example, we find that some concepts, such as "a president" or "a composer", are dominated by specific instances (e.g., "Obama", "Biden") and their interpolations. Other concepts, such as "happiness", combine associated terms that can be concrete ("family", "laughter") or abstract ("friendship", "emotion"). In addition to peering into the inner workings of Stable Diffusion, our method also enables applications such as single-image decomposition to tokens, bias detection and mitigation, and semantic image manipulation. Our code will be available at: https://hila-chefer.github.io/Conceptor/
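
The decomposition step can be pictured with a small sketch (hypothetical names and mechanism, not the released Conceptor code): a coefficient is learned per vocabulary word, the pseudo-token is the resulting weighted combination of word embeddings, and the objective combines the frozen model's denoising loss on images generated for the concept with a sparsity penalty. Here `diffusion_loss_fn` stands in for plugging the pseudo-token into the text encoder of the frozen text-to-image model.

```python
import torch

def learn_pseudo_token(vocab_emb, diffusion_loss_fn, concept_images,
                       steps=500, lr=1e-3, l1_weight=1e-4, top_k=10):
    """Toy sketch: learn a sparse mixture over the vocabulary embeddings.

    vocab_emb         : (V, d) frozen token-embedding matrix
    diffusion_loss_fn : callable(images, token_embedding) -> scalar denoising loss,
                        a stand-in for the frozen text-to-image model
    concept_images    : images generated for the concept being explained
    """
    coeffs = torch.zeros(vocab_emb.shape[0], requires_grad=True)  # one weight per word
    opt = torch.optim.Adam([coeffs], lr=lr)
    for _ in range(steps):
        pseudo_token = coeffs @ vocab_emb                      # (d,) weighted combination
        loss = diffusion_loss_fn(concept_images, pseudo_token) \
               + l1_weight * coeffs.abs().sum()                # reconstruction + sparsity
        opt.zero_grad(); loss.backward(); opt.step()
    top = torch.topk(coeffs.detach().abs(), k=top_k)           # dominant vocabulary tokens
    return top.indices.tolist(), coeffs.detach()[top.indices].tolist()
```

The top-scoring vocabulary entries are the interpretable elements; inspecting them for a concept such as "a president" is what surfaces the instance-dominated structure described above.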

Reconstructing Training Data from Multiclass Neural Networks

May 05, 2023
Gon Buzaglo, Niv Haim, Gilad Yehudai, Gal Vardi, Michal Irani

Reconstructing samples from the training set of trained neural networks is a major privacy concern. Haim et al. (2022) recently showed that it is possible to reconstruct training samples from neural network binary classifiers, based on theoretical results about the implicit bias of gradient methods. In this work, we present several improvements and new insights over this previous work. As our main improvement, we show that training-data reconstruction is possible in the multi-class setting and that the reconstruction quality is even higher than in the case of binary classification. Moreover, we show that using weight decay during training increases the vulnerability to sample reconstruction. Finally, whereas in the previous work the training set contained at most $1000$ samples from $10$ classes, we show preliminary evidence of the ability to reconstruct from a model trained on $5000$ samples from $100$ classes.
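
One way to see why the multiclass setting fits the same recipe (a sketch in standard max-margin notation, not necessarily the paper's exact formulation): instead of a single margin per sample, there is one margin per wrong class, and the KKT stationarity condition still expresses the trained parameters as a combination of training-sample gradients that the reconstruction can invert,

```latex
\[
  \theta \;=\; \sum_{i=1}^{n} \sum_{j \neq y_i} \lambda_{i,j}\,
     \nabla_\theta\!\left[ f_{y_i}(\theta; x_i) - f_{j}(\theta; x_i) \right],
  \qquad \lambda_{i,j} \ge 0,
\]
```

with $f_c$ denoting the logit of class $c$; candidate samples $x_i$ and coefficients $\lambda_{i,j}$ are then optimized so that the right-hand side matches the trained $\theta$.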

Teaching CLIP to Count to Ten

Feb 23, 2023
Roni Paiss, Ariel Ephrat, Omer Tov, Shiran Zada, Inbar Mosseri, Michal Irani, Tali Dekel

Large vision-language models (VLMs), such as CLIP, learn rich joint image-text representations, facilitating advances in numerous downstream tasks, including zero-shot classification and text-to-image generation. Nevertheless, existing VLMs exhibit a prominent, well-documented limitation: they fail to encapsulate compositional concepts such as counting. We introduce a simple yet effective method to improve the quantitative understanding of VLMs, while maintaining their overall performance on common benchmarks. Specifically, we propose a new counting-contrastive loss used to finetune a pre-trained VLM in tandem with its original objective. Our counting loss is deployed over automatically created counterfactual examples, each consisting of an image and a caption containing an incorrect object count. For example, an image depicting three dogs is paired with the caption "Six dogs playing in the yard". Our loss encourages discrimination between the correct caption and its counterfactual variant, which serves as a hard negative example. To the best of our knowledge, this work is the first to extend CLIP's capabilities to object counting. Furthermore, we introduce "CountBench", a new image-text counting benchmark for evaluating a model's understanding of object counting. We demonstrate a significant improvement over state-of-the-art baseline models on this task. Finally, we leverage our count-aware CLIP model for image retrieval and text-conditioned image generation, demonstrating that our model can produce specific counts of objects more reliably than existing ones.
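
A hedged sketch of what a counting-contrastive term of this kind can look like (the exact loss and weighting in the paper may differ): each image is scored against its correct caption and its count-altered counterfactual, and the counterfactual acts as a hard negative.

```python
import torch
import torch.nn.functional as F

def counting_contrastive_loss(img_emb, correct_txt_emb, counterfactual_txt_emb, tau=0.07):
    """Prefer the correct caption over the counterfactual one with a wrong count.

    img_emb, correct_txt_emb, counterfactual_txt_emb : (B, d) CLIP embeddings
    """
    img = F.normalize(img_emb, dim=-1)
    pos = F.normalize(correct_txt_emb, dim=-1)
    neg = F.normalize(counterfactual_txt_emb, dim=-1)
    # Two-way choice per image: correct caption (index 0) vs. counterfactual (index 1).
    logits = torch.stack([(img * pos).sum(-1), (img * neg).sum(-1)], dim=1) / tau
    target = torch.zeros(img.shape[0], dtype=torch.long)
    return F.cross_entropy(logits, target)

# During finetuning this term would be added to the usual CLIP objective, e.g.
# total_loss = clip_loss + count_weight * counting_contrastive_loss(...)
```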

SinFusion: Training Diffusion Models on a Single Image or Video

Nov 21, 2022
Yaniv Nikankin, Niv Haim, Michal Irani

Diffusion models have exhibited tremendous progress in image and video generation, exceeding GANs in quality and diversity. However, they are usually trained on very large datasets and are not naturally adapted to manipulate a given input image or video. In this paper we show how this can be resolved by training a diffusion model on a single input image or video. Our image/video-specific diffusion model (SinFusion) learns the appearance and dynamics of the single image or video, while utilizing the conditioning capabilities of diffusion models. It can solve a wide array of image/video-specific manipulation tasks. In particular, our model can learn from a few frames the motion and dynamics of a single input video. It can then generate diverse new video samples of the same dynamic scene, extrapolate short videos into long ones (both forward and backward in time) and perform video upsampling. When trained on a single image, our model shows comparable performance and capabilities to previous single-image models in various image manipulation tasks.

* Project Page: https://yanivnik.github.io/sinfusion 
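
For intuition, here is a minimal single-image DDPM-style training loop. It is only a sketch of the general recipe of fitting a noise predictor to one image; the crop-based sampling, noise schedule and denoiser interface are assumptions, not the paper's specification.

```python
import torch
import torch.nn.functional as F

def train_single_image_ddpm(denoiser, image, steps=10_000, T=1000, crop=128, lr=2e-4):
    """Fit an epsilon-prediction denoiser to random crops of a single image.

    denoiser : convolutional net, (noisy_crop, t) -> predicted noise
    image    : (1, C, H, W) the single training image
    """
    betas = torch.linspace(1e-4, 0.02, T)
    alpha_bar = torch.cumprod(1.0 - betas, dim=0)
    opt = torch.optim.Adam(denoiser.parameters(), lr=lr)
    _, _, H, W = image.shape
    for _ in range(steps):
        # Random crop: the model sees many sub-regions of the one available image.
        top = torch.randint(0, H - crop + 1, (1,)).item()
        left = torch.randint(0, W - crop + 1, (1,)).item()
        x0 = image[:, :, top:top + crop, left:left + crop]
        t = torch.randint(0, T, (1,))
        noise = torch.randn_like(x0)
        a = alpha_bar[t].view(1, 1, 1, 1)
        xt = a.sqrt() * x0 + (1 - a).sqrt() * noise        # forward diffusion to step t
        loss = F.mse_loss(denoiser(xt, t), noise)          # standard DDPM objective
        opt.zero_grad(); loss.backward(); opt.step()
```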

Imagic: Text-Based Real Image Editing with Diffusion Models

Oct 17, 2022
Bahjat Kawar, Shiran Zada, Oran Lang, Omer Tov, Huiwen Chang, Tali Dekel, Inbar Mosseri, Michal Irani

Text-conditioned image editing has recently attracted considerable interest. However, most methods are currently either limited to specific editing types (e.g., object overlay, style transfer), or apply to synthetically generated images, or require multiple input images of a common object. In this paper we demonstrate, for the very first time, the ability to apply complex (e.g., non-rigid) text-guided semantic edits to a single real image. For example, we can change the posture and composition of one or multiple objects inside an image, while preserving its original characteristics. Our method can make a standing dog sit down or jump, cause a bird to spread its wings, etc. -- each within its single high-resolution natural image provided by the user. Contrary to previous work, our proposed method requires only a single input image and a target text (the desired edit). It operates on real images, and does not require any additional inputs (such as image masks or additional views of the object). Our method, which we call "Imagic", leverages a pre-trained text-to-image diffusion model for this task. It produces a text embedding that aligns with both the input image and the target text, while fine-tuning the diffusion model to capture the image-specific appearance. We demonstrate the quality and versatility of our method on numerous inputs from various domains, showcasing a plethora of high quality complex semantic image edits, all within a single unified framework.
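
At a high level, the pipeline the abstract describes can be outlined as follows (placeholder callables, not the released implementation; the interpolation weight `eta` and the exact fine-tuning procedure are assumptions here):

```python
import torch

def imagic_style_edit(text_encoder, diffusion_loss_fn, finetune_fn, generate_fn,
                      image, target_text, steps=1000, lr=1e-3, eta=0.7):
    """Outline of a text-embedding + fine-tuning editing pipeline.

    diffusion_loss_fn : (image, text_embedding) -> denoising loss under the frozen model
    finetune_fn       : (image, text_embedding) -> diffusion model fine-tuned for the image
    generate_fn       : (model, text_embedding) -> edited image
    """
    e_tgt = text_encoder(target_text).detach()           # embedding of the desired edit
    e_opt = e_tgt.clone().requires_grad_(True)
    opt = torch.optim.Adam([e_opt], lr=lr)
    for _ in range(steps):                               # 1) align the embedding with the input image
        loss = diffusion_loss_fn(image, e_opt)
        opt.zero_grad(); loss.backward(); opt.step()
    model_ft = finetune_fn(image, e_opt.detach())        # 2) capture image-specific appearance
    e_edit = eta * e_tgt + (1.0 - eta) * e_opt.detach()  # 3) move toward the target text
    return generate_fn(model_ft, e_edit)
```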

Combining Internal and External Constraints for Unrolling Shutter in Videos

Jul 24, 2022
Eyal Naor, Itai Antebi, Shai Bagon, Michal Irani

Videos obtained by rolling-shutter (RS) cameras result in spatially-distorted frames. These distortions become significant under fast camera/scene motions. Undoing the effects of RS is sometimes addressed as a spatial problem, where objects need to be rectified/displaced in order to generate their correct global shutter (GS) frame. However, the cause of the RS effect is inherently temporal, not spatial. In this paper we propose a space-time solution to the RS problem. We observe that despite the severe differences between their xy frames, an RS video and its corresponding GS video tend to share the exact same xt slices -- up to a known sub-frame temporal shift. Moreover, they share the same distribution of small 2D xt-patches, despite the strong temporal aliasing within each video. This allows us to constrain the GS output video using video-specific constraints imposed by the RS input video. Our algorithm is composed of 3 main components: (i) Dense temporal upsampling between consecutive RS frames using an off-the-shelf method (which was trained on regular video sequences), from which we extract GS "proposals". (ii) Learning to correctly merge an ensemble of such GS "proposals" using a dedicated MergeNet. (iii) A video-specific zero-shot optimization which imposes the similarity of xt-patches between the GS output video and the RS input video. Our method obtains state-of-the-art results on benchmark datasets, both numerically and visually, despite being trained on a small synthetic RS/GS dataset. Moreover, it generalizes well to new complex RS videos with motion types outside the distribution of the training set (e.g., complex non-rigid motions) -- videos which competing methods trained on much more data cannot handle well. We attribute these generalization capabilities to the combination of external and internal constraints.

* Accepted to ECCV 2022 
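
The xt-slice view that underlies the zero-shot patch constraint is easy to state in code; a small sketch (patch size and tensor layout are illustrative):

```python
import torch

def xt_slices(video):
    """video: (T, H, W) grayscale clip -> (H, T, W), one xt slice per image row y."""
    return video.permute(1, 0, 2)

def xt_patches(video, pt=5, px=5):
    """All small 2D xt-patches, shape (N, pt, px).

    The observation exploited above: an RS video and its GS counterpart share the
    same xt slices (up to a known sub-frame shift) and hence the same distribution
    of such patches, which the video-specific optimization matches.
    """
    slices = xt_slices(video)                              # (H, T, W)
    patches = slices.unfold(1, pt, 1).unfold(2, px, 1)     # (H, T-pt+1, W-px+1, pt, px)
    return patches.reshape(-1, pt, px)
```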

Reconstructing Training Data from Trained Neural Networks

Jun 15, 2022
Niv Haim, Gal Vardi, Gilad Yehudai, Ohad Shamir, Michal Irani

Understanding to what extent neural networks memorize training data is an intriguing question with practical and theoretical implications. In this paper we show that in some cases a significant fraction of the training data can in fact be reconstructed from the parameters of a trained neural network classifier. We propose a novel reconstruction scheme that stems from recent theoretical results about the implicit bias in training neural networks with gradient-based methods. To the best of our knowledge, our results are the first to show that reconstructing a large portion of the actual training samples from a trained neural network classifier is generally possible. This has negative implications for privacy, as it can be used as an attack for revealing sensitive training data. We demonstrate our method for binary MLP classifiers on a few standard computer vision datasets.
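
For context, the implicit-bias result the scheme builds on can be summarized as follows (standard notation, a sketch rather than the paper's exact statement): for homogeneous networks trained with logistic-type losses, gradient flow converges in direction to a KKT point of the margin-maximization problem

```latex
\[
  \min_{\theta}\ \tfrac{1}{2}\|\theta\|^{2}
  \quad \text{s.t.} \quad y_i\, f(\theta; x_i) \ge 1 \quad \forall i,
\]
```

whose stationarity condition ties the parameters directly to the training samples,

```latex
\[
  \theta \;=\; \sum_{i=1}^{n} \lambda_i\, y_i\, \nabla_\theta f(\theta; x_i),
  \qquad \lambda_i \ge 0 .
\]
```

The reconstruction searches for samples $x_i$ and coefficients $\lambda_i$ that satisfy this relation for the given trained $\theta$.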

A Penny for Your (visual) Thoughts: Self-Supervised Reconstruction of Natural Movies from Brain Activity

Jun 10, 2022
Ganit Kupershmidt, Roman Beliy, Guy Gaziv, Michal Irani

Reconstructing natural videos from fMRI brain recordings is very challenging, for two main reasons: (i) As fMRI data acquisition is difficult, we only have a limited amount of supervised samples, which is not enough to cover the huge space of natural videos; and (ii) The temporal resolution of fMRI recordings is much lower than the frame rate of natural videos. In this paper, we propose a self-supervised approach for natural-movie reconstruction. By employing cycle-consistency over Encoding-Decoding natural videos, we can: (i) exploit the full frame rate of the training videos, and not be limited only to clips that correspond to fMRI recordings; (ii) exploit massive amounts of external natural videos which the subjects never saw inside the fMRI machine. These enable increasing the applicable training data by several orders of magnitude, introducing natural video priors to the decoding network, as well as temporal coherence. Our approach significantly outperforms competing methods, since those train only on the limited supervised data. We further introduce a new and simple temporal prior of natural videos, which, when folded into our fMRI decoder, further allows us to reconstruct videos at a higher frame rate (HFR), up to 8x the original fMRI sample rate.
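
The self-supervised ingredient can be sketched as a simple cycle term (names and loss choice are illustrative, not the paper's code): external natural videos, which were never shown in the scanner, are pushed through Encode, then Decode, and must return to themselves, injecting natural-video priors and temporal coherence without additional fMRI data.

```python
import torch.nn.functional as F

def video_cycle_loss(encoder, decoder, external_clips):
    """Cycle-consistency on unpaired external videos: Decode(Encode(v)) ~= v.

    encoder : video clip -> simulated fMRI response
    decoder : fMRI response -> reconstructed video clip
    """
    simulated_fmri = encoder(external_clips)      # video -> "virtual" fMRI
    reconstructed = decoder(simulated_fmri)       # fMRI  -> video
    return F.mse_loss(reconstructed, external_clips)

# Combined with the supervised terms on the limited paired (video, fMRI) data, e.g.
# total = sup_decode_loss + sup_encode_loss + cyc_weight * video_cycle_loss(...)
```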
