Daniel Cohen-Or

ConceptLab: Creative Generation using Diffusion Prior Constraints

Aug 03, 2023
Elad Richardson, Kfir Goldberg, Yuval Alaluf, Daniel Cohen-Or

Recent text-to-image generative models have enabled us to transform our words into vibrant, captivating imagery. The surge of personalization techniques that has followed has also allowed us to imagine unique concepts in new scenes. However, an intriguing question remains: How can we generate a new, imaginary concept that has never been seen before? In this paper, we present the task of creative text-to-image generation, where we seek to generate new members of a broad category (e.g., generating a pet that differs from all existing pets). We leverage the under-studied Diffusion Prior models and show that the creative generation problem can be formulated as an optimization process over the output space of the diffusion prior, resulting in a set of "prior constraints". To keep our generated concept from converging into existing members, we incorporate a question-answering model that adaptively adds new constraints to the optimization problem, encouraging the model to discover increasingly unique creations. Finally, we show that our prior constraints can also serve as a strong mixing mechanism, allowing us to create hybrids between generated concepts, introducing even more flexibility into the creative process.

* Project page: https://kfirgoldberg.github.io/ConceptLab/ 
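
The constrained optimization described in the abstract can be pictured with a minimal sketch. The snippet below is purely illustrative: random vectors stand in for the category and member embeddings that the paper obtains from text and from the diffusion prior, and the adaptive question-answering step is only indicated as a comment.

```python
# Illustrative only: random vectors stand in for CLIP / diffusion-prior embeddings.
import torch
import torch.nn.functional as F

dim = 768
category = F.normalize(torch.randn(dim), dim=0)        # e.g. embedding of "a photo of a pet"
members = F.normalize(torch.randn(8, dim), dim=1)      # e.g. "a cat", "a dog", ... (hypothetical)

concept = torch.nn.Parameter(torch.randn(dim) * 0.02)  # the embedding being optimized
opt = torch.optim.Adam([concept], lr=1e-2)

for step in range(200):
    c = F.normalize(concept, dim=0)
    pos = 1.0 - torch.dot(c, category)                 # stay close to the broad category
    neg = F.relu(torch.mv(members, c)).mean()          # stay away from known members
    loss = pos + 0.5 * neg
    opt.zero_grad()
    loss.backward()
    opt.step()
    # In the paper's adaptive scheme, a VQA model would periodically name the closest
    # existing member, whose embedding would then be appended to `members`.
```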

EmoSet: A Large-scale Visual Emotion Dataset with Rich Attributes

Jul 28, 2023
Jingyuan Yang, Qirui Huang, Tingting Ding, Dani Lischinski, Daniel Cohen-Or, Hui Huang

Visual Emotion Analysis (VEA) aims at predicting people's emotional responses to visual stimuli. This is a promising, yet challenging, task in affective computing, which has drawn increasing attention in recent years. Most of the existing work in this area focuses on feature design, while little attention has been paid to dataset construction. In this work, we introduce EmoSet, the first large-scale visual emotion dataset annotated with rich attributes, which is superior to existing datasets in four aspects: scale, annotation richness, diversity, and data balance. EmoSet comprises 3.3 million images in total, with 118,102 of these images carefully labeled by human annotators, making it five times larger than the largest existing dataset. EmoSet includes images from social networks, as well as artistic images, and it is well balanced between different emotion categories. Motivated by psychological studies, in addition to emotion category, each image is also annotated with a set of describable emotion attributes: brightness, colorfulness, scene type, object class, facial expression, and human action, which can help understand visual emotions in a precise and interpretable way. The relevance of these emotion attributes is validated by analyzing the correlations between them and visual emotion, as well as by designing an attribute module to help visual emotion recognition. We believe EmoSet will bring some key insights and encourage further research in visual emotion analysis and understanding. Project page: https://vcc.tech/EmoSet.

* Accepted to ICCV2023, similar to the final version 
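
For readers who want a concrete picture of the annotation structure, here is a hypothetical record layout based only on the attributes named in the abstract; the dataset's actual schema, field names, and value types may differ.

```python
# Hypothetical record layout; field names follow the attributes listed in the abstract,
# but the actual EmoSet schema may differ.
from dataclasses import dataclass
from typing import Optional

@dataclass
class EmoSetRecord:
    image_path: str
    emotion: str                              # emotion category label
    brightness: float                         # describable emotion attributes
    colorfulness: float
    scene_type: Optional[str] = None
    object_class: Optional[str] = None
    facial_expression: Optional[str] = None
    human_action: Optional[str] = None

sample = EmoSetRecord("img_0001.jpg", emotion="amusement",
                      brightness=0.62, colorfulness=0.48,
                      scene_type="amusement_park", human_action="riding")
```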

Domain-Agnostic Tuning-Encoder for Fast Personalization of Text-To-Image Models

Jul 13, 2023
Moab Arar, Rinon Gal, Yuval Atzmon, Gal Chechik, Daniel Cohen-Or, Ariel Shamir, Amit H. Bermano

Text-to-image (T2I) personalization allows users to guide the creative image generation process by combining their own visual concepts in natural language prompts. Recently, encoder-based techniques have emerged as a new effective approach for T2I personalization, reducing the need for multiple images and long training times. However, most existing encoders are limited to a single-class domain, which hinders their ability to handle diverse concepts. In this work, we propose a domain-agnostic method that does not require any specialized dataset or prior information about the personalized concepts. We introduce a novel contrastive-based regularization technique to maintain high fidelity to the target concept characteristics while keeping the predicted embeddings close to editable regions of the latent space, by pushing the predicted tokens toward their nearest existing CLIP tokens. Our experimental results demonstrate the effectiveness of our approach and show how the learned tokens are more semantic than tokens predicted by unregularized models. This leads to a better representation that achieves state-of-the-art performance while being more flexible than previous methods.

* Project page at https://datencoder.github.io 
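
As a rough illustration of the nearest-token idea, the sketch below pulls a predicted embedding toward its closest entries in a stand-in vocabulary matrix; it is a simplified, non-contrastive variant for intuition only, not the paper's actual regularizer.

```python
# Simplified stand-in: pull a predicted token embedding toward its k nearest entries
# in a (random) vocabulary matrix that plays the role of CLIP's token embeddings.
import torch
import torch.nn.functional as F

vocab = F.normalize(torch.randn(49408, 768), dim=1)   # stand-in for the CLIP vocabulary
pred = torch.randn(768, requires_grad=True)           # embedding predicted by the encoder

def nearest_token_reg(pred, vocab, k=5):
    sims = torch.mv(vocab, F.normalize(pred, dim=0))   # cosine similarity to every token
    return (1.0 - sims.topk(k).values).mean()          # pull toward the k closest tokens

loss = nearest_token_reg(pred, vocab)                  # added to the main personalization loss
loss.backward()
```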

Facial Reenactment Through a Personalized Generator

Jul 12, 2023
Ariel Elazary, Yotam Nitzan, Daniel Cohen-Or

In recent years, the role of image generative models in facial reenactment has been steadily increasing. Such models are usually subject-agnostic and trained on domain-wide datasets. The appearance of the reenacted individual is learned from a single image, and hence the full breadth of the individual's appearance is not captured, leading these methods to resort to unfaithful hallucination. Thanks to recent advancements, it is now possible to train a personalized generative model tailored specifically to a given individual. In this paper, we propose a novel method for facial reenactment using a personalized generator. We train the generator using frames from a short, yet varied, self-scan video captured using a simple commodity camera. Images synthesized by the personalized generator are guaranteed to preserve identity. The premise of our work is that the task of reenactment is thus reduced to accurately mimicking head poses and expressions. To this end, we locate the desired frames in the latent space of the personalized generator using carefully designed latent optimization. Through extensive evaluation, we demonstrate state-of-the-art performance for facial reenactment. Furthermore, we show that since our reenactment takes place in a semantic latent space, it can be semantically edited and stylized in post-processing.

* Project webpage: https://arielazary.github.io/PGR/ 
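
A generic latent-optimization loop of the kind alluded to might look as follows; `G` and `pose_expr` are toy stand-ins for the personalized generator and a pose/expression feature extractor, and the loss is deliberately simplistic, so only the shape of the loop is meaningful.

```python
# Toy stand-ins throughout: G plays the personalized generator, pose_expr a pose/expression
# feature extractor; only the structure of the optimization loop is meaningful here.
import torch
import torch.nn.functional as F

def reenact_frame(G, pose_expr, driving_frame, w_init, steps=100, lr=0.05):
    w = w_init.clone().requires_grad_(True)            # latent code being optimized
    opt = torch.optim.Adam([w], lr=lr)
    target = pose_expr(driving_frame).detach()         # pose/expression of the driving frame
    for _ in range(steps):
        loss = F.mse_loss(pose_expr(G(w)), target)     # match pose/expression; identity comes from G
        opt.zero_grad()
        loss.backward()
        opt.step()
    return w.detach()

G = torch.nn.Linear(512, 3 * 64 * 64)                  # toy "generator"
pose_expr = torch.nn.Linear(3 * 64 * 64, 64)           # toy "feature extractor"
w_star = reenact_frame(G, pose_expr, torch.randn(1, 3 * 64 * 64), torch.randn(1, 512), steps=10)
```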

SVNR: Spatially-variant Noise Removal with Denoising Diffusion

Jun 28, 2023
Naama Pearl, Yaron Brodsky, Dana Berman, Assaf Zomet, Alex Rav Acha, Daniel Cohen-Or, Dani Lischinski

Denoising diffusion models have recently shown impressive results in generative tasks. By learning powerful priors from huge collections of training images, such models are able to gradually transform pure noise into a clean natural image via a sequence of small denoising steps, seemingly making them well-suited for single image denoising. However, effectively applying denoising diffusion models to the removal of realistic noise is more challenging than it may seem, since their formulation is based on additive white Gaussian noise, unlike noise in real-world images. In this work, we present SVNR, a novel formulation of denoising diffusion that assumes a more realistic, spatially-variant noise model. SVNR enables using the noisy input image as the starting point for the denoising diffusion process, in addition to conditioning the process on it. To this end, we adapt the diffusion process to allow each pixel to have its own time embedding, and propose training and inference schemes that support spatially-varying time maps. Our formulation also accounts for the correlation that exists between the condition image and the samples along the modified diffusion process. In our experiments we demonstrate the advantages of our approach over a strong diffusion model baseline, as well as over a state-of-the-art single image denoising method.
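
One way to picture a spatially-varying time map is to assign each pixel the timestep of a standard noise schedule whose noise-to-signal ratio matches the local noise level. The sketch below does exactly that; it is only an approximation of the idea, not the paper's formulation.

```python
# Approximation of the idea only: per-pixel timesteps chosen so that a standard schedule's
# noise-to-signal ratio matches the local noise standard deviation.
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)              # common linear beta schedule
alpha_bar = torch.cumprod(1.0 - betas, dim=0)      # cumulative signal level at each timestep

def time_map_from_sigma(sigma_map):
    target = 1.0 / (1.0 + sigma_map ** 2)          # alpha_bar satisfying (1 - a) / a = sigma^2
    diff = (alpha_bar.view(-1, 1, 1) - target.unsqueeze(0)).abs()
    return diff.argmin(dim=0)                      # integer timestep for every pixel

sigma = 0.05 + 0.3 * torch.rand(64, 64)            # spatially-variant noise std map
t_map = time_map_from_sigma(sigma)                 # shape (64, 64), values in [0, T)
```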

SENS: Sketch-based Implicit Neural Shape Modeling

Jun 09, 2023
Alexandre Binninger, Amir Hertz, Olga Sorkine-Hornung, Daniel Cohen-Or, Raja Giryes

We present SENS, a novel method for generating and editing 3D models from hand-drawn sketches, including those of an abstract nature. Our method allows users to quickly and easily sketch a shape, and then maps the sketch into the latent space of a part-aware neural implicit shape architecture. SENS analyzes the sketch and encodes its parts into ViT patch encodings, then feeds them into a transformer decoder that converts them to shape embeddings, suitable for editing 3D neural implicit shapes. SENS not only provides intuitive sketch-based generation and editing, but also excels in capturing the intent of the user's sketch to generate a variety of novel and expressive 3D shapes, even from abstract sketches. We demonstrate the effectiveness of our model compared to the state-of-the-art using objective metric evaluation criteria and a decisive user study, both indicating strong performance on sketches with a medium level of abstraction. Furthermore, we showcase its intuitive sketch-based shape editing capabilities.

* 18 pages, 18 figures 
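
A minimal, hypothetical version of the described pipeline (ViT-style patch tokens feeding a transformer decoder that emits per-part shape embeddings) could be wired up as follows; all dimensions and module choices are illustrative, not those of the paper.

```python
# Hypothetical sketch-to-shape-embedding pipeline; sizes and modules are illustrative only.
import torch
import torch.nn as nn

class SketchToShape(nn.Module):
    def __init__(self, img=128, patch=16, dim=256, n_parts=8, shape_dim=512):
        super().__init__()
        self.patchify = nn.Conv2d(1, dim, kernel_size=patch, stride=patch)   # ViT-style patch embedding
        self.pos = nn.Parameter(torch.zeros((img // patch) ** 2, dim))
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True), num_layers=4)
        self.part_queries = nn.Parameter(torch.zeros(n_parts, dim))          # one query per shape part
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(dim, nhead=8, batch_first=True), num_layers=4)
        self.to_shape = nn.Linear(dim, shape_dim)

    def forward(self, sketch):                                # sketch: (B, 1, img, img)
        tokens = self.patchify(sketch).flatten(2).transpose(1, 2) + self.pos
        memory = self.encoder(tokens)
        queries = self.part_queries.expand(sketch.size(0), -1, -1)
        return self.to_shape(self.decoder(queries, memory))   # (B, n_parts, shape_dim)

emb = SketchToShape()(torch.randn(2, 1, 128, 128))            # per-part embeddings for a neural implicit
```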

Concept Decomposition for Visual Exploration and Inspiration

May 31, 2023
Yael Vinker, Andrey Voynov, Daniel Cohen-Or, Ariel Shamir

A creative idea is often born from transforming, combining, and modifying ideas from existing visual examples capturing various concepts. However, one cannot simply copy the concept as a whole, and inspiration is achieved by examining certain aspects of the concept. Hence, it is often necessary to separate a concept into different aspects to provide new perspectives. In this paper, we propose a method to decompose a visual concept, represented as a set of images, into different visual aspects encoded in a hierarchical tree structure. We utilize large vision-language models and their rich latent space for concept decomposition and generation. Each node in the tree represents a sub-concept using a learned vector embedding injected into the latent space of a pretrained text-to-image model. We use a set of regularizations to guide the optimization of the embedding vectors encoded in the nodes to follow the hierarchical structure of the tree. Our method allows the user to explore and discover new concepts derived from the original one. The tree provides the possibility of endless visual sampling at each node, allowing the user to explore the hidden sub-concepts of the object of interest. The learned aspects in each node can be combined within and across trees to create new visual ideas, and can be used in natural language sentences to apply such aspects to new designs.

* https://inspirationtree.github.io/inspirationtree/ 
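
To make the hierarchy constraint concrete, here is one plausible (hypothetical) regularizer over a small binary tree of learnable node embeddings: the children jointly cover their parent while remaining distinct from one another. The paper's actual regularizations and its text-to-image reconstruction objective are not reproduced here.

```python
# Hypothetical hierarchy regularizer for a binary concept tree of learnable embeddings.
import torch
import torch.nn.functional as F

dim = 768
nodes = {name: torch.nn.Parameter(torch.randn(dim) * 0.02)
         for name in ["root", "root/0", "root/1", "root/0/0", "root/0/1"]}
edges = [("root", "root/0", "root/1"), ("root/0", "root/0/0", "root/0/1")]

def tree_regularizer(nodes, edges, margin=0.3):
    loss = 0.0
    for parent, left, right in edges:
        p, l, r = nodes[parent], nodes[left], nodes[right]
        loss = loss + F.mse_loss((l + r) / 2, p)                          # children cover the parent
        loss = loss + F.relu(F.cosine_similarity(l, r, dim=0) - margin)   # children stay distinct
    return loss

opt = torch.optim.Adam(list(nodes.values()), lr=1e-3)
loss = tree_regularizer(nodes, edges)   # would be added to the text-to-image loss in practice
loss.backward()
opt.step()
```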

Break-A-Scene: Extracting Multiple Concepts from a Single Image

May 25, 2023
Omri Avrahami, Kfir Aberman, Ohad Fried, Daniel Cohen-Or, Dani Lischinski

Text-to-image model personalization aims to introduce a user-provided concept to the model, allowing its synthesis in diverse contexts. However, current methods primarily focus on the case of learning a single concept from multiple images with variations in backgrounds and poses, and struggle when adapted to a different scenario. In this work, we introduce the task of textual scene decomposition: given a single image of a scene that may contain several concepts, we aim to extract a distinct text token for each concept, enabling fine-grained control over the generated scenes. To this end, we propose augmenting the input image with masks that indicate the presence of target concepts. These masks can be provided by the user or generated automatically by a pre-trained segmentation model. We then present a novel two-phase customization process that optimizes a set of dedicated textual embeddings (handles), as well as the model weights, striking a delicate balance between accurately capturing the concepts and avoiding overfitting. We employ a masked diffusion loss to enable handles to generate their assigned concepts, complemented by a novel loss on cross-attention maps to prevent entanglement. We also introduce union-sampling, a training strategy aimed at improving the ability to combine multiple concepts in generated images. We use several automatic metrics to quantitatively compare our method against several baselines, and further affirm the results using a user study. Finally, we showcase several applications of our method. Project page is available at: https://omriavrahami.com/break-a-scene/

* Project page is available at: https://omriavrahami.com/break-a-scene/ Video available at: https://www.youtube.com/watch?v=-9EA-BhizgM 
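
The masked diffusion loss admits a very short sketch: the denoising error is simply restricted to the mask of the concept currently being learned, so each handle is supervised only by its own region of the single input image. The shapes and normalization below are assumptions for illustration.

```python
# Sketch of a masked diffusion loss; tensors are stand-ins for the latent noise prediction.
import torch

def masked_diffusion_loss(eps_pred, eps, mask):
    # eps_pred, eps: (B, C, H, W) predicted / true noise; mask: (B, 1, H, W) in {0, 1}
    se = (eps_pred - eps) ** 2 * mask
    return se.sum() / mask.sum().clamp(min=1.0)

eps = torch.randn(2, 4, 64, 64)                     # e.g. latent-space noise
eps_pred = torch.randn(2, 4, 64, 64, requires_grad=True)
mask = (torch.rand(2, 1, 64, 64) > 0.5).float()     # concept masks (user-given or from a segmenter)
masked_diffusion_loss(eps_pred, eps, mask).backward()
```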

A Neural Space-Time Representation for Text-to-Image Personalization

May 24, 2023
Yuval Alaluf, Elad Richardson, Gal Metzer, Daniel Cohen-Or

A key aspect of text-to-image personalization methods is the manner in which the target concept is represented within the generative process. This choice greatly affects the visual fidelity, downstream editability, and disk space needed to store the learned concept. In this paper, we explore a new text-conditioning space that is dependent on both the denoising process timestep (time) and the denoising U-Net layers (space) and showcase its compelling properties. A single concept in the space-time representation is composed of hundreds of vectors, one for each combination of time and space, making this space challenging to optimize directly. Instead, we propose to implicitly represent a concept in this space by optimizing a small neural mapper that receives the current time and space parameters and outputs the matching token embedding. In doing so, the entire personalized concept is represented by the parameters of the learned mapper, resulting in a compact, yet expressive, representation. Similarly to other personalization methods, the output of our neural mapper resides in the input space of the text encoder. We observe that one can significantly improve the convergence and visual fidelity of the concept by introducing a textual bypass, where our neural mapper additionally outputs a residual that is added to the output of the text encoder. Finally, we show how one can impose an importance-based ordering over our implicit representation, providing users control over the reconstruction and editability of the learned concept using a single trained model. We demonstrate the effectiveness of our approach over a range of concepts and prompts, showing our method's ability to generate high-quality and controllable compositions without fine-tuning any parameters of the generative model itself.

* Project page available at https://neuraltextualinversion.github.io/NeTI/ 
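
A small, hypothetical mapper of the kind described (denoising timestep and U-Net layer in, token embedding plus bypass residual out) might look like this; the hidden sizes, the plain MLP, and the omission of any positional encoding of the inputs are all simplifications, not the paper's architecture.

```python
# Hypothetical space-time mapper: (timestep, U-Net layer index) -> (token embedding, bypass residual).
import torch
import torch.nn as nn

class SpaceTimeMapper(nn.Module):
    def __init__(self, n_layers=16, token_dim=768, hidden=128):
        super().__init__()
        self.layer_embed = nn.Embedding(n_layers, hidden)
        self.net = nn.Sequential(
            nn.Linear(hidden + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
        )
        self.to_token = nn.Linear(hidden, token_dim)     # fed to the text encoder's input space
        self.to_bypass = nn.Linear(hidden, token_dim)    # residual added after the text encoder

    def forward(self, t, layer_idx):                     # t: (B,) in [0, 1]; layer_idx: (B,) long
        h = self.net(torch.cat([self.layer_embed(layer_idx), t.unsqueeze(-1)], dim=-1))
        return self.to_token(h), self.to_bypass(h)

mapper = SpaceTimeMapper()
token, bypass = mapper(torch.rand(4), torch.randint(0, 16, (4,)))
```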