Elad Richardson

ConceptLab: Creative Generation using Diffusion Prior Constraints

Aug 03, 2023
Elad Richardson, Kfir Goldberg, Yuval Alaluf, Daniel Cohen-Or

Recent text-to-image generative models have enabled us to transform our words into vibrant, captivating imagery. The surge of personalization techniques that has followed has also allowed us to imagine unique concepts in new scenes. However, an intriguing question remains: How can we generate a new, imaginary concept that has never been seen before? In this paper, we present the task of creative text-to-image generation, where we seek to generate new members of a broad category (e.g., generating a pet that differs from all existing pets). We leverage the under-studied Diffusion Prior models and show that the creative generation problem can be formulated as an optimization process over the output space of the diffusion prior, resulting in a set of "prior constraints". To keep our generated concept from converging into existing members, we incorporate a question-answering model that adaptively adds new constraints to the optimization problem, encouraging the model to discover increasingly more unique creations. Finally, we show that our prior constraints can also serve as a strong mixing mechanism allowing us to create hybrids between generated concepts, introducing even more flexibility into the creative process.

* Project page: https://kfirgoldberg.github.io/ConceptLab/ 
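
The "prior constraints" described above boil down to optimizing a concept embedding to stay close to a broad category while being pushed away from its known members. Below is a minimal sketch of that idea in PyTorch, assuming random vectors as stand-ins for the diffusion prior / text-encoder embeddings; the variable names and loss weights are illustrative, not the paper's implementation.

import torch
import torch.nn.functional as F

# Stand-ins: in ConceptLab these embeddings would come from the text encoder
# feeding the diffusion prior; here they are fixed random vectors.
dim = 768
category_emb = F.normalize(torch.randn(dim), dim=0)         # e.g. "a photo of a pet"
member_embs = F.normalize(torch.randn(8, dim), dim=1)       # e.g. "a cat", "a dog", ...

concept = torch.nn.Parameter(torch.randn(dim) * 0.01)       # the learned concept embedding
optimizer = torch.optim.Adam([concept], lr=1e-2)

for step in range(500):
    c = F.normalize(concept, dim=0)
    pos_loss = 1.0 - torch.dot(c, category_emb)              # stay within the broad category
    neg_loss = torch.clamp(member_embs @ c, min=0.0).mean()  # move away from known members
    loss = pos_loss + 0.5 * neg_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# In the paper, the negative set grows adaptively: a question-answering model
# names the existing member the current concept most resembles, and that
# member's embedding is appended to member_embs.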

A Neural Space-Time Representation for Text-to-Image Personalization

May 24, 2023
Yuval Alaluf, Elad Richardson, Gal Metzer, Daniel Cohen-Or

A key aspect of text-to-image personalization methods is the manner in which the target concept is represented within the generative process. This choice greatly affects the visual fidelity, downstream editability, and disk space needed to store the learned concept. In this paper, we explore a new text-conditioning space that is dependent on both the denoising process timestep (time) and the denoising U-Net layers (space) and showcase its compelling properties. A single concept in the space-time representation is composed of hundreds of vectors, one for each combination of time and space, making this space challenging to optimize directly. Instead, we propose to implicitly represent a concept in this space by optimizing a small neural mapper that receives the current time and space parameters and outputs the matching token embedding. In doing so, the entire personalized concept is represented by the parameters of the learned mapper, resulting in a compact, yet expressive, representation. Similarly to other personalization methods, the output of our neural mapper resides in the input space of the text encoder. We observe that one can significantly improve the convergence and visual fidelity of the concept by introducing a textual bypass, where our neural mapper additionally outputs a residual that is added to the output of the text encoder. Finally, we show how one can impose an importance-based ordering over our implicit representation, providing users control over the reconstruction and editability of the learned concept using a single trained model. We demonstrate the effectiveness of our approach over a range of concepts and prompts, showing our method's ability to generate high-quality and controllable compositions without fine-tuning any parameters of the generative model itself.

* Project page available at https://neuraltextualinversion.github.io/NeTI/ 
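
The mapper described above is small enough to sketch end to end: it takes the denoising timestep and a U-Net layer index and returns a token embedding plus a bypass residual. The sizes, normalization, and layer count below are assumptions for illustration, not the paper's architecture.

import torch
import torch.nn as nn

class SpaceTimeMapper(nn.Module):
    """Maps (denoising timestep, U-Net layer index) to a token embedding
    plus a textual-bypass residual. All sizes here are illustrative."""
    def __init__(self, num_layers=16, token_dim=768, hidden=128):
        super().__init__()
        self.layer_emb = nn.Embedding(num_layers, hidden)
        self.net = nn.Sequential(
            nn.Linear(hidden + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.to_token = nn.Linear(hidden, token_dim)    # lands in the text-encoder input space
        self.to_bypass = nn.Linear(hidden, token_dim)   # residual added to the text-encoder output

    def forward(self, t, layer_idx):
        t = t.float().unsqueeze(-1) / 1000.0            # normalize the timestep
        h = self.net(torch.cat([self.layer_emb(layer_idx), t], dim=-1))
        return self.to_token(h), self.to_bypass(h)

mapper = SpaceTimeMapper()
token, bypass = mapper(torch.tensor([250]), torch.tensor([3]))
print(token.shape, bypass.shape)   # torch.Size([1, 768]) twice

The entire personalized concept is then stored as the mapper's weights rather than as hundreds of separate vectors.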

Set-the-Scene: Global-Local Training for Generating Controllable NeRF Scenes

Mar 23, 2023
Dana Cohen-Bar, Elad Richardson, Gal Metzer, Raja Giryes, Daniel Cohen-Or

Recent breakthroughs in text-guided image generation have led to remarkable progress in the field of 3D synthesis from text. By optimizing neural radiance fields (NeRF) directly from text, recent methods are able to produce impressive results. Yet, these methods are limited in their control of each object's placement or appearance, as they represent the scene as a whole. This can be a major issue in scenarios that require refining or manipulating objects in the scene. To remedy this deficit, we propose a novel Global-Local training framework for synthesizing a 3D scene using object proxies. A proxy represents the object's placement in the generated scene and optionally defines its coarse geometry. The key to our approach is to represent each object as an independent NeRF. We alternate between optimizing each NeRF on its own and as part of the full scene. Thus, a complete representation of each object can be learned, while also creating a harmonious scene with matching style and lighting. We show that using proxies allows a wide variety of editing options, such as adjusting the placement of each independent object, removing objects from a scene, or refining an object. Our results show that Set-the-Scene offers a powerful solution for scene synthesis and manipulation, filling a crucial gap in controllable text-to-3D synthesis.

* project page at https://danacohen95.github.io/Set-the-Scene/ 
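
The alternating global-local schedule can be illustrated with a toy loop in which each proxy owns its own field and optimization switches between per-object and full-scene objectives. The classes and loss stubs below are hypothetical placeholders, not the paper's NeRF or score-distillation losses.

import random
import torch
import torch.nn as nn

class TinyObjectField(nn.Module):
    """Placeholder for a per-object NeRF."""
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 4))
    def forward(self, xyz):
        return self.mlp(xyz)   # rgb + density per query point

objects = [TinyObjectField() for _ in range(3)]                  # one field per proxy
opts = [torch.optim.Adam(o.parameters(), lr=1e-3) for o in objects]

def local_loss(obj):    # stub: guide one object rendered on its own
    return obj(torch.randn(128, 3)).pow(2).mean()

def global_loss(objs):  # stub: guide the composed scene so style and lighting match
    return sum(o(torch.randn(128, 3)).pow(2).mean() for o in objs)

for step in range(100):
    if step % 2 == 0:                         # local phase: one object at a time
        i = random.randrange(len(objects))
        loss, active = local_loss(objects[i]), [opts[i]]
    else:                                     # global phase: the full composition
        loss, active = global_loss(objects), opts
    for opt in active:
        opt.zero_grad()
    loss.backward()
    for opt in active:
        opt.step()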

TEXTure: Text-Guided Texturing of 3D Shapes

Feb 03, 2023
Elad Richardson, Gal Metzer, Yuval Alaluf, Raja Giryes, Daniel Cohen-Or

In this paper, we present TEXTure, a novel method for text-guided generation, editing, and transfer of textures for 3D shapes. Leveraging a pretrained depth-to-image diffusion model, TEXTure applies an iterative scheme that paints a 3D model from different viewpoints. Yet, while depth-to-image models can create plausible textures from a single viewpoint, the stochastic nature of the generation process can cause many inconsistencies when texturing an entire 3D object. To tackle these problems, we dynamically define a trimap partitioning of the rendered image into three progression states, and present a novel elaborated diffusion sampling process that uses this trimap representation to generate seamless textures from different views. We then show that one can transfer the generated texture maps to new 3D geometries without requiring explicit surface-to-surface mapping, as well as extract semantic textures from a set of images without requiring any explicit reconstruction. Finally, we show that TEXTure can be used to not only generate new textures but also edit and refine existing textures using either a text prompt or user-provided scribbles. We demonstrate that our TEXTuring method excels at generating, transferring, and editing textures through extensive evaluation, and further close the gap between 2D image generation and 3D texturing.

* Project page available at https://texturepaper.github.io/TEXTurePaper/ 
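
The iterative painting scheme can be outlined as a loop over viewpoints in which a trimap labels each rendered pixel as "keep", "refine", or "generate" before the depth-to-image model repaints the view. Every function below is a stub with assumed signatures; only the control flow reflects the abstract.

import numpy as np

VIEWPOINTS = [(30, a) for a in range(0, 360, 45)]      # (elevation, azimuth), illustrative

def render_view(texture, view):                        # stub: rasterize the mesh with the current texture
    return np.zeros((512, 512, 3)), np.zeros((512, 512), dtype=np.uint8)

def trimap_from_coverage(coverage):
    # The three progression states: 0 = "generate" (never painted),
    # 1 = "refine" (painted, e.g. at a grazing angle), 2 = "keep".
    tri = np.zeros_like(coverage)
    tri[coverage == 1] = 1
    tri[coverage >= 2] = 2
    return tri

def depth_to_image_paint(rendered, trimap, prompt):    # stub for the diffusion sampling step
    return rendered

def project_to_uv(texture, painted, view):             # stub: write painted pixels back to UV space
    return texture

texture = np.full((1024, 1024, 3), 0.5)
for view in VIEWPOINTS:
    rendered, coverage = render_view(texture, view)
    tri = trimap_from_coverage(coverage)
    painted = depth_to_image_paint(rendered, tri, prompt="a wooden chair")
    texture = project_to_uv(texture, painted, view)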

NeRN -- Learning Neural Representations for Neural Networks

Dec 27, 2022
Maor Ashkenazi, Zohar Rimon, Ron Vainshtein, Shir Levi, Elad Richardson, Pinchas Mintz, Eran Treister

Neural Representations have recently been shown to effectively reconstruct a wide range of signals from 3D meshes and shapes to images and videos. We show that, when adapted correctly, neural representations can be used to directly represent the weights of a pre-trained convolutional neural network, resulting in a Neural Representation for Neural Networks (NeRN). Inspired by coordinate inputs of previous neural representation methods, we assign a coordinate to each convolutional kernel in our network based on its position in the architecture, and optimize a predictor network to map coordinates to their corresponding weights. Similarly to the spatial smoothness of visual scenes, we show that incorporating a smoothness constraint over the original network's weights aids NeRN towards a better reconstruction. In addition, since slight perturbations in pre-trained model weights can result in a considerable accuracy loss, we employ techniques from the field of knowledge distillation to stabilize the learning process. We demonstrate the effectiveness of NeRN in reconstructing widely used architectures on CIFAR-10, CIFAR-100, and ImageNet. Finally, we present two applications using NeRN, demonstrating the capabilities of the learned representations.
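
The core of NeRN, as described above, is a predictor from kernel coordinates to kernel weights. The sketch below reconstructs a single toy convolutional layer; the coordinate normalization, network sizes, and the simple neighbor-smoothness term are assumptions (the paper applies its smoothness constraint to the original network's weights and adds distillation losses).

import torch
import torch.nn as nn
import torch.nn.functional as F

class KernelPredictor(nn.Module):
    """Maps a (layer, filter, channel) coordinate to a 3x3 kernel."""
    def __init__(self, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 9),
        )
    def forward(self, coords):               # coords: (N, 3), normalized to [0, 1]
        return self.net(coords).view(-1, 3, 3)

# Toy target: the weights of one conv layer with 16 filters of 8 channels.
target = torch.randn(16, 8, 3, 3)
filt, chan = torch.meshgrid(torch.arange(16.), torch.arange(8.), indexing="ij")
coords = torch.stack([torch.zeros(16 * 8), filt.reshape(-1) / 15, chan.reshape(-1) / 7], dim=1)

nern = KernelPredictor()
opt = torch.optim.Adam(nern.parameters(), lr=1e-3)
for step in range(200):
    pred = nern(coords).view(16, 8, 3, 3)
    recon = F.mse_loss(pred, target)                        # reconstruct the original weights
    smooth = (pred[:, 1:] - pred[:, :-1]).pow(2).mean()     # crude smoothness surrogate
    loss = recon + 0.1 * smooth
    opt.zero_grad()
    loss.backward()
    opt.step()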

Latent-NeRF for Shape-Guided Generation of 3D Shapes and Textures

Nov 14, 2022
Gal Metzer, Elad Richardson, Or Patashnik, Raja Giryes, Daniel Cohen-Or

Text-guided image generation has progressed rapidly in recent years, inspiring major breakthroughs in text-guided shape generation. Recently, it has been shown that using score distillation, one can successfully text-guide a NeRF model to generate a 3D object. We adapt the score distillation to the publicly available, and computationally efficient, Latent Diffusion Models, which apply the entire diffusion process in a compact latent space of a pretrained autoencoder. As NeRFs operate in image space, a naive solution for guiding them with latent score distillation would require encoding to the latent space at each guidance step. Instead, we propose to bring the NeRF to the latent space, resulting in a Latent-NeRF. Analyzing our Latent-NeRF, we show that while Text-to-3D models can generate impressive results, they are inherently unconstrained and may lack the ability to guide or enforce a specific 3D structure. To assist and direct the 3D generation, we propose to guide our Latent-NeRF using a Sketch-Shape: an abstract geometry that defines the coarse structure of the desired object. Then, we present means to integrate such a constraint directly into a Latent-NeRF. This unique combination of text and shape guidance allows for increased control over the generation process. We also show that latent score distillation can be successfully applied directly on 3D meshes. This allows for generating high-quality textures on a given geometry. Our experiments validate the power of our different forms of guidance and the efficiency of using latent rendering. Implementation is available at https://github.com/eladrich/latent-nerf
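
Bringing the NeRF into the latent space means the field renders a low-resolution multi-channel feature map that is scored directly by the latent diffusion model. The sketch below shows score distillation on such a latent rendering; the noise schedule, the stubbed U-Net call, and the surrogate loss are illustrative assumptions rather than the released implementation (see the linked repository for that).

import torch
import torch.nn as nn

class LatentNeRFStub(nn.Module):
    """Stand-in for a NeRF that renders a 4-channel 64x64 latent feature map."""
    def __init__(self):
        super().__init__()
        self.latent = nn.Parameter(torch.randn(1, 4, 64, 64) * 0.1)
    def forward(self):
        return self.latent          # a real model would volume-render this from a sampled camera

def predict_noise(z_noisy, t, prompt):   # stub for the latent diffusion U-Net
    return torch.randn_like(z_noisy)

nerf = LatentNeRFStub()
opt = torch.optim.Adam(nerf.parameters(), lr=1e-2)
alphas = torch.linspace(0.999, 0.01, 1000)     # illustrative cumulative noise schedule

for step in range(100):
    z = nerf()                                  # rendered latent "image"
    t = torch.randint(20, 980, (1,))
    a = alphas[t].view(1, 1, 1, 1)
    noise = torch.randn_like(z)
    z_noisy = a.sqrt() * z + (1 - a).sqrt() * noise
    eps_hat = predict_noise(z_noisy, t, "a ceramic teapot")
    grad = eps_hat - noise                      # score-distillation direction
    loss = (grad.detach() * z).sum()            # surrogate whose gradient w.r.t. z is `grad`
    opt.zero_grad()
    loss.backward()
    opt.step()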

Rethinking FUN: Frequency-Domain Utilization Networks

Dec 06, 2020
Kfir Goldberg, Stav Shapiro, Elad Richardson, Shai Avidan

The search for efficient neural network architectures has gained much attention in recent years, where modern architectures focus not only on accuracy but also on inference time and model size. Here, we present FUN, a family of novel Frequency-domain Utilization Networks. These networks utilize the inherent efficiency of the frequency domain by working directly in that domain, represented with the Discrete Cosine Transform. Using modern techniques and building blocks such as compound scaling and inverted-residual layers, we generate a set of such networks that allows one to balance between size, latency, and accuracy while outperforming competing RGB-based models. Extensive evaluations verify that our networks present strong alternatives to previous approaches. Moreover, we show that working in the frequency domain allows for dynamic compression of the input at inference time without any explicit change to the architecture.

* 9 pages, 7 figures 
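
A minimal sketch of the frequency-domain front end: split the image into 8x8 blocks, DCT each block, and feed the coefficient tensor (already spatially downsampled) to a compact network. The block size, channel counts, and classifier head are assumptions; only the DCT-input idea comes from the abstract.

import numpy as np
import torch
import torch.nn as nn
from scipy.fft import dctn

def to_dct_blocks(img, block=8):
    """8x8 block DCT of an HxWxC image -> (C*block*block, H/block, W/block) tensor."""
    h, w, c = img.shape
    img = img.reshape(h // block, block, w // block, block, c)
    img = img.transpose(0, 2, 4, 1, 3)                     # (H/b, W/b, C, b, b)
    coeffs = dctn(img, axes=(-2, -1), norm="ortho")
    coeffs = coeffs.reshape(h // block, w // block, -1)
    return torch.from_numpy(coeffs).permute(2, 0, 1).float()

# 224x224x3 RGB -> 28x28x192 coefficients, so the network sees far fewer spatial positions.
classifier = nn.Sequential(
    nn.Conv2d(192, 256, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(256, 1000),
)

x = to_dct_blocks(np.random.rand(224, 224, 3))
print(classifier(x.unsqueeze(0)).shape)    # torch.Size([1, 1000])

# One way to realize the dynamic compression mentioned in the abstract is to
# drop high-frequency coefficients at inference time, leaving the network unchanged.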

Encoding in Style: a StyleGAN Encoder for Image-to-Image Translation

Aug 03, 2020
Elad Richardson, Yuval Alaluf, Or Patashnik, Yotam Nitzan, Yaniv Azar, Stav Shapiro, Daniel Cohen-Or

We present a generic image-to-image translation framework, Pixel2Style2Pixel (pSp). Our pSp framework is based on a novel encoder network that directly generates a series of style vectors which are fed into a pretrained StyleGAN generator, forming the extended W+ latent space. We first show that our encoder can directly embed real images into W+, with no additional optimization. We further introduce a dedicated identity loss which is shown to achieve improved performance in the reconstruction of an input image. We demonstrate that pSp is a simple architecture that, by leveraging a well-trained, fixed generator network, can be easily applied to a wide range of image-to-image translation tasks. Solving these tasks through the style representation results in a global approach that does not rely on a local pixel-to-pixel correspondence and further supports multi-modal synthesis via the resampling of styles. Notably, we demonstrate that pSp can be trained to align a face image to a frontal pose without any labeled data, generate multi-modal results for ambiguous tasks such as conditional face generation from segmentation maps, and construct high-resolution images from corresponding low-resolution images.
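
The pSp pipeline reduces to an encoder that predicts one style vector per StyleGAN layer while the generator stays frozen. The backbone, vector count, and generator stub below are placeholders; a tiny CNN stands in for the paper's much larger encoder, and only the W+ interface reflects the abstract.

import torch
import torch.nn as nn

class PSPEncoderStub(nn.Module):
    """Maps an image to a set of style vectors in W+; a tiny CNN stands in
    for the much larger encoder used in the paper."""
    def __init__(self, n_styles=18, style_dim=512):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.to_styles = nn.Linear(128, n_styles * style_dim)
        self.n_styles, self.style_dim = n_styles, style_dim

    def forward(self, x):
        w_plus = self.to_styles(self.backbone(x))
        return w_plus.view(-1, self.n_styles, self.style_dim)

def frozen_stylegan(w_plus):                    # stub for the fixed, pretrained generator
    return torch.zeros(w_plus.shape[0], 3, 1024, 1024)

encoder = PSPEncoderStub()
w_plus = encoder(torch.randn(1, 3, 256, 256))   # (1, 18, 512): one style per generator input
image = frozen_stylegan(w_plus)                 # only the encoder's parameters are trained
print(w_plus.shape, image.shape)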

It's All About The Scale -- Efficient Text Detection Using Adaptive Scaling

Jul 28, 2019
Elad Richardson, Yaniv Azar, Or Avioz, Niv Geron, Tomer Ronen, Zach Avraham, Stav Shapiro

"Text can appear anywhere". This property requires us to carefully process all the pixels in an image in order to accurately localize all text instances. In particular, for the more difficult task of localizing small text regions, many methods use an enlarged image or even several rescaled ones as their input. This significantly increases the processing time of the entire image and needlessly enlarges background regions. If we were to have a prior telling us the coarse location of text instances in the image and their approximate scale, we could have adaptively chosen which regions to process and how to rescale them, thus significantly reducing the processing time. To estimate this prior we propose a segmentation-based network with an additional "scale predictor", an output channel that predicts the scale of each text segment. The network is applied on a scaled down image to efficiently approximate the desired prior, without processing all the pixels of the original image. The approximated prior is then used to create a compact image containing only text regions, resized to a canonical scale, which is fed again to the segmentation network for fine-grained detection. We show that our approach offers a powerful alternative to fixed scaling schemes, achieving an equivalent accuracy to larger input scales while processing far fewer pixels. Qualitative and quantitative results are presented on the ICDAR15 and ICDAR17 MLT benchmarks to validate our approach.

Unrestricted Facial Geometry Reconstruction Using Image-to-Image Translation

Sep 15, 2017
Matan Sela, Elad Richardson, Ron Kimmel

It has been recently shown that neural networks can recover the geometric structure of a face from a single given image. A common denominator of most existing face geometry reconstruction methods is the restriction of the solution space to some low-dimensional subspace. While such a model significantly simplifies the reconstruction problem, it is inherently limited in its expressiveness. As an alternative, we propose an Image-to-Image translation network that jointly maps the input image to a depth image and a facial correspondence map. This explicit pixel-based mapping can then be utilized to provide high quality reconstructions of diverse faces under extreme expressions, using a purely geometric refinement process. In the spirit of recent approaches, the network is trained only with synthetic data, and is then evaluated on in-the-wild facial images. Both qualitative and quantitative analyses demonstrate the accuracy and the robustness of our approach.

* To appear in ICCV 2017 
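
The network described above is a standard image-to-image translation model with a multi-channel output: a depth channel and a dense correspondence map. The toy encoder-decoder below only illustrates that interface; the actual architecture, training data, and the geometric refinement step are not reproduced.

import torch
import torch.nn as nn

class DepthCorrespondenceNet(nn.Module):
    """Predicts a depth map and a 3-channel correspondence map (coordinates
    on a template face) from an input face image. Purely illustrative."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 4, 4, stride=2, padding=1),
        )
    def forward(self, x):
        out = self.decoder(self.encoder(x))
        return out[:, :1], out[:, 1:]        # depth, correspondence

net = DepthCorrespondenceNet()
depth, corr = net(torch.randn(1, 3, 256, 256))
print(depth.shape, corr.shape)    # (1, 1, 256, 256), (1, 3, 256, 256)
# The predicted depth and correspondence maps would then feed a purely geometric
# refinement process to recover the final facial surface.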