Rohit Gandikota

Concept Sliders: LoRA Adaptors for Precise Control in Diffusion Models

Nov 27, 2023
Rohit Gandikota, Joanna Materzynska, Tingrui Zhou, Antonio Torralba, David Bau

We present a method to create interpretable concept sliders that enable precise control over attributes in image generations from diffusion models. Our approach identifies a low-rank parameter direction corresponding to one concept while minimizing interference with other attributes. A slider is created using a small set of prompts or sample images; thus slider directions can be created for either textual or visual concepts. Concept Sliders are plug-and-play: they can be composed efficiently and continuously modulated, enabling precise control over image generation. In quantitative experiments comparing against previous editing techniques, our sliders exhibit stronger targeted edits with lower interference. We showcase sliders for weather, age, styles, and expressions, as well as slider compositions. We show how sliders can transfer latents from StyleGAN for intuitive editing of visual concepts for which textual description is difficult. We also find that our method can help address persistent quality issues in Stable Diffusion XL including repair of object deformations and fixing distorted hands. Our code, data, and trained sliders are available at https://sliders.baulab.info/
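The low-rank slider idea above can be illustrated with a toy sketch (not the authors' code): a slider is a low-rank update scale * (B @ A) added to a base weight matrix, so several sliders compose by addition and each scale can be varied continuously. The matrices, shapes, and function names here are illustrative assumptions.

```python
def matmul(X, Y):
    """Plain-Python matrix product for small illustrative matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def apply_sliders(W, sliders):
    """Add each low-rank update scale * (B @ A) to the base weights W.

    sliders: list of (scale, B, A) triples; multiple sliders compose by
    simple addition, and each scale can be modulated continuously.
    """
    out = [row[:] for row in W]
    for scale, B, A in sliders:
        delta = matmul(B, A)
        for i in range(len(out)):
            for j in range(len(out[0])):
                out[i][j] += scale * delta[i][j]
    return out

# A 2x2 base weight and one rank-1 slider direction (invented numbers).
W = [[1.0, 0.0], [0.0, 1.0]]
B = [[1.0], [0.0]]    # d_out x r, with rank r = 1
A = [[0.0, 1.0]]      # r x d_in
W_half = apply_sliders(W, [(0.5, B, A)])   # slider at strength 0.5
```

Because the update is additive, dialing the scale from negative to positive sweeps the edit continuously, and two sliders with opposite scales cancel exactly.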

Unified Concept Editing in Diffusion Models

Aug 25, 2023
Rohit Gandikota, Hadas Orgad, Yonatan Belinkov, Joanna Materzyńska, David Bau

Text-to-image models suffer from various safety issues that may limit their suitability for deployment. Previous methods have separately addressed individual issues of bias, copyright, and offensive content in text-to-image models. However, in the real world, all of these issues appear simultaneously in the same model. We present a method that tackles all issues with a single approach. Our method, Unified Concept Editing (UCE), edits the model without training using a closed-form solution, and scales seamlessly to concurrent edits on text-conditional diffusion models. We demonstrate scalable simultaneous debiasing, style erasure, and content moderation by editing text-to-image projections, and we present extensive experiments demonstrating improved efficacy and scalability over prior work. Our code is available at https://unified.baulab.info
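A closed-form linear edit of the kind UCE builds on can be sketched numerically. This is an assumption-laden toy, not the paper's exact formulation: we pick a new 1x2 weight row so that edited keys map to new target values while preservation keys keep their old outputs, via the normal-equations solution W_new = (sum v k^T)(sum k k^T)^{-1} over both sets of pairs.

```python
def outer(u, v):
    return [[ui * vj for vj in v] for ui in u]

def add(M, N):
    return [[a + b for a, b in zip(r1, r2)] for r1, r2 in zip(M, N)]

def inv2(M):
    """Inverse of a 2x2 matrix."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[ M[1][1] / det, -M[0][1] / det],
            [-M[1][0] / det,  M[0][0] / det]]

def closed_form_edit(pairs):
    """pairs: list of (key, value) with 2-D keys and scalar targets.

    Returns the 1x2 weight row minimizing sum ||W k - v||^2 in closed
    form -- no gradient training involved.
    """
    num = [0.0, 0.0]
    C = [[0.0, 0.0], [0.0, 0.0]]
    for k, v in pairs:
        num[0] += v * k[0]
        num[1] += v * k[1]
        C = add(C, outer(k, k))
    Cinv = inv2(C)
    return [num[0] * Cinv[0][0] + num[1] * Cinv[1][0],
            num[0] * Cinv[0][1] + num[1] * Cinv[1][1]]

# Two edit pairs alone are satisfied exactly:
W_edit = closed_form_edit([([1.0, 0.0], 2.0), ([0.0, 1.0], -1.0)])
# Adding a preservation pair (key -> its old output 2.0 under W_old=[1,1])
# trades off the edit against preserving that direction:
W_mix = closed_form_edit([([1.0, 0.0], 2.0), ([0.0, 1.0], -1.0),
                          ([1.0, 1.0], 2.0)])
```

The key property mirrored here is that the solution is computed in one shot from accumulated key/value statistics, which is why such edits scale to many concurrent concepts without retraining.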

Erasing Concepts from Diffusion Models

Mar 16, 2023
Rohit Gandikota, Joanna Materzynska, Jaden Fiotto-Kaufman, David Bau

Motivated by recent advancements in text-to-image diffusion, we study erasure of specific concepts from the model's weights. While Stable Diffusion has shown promise in producing explicit or realistic artwork, it has raised concerns regarding its potential for misuse. We propose a fine-tuning method that can erase a visual concept from a pre-trained diffusion model, given only the name of the style and using negative guidance as a teacher. We benchmark our method against previous approaches that remove sexually explicit content and demonstrate its effectiveness, performing on par with Safe Latent Diffusion and censored training. To evaluate artistic style removal, we conduct experiments erasing five modern artists from the network and conduct a user study to assess the human perception of the removed styles. Unlike previous methods, our approach can remove concepts from a diffusion model permanently rather than modifying the output at inference time, so it cannot be circumvented even if a user has access to model weights. Our code, data, and results are available at https://erasing.baulab.info/
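The negative-guidance teacher can be sketched with made-up numbers: the frozen model's unconditional and concept-conditioned noise predictions define a training target that points the edited model's prediction for concept c away from the concept direction, target = eps(x, "") - eta * (eps(x, c) - eps(x, "")). The variable names and values below are illustrative assumptions.

```python
def negative_guidance_target(eps_uncond, eps_cond, eta):
    """Per-element target noise for the fine-tuned (student) model."""
    return [u - eta * (c - u) for u, c in zip(eps_uncond, eps_cond)]

def mse(pred, target):
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

# Frozen teacher predictions on the same noised image x_t (invented):
eps_uncond = [0.10, -0.20]   # eps(x_t, "")  unconditional prediction
eps_cond   = [0.50,  0.10]   # eps(x_t, c)   concept-conditioned prediction
target = negative_guidance_target(eps_uncond, eps_cond, eta=1.0)

# Fine-tuning would minimize the MSE between the student's conditional
# prediction and this target; here the student trivially predicts zeros.
loss = mse([0.0, 0.0], target)
```

Because the target is baked into the weights by fine-tuning rather than applied at sampling, the erasure persists even with full weight access.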

DC-Art-GAN: Stable Procedural Content Generation using DC-GANs for Digital Art

Sep 06, 2022
Rohit Gandikota, Nik Bear Brown

Digital art uses digital technologies as part of the generative or creative process. With the advent of digital currency and NFTs (Non-Fungible Tokens), demand for digital art is growing rapidly. In this manuscript, we advocate using deep generative networks with adversarial training for stable and varied art generation. The work focuses on the Deep Convolutional Generative Adversarial Network (DC-GAN) and explores techniques to address common pitfalls in GAN training. We compare various DC-GAN architectures and designs to arrive at a recommendable design choice for stable and realistic generation. The main goal of the work is to generate realistic images that do not exist in reality but are synthesised from random noise by the proposed model. We provide visual results of generated animal-face images (some showing a blend of species) along with recommendations for training, architecture, and design choices. We also show how preprocessing the training images plays a major role in GAN training.
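The adversarial training scheme underlying DC-GAN can be shown with a deliberately tiny sketch (not the paper's model): a one-parameter generator G(z) = theta + z tries to match real samples near 4.0, while a logistic discriminator D(x) = sigmoid(a*x + b) learns to tell real from fake. The alternating updates and the non-saturating generator objective are the standard GAN recipe; all numbers are invented.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

random.seed(0)
theta, a, b = 0.0, 0.1, 0.0   # generator / discriminator parameters
lr = 0.05

for _ in range(2000):
    real = 4.0 + random.gauss(0.0, 0.1)
    fake = theta + random.gauss(0.0, 0.1)

    # Discriminator step: gradient ascent on log D(real) + log(1 - D(fake))
    d_real, d_fake = sigmoid(a * real + b), sigmoid(a * fake + b)
    a += lr * ((1 - d_real) * real - d_fake * fake)
    b += lr * ((1 - d_real) - d_fake)

    # Generator step: ascent on the non-saturating objective log D(fake)
    d_fake = sigmoid(a * fake + b)
    theta += lr * (1 - d_fake) * a
```

After training, theta drifts toward the real-data mean: the generator's samples become indistinguishable from real ones under this discriminator. The training pitfalls the abstract mentions (mode collapse, unbalanced updates) show up in exactly this alternating loop when it is scaled to deep convolutional networks.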

Computer Vision for Autonomous Vehicles

Dec 06, 2018
Rohit Gandikota

In this work, we apply image processing techniques to autonomous vehicles, both indoor and outdoor. The challenges of the two settings are different, and the ways to tackle them vary as well. We also show that deep learning makes these tasks easier and more precise. We built baseline models for all the problems we tackled while building an autonomous car for the Indian Institute of Space Science and Technology.

How You See Me

Nov 20, 2018
Rohit Gandikota, Deepak Mishra

Convolutional Neural Networks are among the most powerful tools in the present era of science. Much research has gone into improving their performance and robustness, while their internal workings remain largely unexplored. They are often described as black boxes that can map non-linear data very effectively. This paper shows how a CNN has learned to look at an image. The proposed algorithm exploits the basic mathematics of a CNN to trace back the pixels most important to its prediction. It is a simple algorithm that requires no training of its own; it operates directly on a pre-trained CNN classifier.
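The idea of tracing a prediction back to important pixels can be sketched on a tiny two-layer ReLU network (the network, weights, and "image" below are invented, and the gradient is taken by finite differences rather than the paper's own backtracking rule): a pixel's importance is the magnitude of the class score's sensitivity to that pixel.

```python
def relu(z):
    return z if z > 0 else 0.0

def score(x, W1, v):
    """Class score of a tiny two-layer ReLU network."""
    hidden = [relu(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    return sum(vj * hj for vj, hj in zip(v, hidden))

def pixel_saliency(x, W1, v, eps=1e-4):
    """|d score / d x_i| per pixel, via central finite differences."""
    sal = []
    for i in range(len(x)):
        hi = x[:]; hi[i] += eps
        lo = x[:]; lo[i] -= eps
        sal.append(abs(score(hi, W1, v) - score(lo, W1, v)) / (2 * eps))
    return sal

x  = [1.0, 2.0, 0.5]                          # an "image" of three pixels
W1 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]       # pixel 2 feeds no unit
v  = [1.0, -1.0]
sal = pixel_saliency(x, W1, v)
```

Pixel 2 connects to no hidden unit, so its saliency is zero, while the two connected pixels receive nonzero importance: the map highlights exactly the inputs the network actually uses, with no extra training.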
