"Image To Image Translation": models, code, and papers

Wavelet Knowledge Distillation: Towards Efficient Image-to-Image Translation

Mar 12, 2022
Linfeng Zhang, Xin Chen, Xiaobing Tu, Pengfei Wan, Ning Xu, Kaisheng Ma

Remarkable achievements have been attained with Generative Adversarial Networks (GANs) in image-to-image translation. However, due to their tremendous number of parameters, state-of-the-art GANs usually suffer from low efficiency and bulky memory usage. To tackle this challenge, this paper first investigates GAN performance from a frequency perspective. The results show that GANs, especially small GANs, lack the ability to generate high-quality high-frequency information. To address this problem, we propose a novel knowledge distillation method referred to as wavelet knowledge distillation. Instead of directly distilling the generated images of the teacher, wavelet knowledge distillation first decomposes the images into different frequency bands with the discrete wavelet transform and then distills only the high-frequency bands. As a result, the student GAN can pay more attention to learning the high-frequency bands. Experiments demonstrate that our method achieves 7.08x compression and 6.80x acceleration on CycleGAN with almost no performance drop. Additionally, we study the relation between discriminators and generators, which shows that compressing the discriminator can promote the performance of a compressed generator.
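
As a rough illustration of the idea, the sketch below decomposes teacher and student outputs with a one-level Haar transform and matches only the three high-frequency subbands. The Haar filters and L1 matching are illustrative assumptions, not the paper's exact configuration.

```python
# A minimal sketch of high-frequency-only distillation (assumed Haar + L1).
import torch
import torch.nn.functional as F

def haar_dwt(x):
    """One-level 2D Haar DWT. x: (B, C, H, W) with even H, W.
    Returns (LL, LH, HL, HH), each of shape (B, C, H/2, W/2)."""
    a = x[..., 0::2, 0::2]
    b = x[..., 0::2, 1::2]
    c = x[..., 1::2, 0::2]
    d = x[..., 1::2, 1::2]
    ll = (a + b + c + d) / 2
    lh = (a + b - c - d) / 2
    hl = (a - b + c - d) / 2
    hh = (a - b - c + d) / 2
    return ll, lh, hl, hh

def wavelet_distill_loss(student_img, teacher_img):
    """Match only the high-frequency subbands (LH, HL, HH); the low-frequency
    LL band is deliberately left out, per the abstract."""
    _, s_lh, s_hl, s_hh = haar_dwt(student_img)
    _, t_lh, t_hl, t_hh = haar_dwt(teacher_img.detach())
    return (F.l1_loss(s_lh, t_lh) + F.l1_loss(s_hl, t_hl)
            + F.l1_loss(s_hh, t_hh))
```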

* Accepted by CVPR 2022

Energy-guided Entropic Neural Optimal Transport

Apr 12, 2023
Petr Mokrov, Alexander Korotin, Evgeny Burnaev

Energy-Based Models (EBMs) have been known in the machine learning community for decades. Since the seminal works on EBMs in the 2000s, many efficient methods have appeared that solve the generative modelling problem by means of energy potentials (unnormalized likelihood functions). In contrast, the realm of Optimal Transport (OT), and neural OT solvers in particular, is much less explored and limited to a few recent works (excluding WGAN-based approaches, which use OT as a loss function and do not model OT maps themselves). In our work, we bridge the gap between EBMs and entropy-regularized OT. We present a novel methodology that utilizes recent developments and technical improvements of the former to enrich the latter. We validate the applicability of our method on toy 2D scenarios as well as standard unpaired image-to-image translation problems. For the sake of simplicity, we choose simple short- and long-run EBMs as the backbone of our energy-guided entropic OT method, leaving the application of more sophisticated EBMs to future research.
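
For a concrete picture of the short-run EBM backbone mentioned above, the sketch below runs a few steps of Langevin dynamics on an energy potential. The step size, step count, and the energy network itself are illustrative assumptions, not the paper's settings.

```python
# A minimal sketch of short-run Langevin sampling from exp(-E(x)).
import torch

def short_run_langevin(energy_net, x_init, n_steps=20, step_size=0.01):
    """Draw approximate samples from the density exp(-E(x)), starting
    from x_init, via a fixed (short) number of Langevin steps."""
    x = x_init.clone().requires_grad_(True)
    for _ in range(n_steps):
        grad = torch.autograd.grad(energy_net(x).sum(), x)[0]
        noise = torch.randn_like(x)
        # Langevin update: drift down the energy gradient plus Gaussian noise.
        x = (x - 0.5 * step_size * grad
             + (step_size ** 0.5) * noise).detach().requires_grad_(True)
    return x.detach()
```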

NPR: Nocturnal Place Recognition in Street

Apr 01, 2023
Bingxi Liu, Yujie Fu, Feng Lu, Jinqiang Cui, Yihong Wu, Hong Zhang

Visual Place Recognition (VPR) is the task of retrieving images similar to a query photo from a large database of known images. In real-world applications, extreme illumination changes caused by query images taken at night pose a significant obstacle that VPR needs to overcome. However, no training set with day-night correspondence exists for city-scale, street-level VPR. To address this challenge, we propose a novel pipeline that divides VPR and conquers Nocturnal Place Recognition (NPR). Specifically, we first establish a street-level day-night dataset, NightStreet, and use it to train an unpaired image-to-image translation model. We then use this model to process existing large-scale VPR datasets, generating the VPR-Night datasets, and demonstrate how to combine them with two popular VPR pipelines. Finally, we propose a divide-and-conquer VPR framework and provide explanations at the theoretical, experimental, and application levels. Under our framework, previous methods, including the top-ranked one, significantly improve performance on two public datasets.
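
The data-generation step of this pipeline reduces to running the trained day-to-night translator over an existing daytime database. A minimal sketch, with `translator` standing in for the unpaired image-to-image model trained on NightStreet:

```python
# A minimal sketch of building a synthetic nighttime VPR database.
import torch

@torch.no_grad()
def build_vpr_night(translator, day_loader, device="cuda"):
    """Translate every daytime database image to a synthetic night version."""
    night_images = []
    for day_batch in day_loader:             # batches of daytime photos
        day_batch = day_batch.to(device)
        night_images.append(translator(day_batch).cpu())
    return torch.cat(night_images)           # the "VPR-Night" counterpart
```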

* 10 pages, 6 figures 

Retrieval Guided Unsupervised Multi-domain Image-to-Image Translation

Aug 11, 2020
Raul Gomez, Yahui Liu, Marco De Nadai, Dimosthenis Karatzas, Bruno Lepri, Nicu Sebe

Image-to-image translation aims to learn a mapping that transforms an image from one visual domain to another. Recent works assume that image descriptors can be disentangled into a domain-invariant content representation and a domain-specific style representation. Thus, translation models seek to preserve the content of source images while changing the style to a target visual domain. However, synthesizing new images is extremely challenging, especially in multi-domain translation, as the network has to compose content and style to generate reliable and diverse images in multiple domains. In this paper, we propose the use of an image retrieval system to assist the image-to-image translation task. First, we train an image-to-image translation model to map images to multiple domains. Then, we train an image retrieval model, using real and generated images, to find images similar in content to a query image but from a different domain. Finally, we exploit the image retrieval system to fine-tune the image-to-image translation model and generate higher-quality images. Our experiments show the effectiveness of the proposed solution and highlight the contribution of the retrieval network, which can benefit from additional unlabeled data and help image-to-image translation models in the presence of scarce data.
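
A minimal sketch of the fine-tuning signal described above, assuming the retrieval network is an embedding encoder and a bank of precomputed target-domain embeddings is available. The nearest-neighbour pull used here is an illustrative stand-in for the paper's exact objective.

```python
# A minimal sketch of retrieval-guided fine-tuning (assumed interfaces).
import torch
import torch.nn.functional as F

def retrieval_guided_loss(generator, retrieval_encoder, x_src, target_bank):
    """target_bank: (N, D) embeddings of real target-domain images."""
    fake = generator(x_src)                       # translated image
    q = retrieval_encoder(fake)                   # (B, D) query embeddings
    sims = F.normalize(q, dim=1) @ F.normalize(target_bank, dim=1).t()
    nearest = sims.argmax(dim=1)                  # closest real image per query
    # Pull each translation's embedding towards its retrieved real neighbour.
    return F.mse_loss(q, target_bank[nearest].detach())
```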

* Submitted to ACM MM '20, October 12-16, 2020, Seattle, WA, USA 

Delta Denoising Score

Apr 14, 2023
Amir Hertz, Kfir Aberman, Daniel Cohen-Or

We introduce Delta Denoising Score (DDS), a novel scoring function for text-based image editing that guides minimal modifications of an input image towards the content described in a target prompt. DDS leverages the rich generative prior of text-to-image diffusion models and can be used as a loss term in an optimization problem to steer an image towards a direction dictated by a text prompt. DDS utilizes the Score Distillation Sampling (SDS) mechanism for the purpose of image editing. We show that using only SDS often produces non-detailed and blurry outputs due to noisy gradients. To address this issue, DDS uses a prompt that matches the input image to identify and remove undesired erroneous directions of SDS. Our key premise is that SDS should be zero when calculated on pairs of matched prompts and images; hence, if the score is non-zero, its gradients can be attributed to the erroneous component of SDS. Our analysis demonstrates the competence of DDS for text-based image-to-image translation. We further show that DDS can be used to train an effective zero-shot image translation model. Experimental results indicate that DDS outperforms existing methods in terms of stability and quality, highlighting its potential for real-world applications in text-based image editing.
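
A minimal sketch of the delta idea, written against a diffusers-style UNet and scheduler (an assumption, not the authors' code): the noise prediction conditioned on the matched source prompt is subtracted from the one conditioned on the target prompt, cancelling the shared erroneous component.

```python
# A minimal sketch of a DDS-style gradient (diffusers-style API assumed).
import torch

def dds_grad(unet, scheduler, z, emb_src, emb_tgt):
    """z: image latents; emb_src/emb_tgt: source/target text embeddings."""
    t = torch.randint(0, scheduler.config.num_train_timesteps,
                      (z.shape[0],), device=z.device)
    noise = torch.randn_like(z)
    z_t = scheduler.add_noise(z, noise, t)        # shared noised latent
    with torch.no_grad():
        eps_tgt = unet(z_t, t, encoder_hidden_states=emb_tgt).sample
        eps_src = unet(z_t, t, encoder_hidden_states=emb_src).sample
    # Each SDS gradient is (eps_pred - noise); the added noise and the
    # erroneous component cancel in the difference of the two predictions.
    return eps_tgt - eps_src
```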

* Project page: https://delta-denoising-score.github.io/ 

Zero-Shot Contrastive Loss for Text-Guided Diffusion Image Style Transfer

Mar 15, 2023
Serin Yang, Hyunmin Hwang, Jong Chul Ye

Diffusion models have shown great promise in text-guided image style transfer, but there is a trade-off between style transformation and content preservation due to their stochastic nature. Existing methods require computationally expensive fine-tuning of diffusion models or additional neural networks. To address this, we propose a zero-shot contrastive loss for diffusion models that requires neither additional fine-tuning nor auxiliary networks. By leveraging a patch-wise contrastive loss between generated samples and original image embeddings in the pre-trained diffusion model, our method can generate images with the same semantic content as the source image in a zero-shot manner. Our approach outperforms existing methods while preserving content and requiring no additional training, not only for image style transfer but also for image-to-image translation and manipulation. Our experimental results validate the effectiveness of the proposed method.
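
A minimal sketch of a patch-wise contrastive (InfoNCE-style) loss of the kind described, where features at matching patch locations are positives and all other patches are negatives; extracting these features from the pre-trained diffusion model is left abstract here.

```python
# A minimal sketch of a patch-wise contrastive loss (assumed feature inputs).
import torch
import torch.nn.functional as F

def patch_nce_loss(feats_gen, feats_src, tau=0.07):
    """feats_*: (num_patches, D) features at matching spatial locations,
    e.g. taken from intermediate diffusion-model activations."""
    q = F.normalize(feats_gen, dim=1)
    k = F.normalize(feats_src, dim=1)
    logits = q @ k.t() / tau                  # (P, P) patch similarity matrix
    labels = torch.arange(q.size(0), device=q.device)
    return F.cross_entropy(logits, labels)    # positives lie on the diagonal
```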

Contrastive Learning for Unsupervised Image-to-Image Translation

May 07, 2021
Hanbit Lee, Jinseok Seol, Sang-goo Lee

Image-to-image translation aims to learn a mapping between different groups of visually distinguishable images. While recent methods have shown an impressive ability to change even the intricate appearance of images, they still rely on domain labels to train a model to distinguish between distinct visual features. Such dependency on labels often significantly limits the scope of applications, since consistent and high-quality labels are expensive. Instead, we wish to capture visual features from the images themselves and apply them to enable realistic translation without human-generated labels. To this end, we propose an unsupervised image-to-image translation method based on contrastive learning. The key idea is to learn a discriminator that differentiates between distinctive styles and let the discriminator supervise a generator to transfer those styles across images. During training, we randomly sample a pair of images and train the generator to change the appearance of one towards the other while keeping the original structure. Experimental results show that our method outperforms the leading unsupervised baselines in terms of visual quality and translation accuracy.
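
A minimal sketch of the described generator update, assuming the generator takes a style reference and the style discriminator exposes an adversarial loss; both interfaces are placeholders, since the abstract only summarizes the objective.

```python
# A minimal sketch of one generator step (all interfaces are assumptions).
import torch

def generator_step(generator, style_disc, x, y, g_opt):
    """x, y: a randomly sampled image pair; x keeps its structure while
    taking on y's appearance."""
    fake = generator(x, style_ref=y)          # x's structure, y's style
    # Placeholder: the discriminator scores whether `fake` carries y's style.
    g_loss = style_disc.adversarial_loss(fake, style_of=y)
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
    return fake
```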

Discovering Novel Biological Traits From Images Using Phylogeny-Guided Neural Networks

Jun 05, 2023
Mohannad Elhamod, Mridul Khurana, Harish Babu Manogaran, Josef C. Uyeda, Meghan A. Balk, Wasila Dahdul, Yasin Bakış, Henry L. Bart Jr., Paula M. Mabee, Hilmar Lapp, James P. Balhoff, Caleb Charpentier, David Carlyn, Wei-Lun Chao, Charles V. Stewart, Daniel I. Rubenstein, Tanya Berger-Wolf, Anuj Karpatne

Discovering evolutionary traits that are heritable across species on the tree of life (also referred to as a phylogenetic tree) is of great interest to biologists to understand how organisms diversify and evolve. However, the measurement of traits is often a subjective and labor-intensive process, making trait discovery a highly label-scarce problem. We present a novel approach for discovering evolutionary traits directly from images without relying on trait labels. Our proposed approach, Phylo-NN, encodes the image of an organism into a sequence of quantized feature vectors -- or codes -- where different segments of the sequence capture evolutionary signals at varying ancestry levels in the phylogeny. We demonstrate the effectiveness of our approach in producing biologically meaningful results in a number of downstream tasks including species image generation and species-to-species image translation, using fish species as a target example.
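
A minimal sketch of the quantized-code representation, assuming a VQ-style nearest-neighbour codebook lookup and fixed segment sizes per ancestry level; both are illustrative, as Phylo-NN's exact quantizer is not described in the abstract.

```python
# A minimal sketch of segment-wise code extraction (assumed VQ lookup).
import torch

def quantize_to_segments(features, codebook, segment_sizes):
    """features: (L, D) per-position encoder embeddings; codebook: (K, D);
    segment_sizes: list of segment lengths, one per ancestry level."""
    dists = torch.cdist(features, codebook)         # (L, K) distances
    codes = dists.argmin(dim=1)                     # nearest codebook entry
    return list(torch.split(codes, segment_sizes))  # one chunk per level
```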

Semi-Supervised Image-to-Image Translation

Jan 24, 2019
Manan Oza, Himanshu Vaghela, Sudhir Bagul

Image-to-image translation is a long-established and difficult problem in computer vision. In this paper we propose an adversarial model for image-to-image translation. Conventional deep neural-network-based methods perform image-to-image translation by comparing Gram matrices and using image segmentation, which requires human intervention. Our generative adversarial network-based model works on a conditional probability approach. This approach makes the translation independent of any local, global, content, or style features. In our approach, we use a bidirectional reconstruction model appended with an affine transform factor that helps conserve content and photorealism compared to other models. The advantage of this approach is that the image-to-image translation is semi-supervised, independent of image segmentation, and inherits the properties of generative adversarial networks, which tend to produce realistic images. This method has proven to produce better results than Multimodal Unsupervised Image-to-Image Translation.

The Swiss Army Knife for Image-to-Image Translation: Multi-Task Diffusion Models

Apr 06, 2022
Julia Wolleb, Robin Sandkühler, Florentin Bieder, Philippe C. Cattin

Recently, diffusion models have been applied to a wide range of image analysis tasks. We build on a method for image-to-image translation using denoising diffusion implicit models and include a regression problem and a segmentation problem for guiding the image generation towards the desired output. The main advantage of our approach is that the guidance during the denoising process comes from an external gradient. Consequently, the diffusion model does not need to be retrained for the different tasks on the same dataset. We apply our method to simulate the aging process on facial photos using a regression task, as well as on a brain magnetic resonance (MR) imaging dataset to simulate brain tumor growth. Furthermore, we use a segmentation model to inpaint tumors at a desired location in healthy slices of brain MR images. We achieve convincing results for all problems.
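
A minimal sketch of guidance by an external gradient, under simplifying assumptions: the denoiser exposes a clean-image estimate, a frozen task network scores it, and the gradient of that score steers the step. The full DDIM update is abbreviated here, and `guidance_scale` is an assumed hyperparameter.

```python
# A minimal sketch of one externally-guided denoising step (assumptions noted).
import torch

def guided_step(x_t, t, denoiser, task_loss_fn, guidance_scale=1.0):
    """task_loss_fn: e.g. a regression loss towards a target age, or a
    segmentation loss towards a target mask, computed by a frozen network."""
    x_t = x_t.detach().requires_grad_(True)
    x0_pred = denoiser(x_t, t)                 # model's clean-image estimate
    loss = task_loss_fn(x0_pred)               # external task score
    grad = torch.autograd.grad(loss, x_t)[0]   # external gradient w.r.t. x_t
    # Nudge the estimate against the task gradient; no retraining needed.
    return (x0_pred - guidance_scale * grad).detach()
```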
