Is This Loss Informative? Speeding Up Textual Inversion with Deterministic Objective Evaluation

Feb 09, 2023
Anton Voronov, Mikhail Khoroshikh, Artem Babenko, Max Ryabinin

Text-to-image generation models represent the next step of evolution in image synthesis, offering natural means of flexible yet fine-grained control over the result. One emerging area of research is the rapid adaptation of large text-to-image models to smaller datasets or new visual concepts. However, the most efficient method of adaptation, called textual inversion, has a known limitation of long training time, which both restricts practical applications and increases the experiment time for research. In this work, we study the training dynamics of textual inversion, aiming to speed it up. We observe that most concepts are learned at early stages and do not improve in quality later, but standard model convergence metrics fail to indicate this. Instead, we propose a simple early stopping criterion that only requires computing the textual inversion loss on the same inputs for all training iterations. Our experiments on both Latent Diffusion and Stable Diffusion models across 93 concepts demonstrate the competitive performance of our method, speeding up adaptation by up to 15 times with no significant drop in quality.

Code: https://github.com/yandex-research/DVAR (12 pages, 11 figures)
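
The criterion is described above only at a high level: the textual inversion loss is re-evaluated on one fixed, deterministic batch at every iteration, and training stops once that loss curve flattens. The following is a minimal, hypothetical Python sketch of such a check, not the authors' actual implementation (see the repository linked above); the variance-ratio test, the window size, and all names (should_stop, eval_loss, fixed_batch) are illustrative assumptions.

    import torch

    def should_stop(loss_history, window=50, rel_threshold=0.1):
        # Hypothetical early-stopping check. Because the loss is evaluated on
        # the same frozen inputs every iteration, the curve flattens once the
        # concept is learned; we detect the plateau when the variance of the
        # most recent window is small relative to the variance so far.
        if len(loss_history) < 2 * window:
            return False
        losses = torch.tensor(loss_history)
        recent_var = losses[-window:].var()
        total_var = losses.var()
        return bool(recent_var / total_var < rel_threshold)

    # Usage sketch: `training_step`, `eval_loss`, and `fixed_batch` stand in
    # for one textual-inversion optimization step, the diffusion loss, and a
    # batch whose images, noise, and timesteps are sampled once and frozen.
    #
    # loss_history = []
    # for step in range(max_steps):
    #     training_step()
    #     with torch.no_grad():
    #         loss_history.append(float(eval_loss(fixed_batch)))
    #     if should_stop(loss_history):
    #         break

The point of freezing the evaluation inputs is that it removes the sampling noise which, per the abstract, makes the standard stochastic training loss uninformative about convergence; on a deterministic batch even a simple plateau test like the one above can tell when to stop.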